text,summary " The ratio of $K_S K_S$ ($K_L K_L$) and $K_S K_L$ production rates is calculated by considering $K^0 - \bar{K}^0$ oscillation in $J/\psi \to K^0\bar{K}^0$ decay. The theoretical uncertainty due to the strong interaction in $J/\psi$ decay is completely canceled in the ratio; therefore, the absolute branching fractions of the \CP violating processes $J/\psi \to K_S K_S$ and $K_L K_L$ can be cleanly and model-independently determined, provided that the $J/\psi \to K_S K_L$ decay is precisely measured. At a future $\tau$-Charm factory, the sensitivity needed to observe the \CP violating process $J/\psi \to K_S K_S$ should be reached. It is important to measure the $J/\psi \to K_S K_S$ and $K_S K_L$ decays simultaneously, so that many systematic errors will cancel. More precise measurements are suggested to examine the predicted isospin relation in $J/\psi \to K\bar{K}$ decays. All results can be extended to decays of other vector quarkonia, such as $\phi$, $\psi(2S)$ and $\Upsilon(1S)$. ","Clean Prediction of \CP violating processes $\psi$, $\phi$ and $\Upsilon(1S)$ decays to KsKs and KLKL" " With the growth of 5G, Internet of Things (IoT), edge computing and cloud computing technologies, the infrastructure (compute and network) available to emerging applications (AR/VR, autonomous driving, industry 4.0, etc.) has become quite complex. There are multiple tiers of computing (IoT devices, near edge, far edge, cloud, etc.) that are connected with different types of networking technologies (LAN, LTE, 5G, MAN, WAN, etc.). Deployment and management of applications in such an environment is quite challenging. In this paper, we propose ROMA, which performs resource orchestration for microservices-based 5G applications in a dynamic, heterogeneous, multi-tiered compute and network fabric. We assume that only application-level requirements are known, and the detailed requirements of the individual microservices in the application are not specified. 
As part of our solution, ROMA identifies and leverages the coupling relationship between compute and network usage for various microservices and solves an optimization problem in order to appropriately identify how each microservice should be deployed in the complex, multi-tiered compute and network fabric, so that the end-to-end application requirements are optimally met. We implemented two real-world 5G applications in the video surveillance and intelligent transportation system (ITS) domains. Through extensive experiments, we show that ROMA is able to save up to 90%, 55% and 44% compute and up to 80%, 95% and 75% network bandwidth for the surveillance (watchlist) and transportation applications (person and car detection), respectively. This improvement is achieved while honoring the application performance requirements, and it is measured against an alternative scheme that employs a static, overprovisioned resource allocation strategy and ignores the resource coupling relationships. ",ROMA: Resource Orchestration for Microservices-based 5G Applications " A nanoscale non-contact electrical measurement technique has been developed based on Auger electron spectroscopy. This approach exploits a special property of Auger electrons, namely that they are self-generated and free from external influences, to overcome the technical limitations of conventional measurements. Detection of the intrinsic local charge and internal electric field of nanostructured materials was achieved with a resolution below 10 nm. As an example, the electrical properties at the GaN/AlGaN/GaN nanointerfaces were characterized. The concentration of the intrinsic polarization sheet charges embedded in the GaN/AlGaN nanointerfacial layers was accurately determined to be -4.4 e/nm^2. The mapping of the internal electric field across the nanointerface revealed the actual energy band configuration at the early stage of the formation of the two-dimensional electron gas. 
",Non-contact electrical detection of intrinsic local charge and internal electric field at nanointerfaces " In this paper a wide family of identifying codes over regular Cayley graphs of degree four built over finite Abelian groups is presented. Some of the codes in this construction are also perfect. The graphs considered include some well-known graphs such as tori, twisted tori and Kronecker products of two cycles. Therefore, the codes can be used for identification in these graphs. Finally, an example of how these codes can be applied for adaptive identification over these graphs is presented. ",Identifying Codes of Degree 4 Cayley Graphs over Abelian Groups " A cubic polynomial $f$ with a periodic Siegel disk containing an eventual image of a critical point is said to be a \emph{Siegel capture polynomial}. If the Siegel disk is invariant, we call $f$ an \emph{IS-capture polynomial} (or just an IS-capture; IS stands for Invariant Siegel). We study the location of IS-capture polynomials in the parameter space of all cubic polynomials and show that any IS-capture is on the boundary of a unique hyperbolic component determined by the rational lamination of the map. We also relate IS-captures to the cubic Principal Hyperbolic Domain and its closure (by definition, the \emph{cubic Principal Hyperbolic Domain} consists of cubic hyperbolic polynomials with Jordan curve Julia sets) and prove that, in the slice of cubic polynomials given by a fixed multiplier at one of the fixed points, the closure of the cubic Principal Hyperbolic Domain can only have bounded complementary domains $U$ such that (1) critical points of $f\in U$ are distinct and belong to $J(f)$, and (2) $J(f)$ has positive Lebesgue measure and carries an invariant line field. ",Location of Siegel capture polynomials in parameter spaces We study the single production of excited spin-3/2 and spin-1/2 leptons in future high energy $e^+e^-$ collisions. 
We calculate the production cross sections and decay widths of excited spin-3/2 and spin-1/2 leptons according to their effective currents. We show that these possible new excited states can be probed up to a mass $m^* \sim \sqrt{s}$ depending on their couplings to leptons and gauge bosons. We present the angular distributions of final state particles as a measure to discriminate between an excited spin-3/2 and spin-1/2 lepton signal. The signals and the corresponding backgrounds are studied in detail to obtain attainable limits on the masses and couplings of excited leptons at future linear colliders. ,Search for excited spin-3/2 and spin-1/2 leptons at linear colliders " Due to the linearity of quantum mechanics, it remains a challenge to design quantum generative machine learning models that embed non-linear activations into the evolution of the statevector. However, some of the most successful classical generative models, such as those based on neural networks, involve highly non-linear dynamics for quality training. In this paper, we explore the effect of these dynamics in quantum generative modeling by introducing a model that adds non-linear activations via a neural network structure onto the standard Born Machine framework - the Quantum Neuron Born Machine (QNBM). To achieve this, we utilize a previously introduced Quantum Neuron subroutine, which is a repeat-until-success circuit with mid-circuit measurements and classical control. After introducing the QNBM, we investigate how its performance depends on network size by training a 3-layer QNBM with 4 output neurons and various input and hidden layer sizes. We then compare our non-linear QNBM to the linear Quantum Circuit Born Machine (QCBM). We allocate similar time and memory resources to each model, such that the only major difference is the qubit overhead required by the QNBM. 
With gradient-based training, we show that while both models can easily learn a trivial uniform probability distribution, on a more challenging class of distributions, the QNBM achieves an almost 3x smaller error rate than a QCBM with a similar number of tunable parameters. We therefore provide evidence suggesting that non-linearity is a useful resource in quantum generative models, and we put forth the QNBM as a new model with good generative performance and potential for quantum advantage. ",Introducing Non-Linear Activations into Quantum Generative Models " The grand-design face-on spiral galaxy M51 is an excellent laboratory for studying magnetic fields in galaxies. We present new observations of M51 using the VLA in the S-band frequency range (2-4 GHz), to shed new light on the transition region between the disk and halo. We present images of the distributions of the total intensity, polarized intensity, degree of polarization, and rotation measure (RM). The RM distribution in S-band shows a fluctuating pattern without any apparent large-scale structure. We discuss a model of the depolarization of synchrotron radiation in a multi-layer magneto-ionic medium and compare the model predictions to the polarization data of M51 between 1-8 GHz. Since the model predictions strongly differ within the wavelength range of the S-band, the new data are essential. The parameters of the model are adjusted to fit the data of polarization fractions in a few selected regions. In three spiral arm regions, the turbulent field in the disk dominates with strengths between 18muG and 24muG, while the regular field strengths are 8-16muG. In one inter-arm region, the regular field strength of 18muG exceeds that of the turbulent field of 11muG. The regular field strengths in the halo are 3-5muG. 
The observed RMs in the disk-halo transition region are probably dominated by tangled regular fields, as predicted from models of evolving dynamos, and/or vertical fields, as predicted from numerical simulations of Parker instabilities or galactic winds. Both types of magnetic fields have frequent reversals on scales similar to or larger than the beam size (550pc) that contribute to an increase of the RM dispersion and to distortions of any large-scale pattern of the regular field. Our study devises new ways of analyzing and interpreting broadband multi-frequency polarization data that will be applicable to future data from, for example, the Square Kilometre Array. ",The magnetized disk-halo transition region of M51 " Recent studies have shown that logarithmic divergence of the entanglement entropy as a function of the size of a subsystem is a signature of criticality in quantum models. We demonstrate that the ground state entanglement entropy of $n$ sites for the ferromagnetic Heisenberg spin-1/2 chain of length $L$ in a sector with fixed magnetization $y$ per site grows as $\frac{1}{2}\log_{2}\frac{n(L-n)}{L}C(y)$, where $C(y)=2\pi e\left(\frac{1}{4}-y^{2}\right)$. ",Logarithmic divergence of the block entanglement entropy for the ferromagnetic Heisenberg model " We consider the model averaged tail area (MATA) confidence interval proposed by Turek and Fletcher, CSDA, 2012, in the simple situation in which we average over two nested linear regression models. We prove that the MATA for any reasonable weight function belongs to the class of confidence intervals defined by Kabaila and Giri, JSPI, 2009. Each confidence interval in this class is specified by two functions b and s. Kabaila and Giri show how to compute these functions so as to optimize these intervals in terms of satisfying the coverage constraint and minimizing the expected length for the simpler model, while ensuring that the expected length has desirable properties for the full model. 
These Kabaila and Giri ""optimized"" intervals provide an upper bound on the performance of the MATA for an arbitrary weight function. This fact is used to evaluate the MATA for a broad class of weights based on exponentiating a criterion related to Mallows' C_P. Our results show that, while far from ideal, this MATA performs surprisingly well, provided that we choose a member of this class that does not put too much weight on the simpler model. ",The Performance of the Turek-Fletcher Model Averaged Confidence Interval " We show that the fluctuations associated with ferro orbital order in the $d_{xz}$ and $d_{yz}$ orbitals can develop a sharp resonance mode in the superconducting state with a nodeless gap on the Fermi surface. This orbital resonance mode appears below the particle-hole continuum and is analogous to the magnetic resonance mode found in various unconventional superconductors. If the pairing symmetry is $s_{\pm}$, a dynamical coupling between the orbital ordering and the d-wave subdominant pairing channels is present by symmetry. Therefore the nature of the resonance mode depends on the relative strengths of the fluctuations in these two channels, which could vary significantly for different families of the iron based superconductors. The application of our theory to a recent observation of a new $\delta$-function-like peak in the B$_{1g}$ Raman spectrum of Ba$_{0.6}$K$_{0.4}$Fe$_2$As$_2$ is discussed. ",Orbital Resonance Mode in Superconducting Iron Pnictides " We present a Bayesian method for characterizing the mating system of populations reproducing through a mixture of self-fertilization and random outcrossing. Our method uses patterns of genetic variation across the genome as a basis for inference about pure hermaphroditism, androdioecy, and gynodioecy. 
We extend the standard coalescent model to accommodate these mating systems, accounting explicitly for multilocus identity disequilibrium, inbreeding depression, and variation in fertility among mating types. We incorporate the Ewens Sampling Formula (ESF) under the infinite-alleles model of mutation to obtain a novel expression for the likelihood of mating system parameters. Our Markov chain Monte Carlo (MCMC) algorithm assigns locus-specific mutation rates, drawn from a common mutation rate distribution that is itself estimated from the data using a Dirichlet Process Prior (DPP) model. Among the parameters jointly inferred are the population-wide rate of self-fertilization, locus-specific mutation rates, and the number of generations since the most recent outcrossing event for each sampled individual. ",Bayesian co-estimation of selfing rate and locus-specific mutation rates for a partially selfing population " The introduction of pre-trained language models has revolutionized natural language research communities. However, researchers still know relatively little regarding their theoretical and empirical properties. In this regard, Peters et al. perform several experiments which demonstrate that it is better to adapt BERT with a light-weight task-specific head, rather than building a complex one on top of the pre-trained language model, and to freeze the parameters in the said language model. However, there is another option to adopt. In this paper, we propose a new adaptation method in which we first train the task model with the BERT parameters frozen and then fine-tune the entire model together. Our experimental results show that our model adaptation method can achieve a 4.7% accuracy improvement in the semantic similarity task, a 0.99% accuracy improvement in the sequence labeling task and a 0.72% accuracy improvement in the text classification task. ",To Tune or Not To Tune? How About the Best of Both Worlds? 
" Several aspects of regularity theory for parabolic systems are investigated under the effect of random perturbations. The deterministic theory, when strict parabolicity is assumed, presents both classes of systems where all weak solutions are in fact more regular, and examples of systems with weak solutions which develop singularities in finite time. Our main result is the extension of a regularity result due to Kalita to the stochastic case. Concerning the examples with singular solutions (outside the setting of Kalita's regularity result), we do not know whether stochastic noise may prevent the emergence of singularities, as happens for simpler PDEs. We can only prove that, for a linear stochastic parabolic system with coefficients outside the previous regularity theory, the expected value of the solution is not singular. ",Random perturbations of nonlinear parabolic systems " Starting from a bound state (positive or sign-changing) solution to $$ -\Delta \omega_m =|\omega_m|^{p-1} \omega_m -\omega_m \ \ \mbox{in}\ \R^n, \ \omega_m \in H^2 (\R^n)$$ and solutions to the Helmholtz equation $$ \Delta u_0 + \lambda u_0=0 \ \ \mbox{in} \ \R^n, \ \lambda>0, $$ we build new Dancer's type entire solutions to the nonlinear scalar equation $$ -\Delta u =|u|^{p-1} u-u \ \ \mbox{in} \ \R^{m+n}. $$ ",On Helmholtz equation and Dancer's type entire solutions for nonlinear elliptic equations " We define the notions of Reidemeister torsion and analytic torsion for directed graphs by means of the path homology theory introduced by the authors in \cite{Grigoryan-Lin-Muranov-Yau2013, Grigoryan-Lin-Muranov-Yau2014, Grigoryan-Lin-Muranov-Yau2015, Grigoryan-Lin-Muranov-Yau2020}. We prove that the two notions of torsion coincide and obtain formulas for the torsions of Cartesian products and joins of digraphs. 
",Torsion of digraphs and path complexes " A new electronic structure model is developed in which the ground state energy of a molecular system is given by a Hartree-Fock-like expression with parametrized one- and two-electron integrals over an extended (minimal + polarization) set of orthogonalized atom-centered basis functions, the variational equations being solved formally within the minimal basis but the effect of polarization functions being included in the spirit of second-order perturbation theory. It is designed to yield good dipole polarizabilities and improved intermolecular potentials with dispersion terms. The molecular integrals include up to three-center one-electron and two-center two-electron terms, all in simple analytical forms. A method to extract the effective one-electron Hamiltonian of nonlocal-exchange Kohn-Sham theory from the coupled-cluster one-electron density matrix is designed and used to get its matrix representation in a molecule-intrinsic minimal basis as an input to the parametrization procedure -- making a direct link to the correlated wavefunction theory. The model has been trained for 15 elements (H, Li--F, Na--Cl, 720 parameters) on a set of 5581 molecules (including ions, transition states, and weakly-bound complexes) whose first- and second-order properties were computed by coupled-cluster theory as a reference, and good agreement is seen. The model looks promising for the study of large molecular systems; it is believed to be an important step forward from traditional semiempirical models towards higher accuracy at nearly as low a computational cost. ",A new parametrizable model of molecular electronic structure " Cloud computing is a revolutionary paradigm that has changed the way networks are used. It allows a high level of flexibility, as Virtual Machines (VMs) run workloads elastically on physical machines in data centers. 
The problem of virtual machine placement (VMP) in cloud environments is an important challenge that has been thoroughly addressed, although not yet completely resolved. This article discusses the different problems that may disrupt the placement of VMs and Virtual Network Functions (VNFs), and classifies the existing solutions into five major objective functions based on multiple performance metrics such as energy consumption, Quality of Service, Service Level Agreement, and incurred cost. The existing solutions are also classified based on whether they adopt heuristic, deterministic, meta-heuristic or approximation algorithms. VNF placement in 5G networks is also discussed to highlight the convergence toward optimal usage of mobile services by including NFV/Software-Defined-Network technologies. ",Multi-Criteria Virtual Machine Placement in Cloud Computing Environments: A literature Review " This paper is the sequel to ""The equivalence of Heegaard Floer homology and embedded contact homology via open book decompositions I"" and is devoted to proving some of the technical parts of the HF=ECH isomorphism. ",The equivalence of Heegaard Floer homology and embedded contact homology via open book decompositions II " In this paper, we introduce a novel, non-recursive, maximal matching algorithm for double auctions, which aims to maximize the amount of commodities traded. It differs from the usual equilibrium matching, which clears a market at the equilibrium price. We compare the two algorithms through experimental analyses, showing that the maximal matching algorithm is favored in scenarios where trading volume is a priority and that it may improve allocative efficiency over equilibrium matching as well. A parameterized algorithm that incorporates both maximal matching and equilibrium matching as special cases is also presented to allow flexible control over how much to trade in a double auction. 
",Maximizing Matching in Double-sided Auctions " We show that a simple modification to an optical table with pneumatic vibration isolation can be used to actively reduce the long-term drift in the tilt of the table by nearly a factor of 1000. Without active stabilization, we measure a root-mean-square (RMS) tilt variation of \SI{270}{\upmu rad} over three days. The active stabilization can be used to limit the tilt to \SI{0.35}{\upmu rad} RMS over the same time period. This technique can be used to minimize drift in tilt-sensitive experiments. ",Active Optical Table Tilt Stabilization " We perform a detailed study of NLO parton shower matching uncertainties in Higgs boson pair production through gluon fusion at the LHC based on a generic and process independent implementation of NLO subtraction and parton shower matching schemes for loop-induced processes in the Sherpa event generator. We take into account the full top-quark mass dependence in the two-loop virtual corrections and compare the results to an effective theory approximation. In the full calculation, our findings suggest large parton shower matching uncertainties that are absent in the effective theory approximation. We observe large uncertainties even in regions of phase space where fixed-order calculations are theoretically well motivated and parton shower effects are expected to be small. We compare our results to NLO matched parton shower simulations and analytic resummation results that are available in the literature. ",Parton Shower and NLO-Matching uncertainties in Higgs Boson Pair Production " Every day, burning buildings threaten the lives of occupants and the first responders trying to save them. Quick action is of the essence, but some areas might not be accessible or might be too dangerous to enter. Robotic systems have become a promising addition to firefighting, but at this stage, they are mostly manually controlled, which is error-prone and requires specially trained personnel. 
We present two systems for autonomous firefighting from air and ground that we developed for the Mohamed Bin Zayed International Robotics Challenge (MBZIRC) 2020. The systems use LiDAR for reliable localization within narrow, potentially GNSS-restricted environments while maneuvering close to obstacles. Measurements from LiDAR and thermal cameras are fused to track fires, while relative navigation ensures successful extinguishing. We analyze and discuss our successful participation in the MBZIRC 2020, present further experiments, and provide insights into the lessons we learned from the competition. ",Autonomous Fire Fighting with a UAV-UGV Team at MBZIRC 2020 " We extend the study of weak local conditional independence (WCLI) based on a measurability condition made by Commenges and G\'egout-Petit (2009) to a larger class of processes that we call D'. We also give a definition related to the same concept based on certain likelihood processes, using the Girsanov theorem. Under certain conditions, the two definitions coincide on D'. These results may be used in causal models in that we define what may be the largest class of processes in which influences of one component of a stochastic process on another can be described without ambiguity. From WCLI we can construct a concept of strong local conditional independence (SCLI). When WCLI does not hold, there is a direct influence, while when SCLI does not hold there is a direct or indirect influence. We investigate whether WCLI and SCLI can be defined via conventional independence conditions and find that this is the case for the latter but not for the former. Finally we recall that causal interpretation does not follow from mere mathematical definitions, but requires working with a good system and with the true probability. 
",A general definition of influence between stochastic processes " Background: One important ingredient for many applications of nuclear physics to astrophysics, nuclear energy, and stockpile stewardship is cross sections for reactions of neutrons with rare isotopes. Since direct measurements are often not feasible, indirect methods, e.g. (d,p) reactions, should be used. Those (d,p) reactions may be viewed as three-body reactions and described with Faddeev techniques. Purpose: Faddeev equations in momentum space have a long tradition of utilizing separable interactions in order to arrive at sets of coupled integral equations in one variable. While there exist several separable representations for the nucleon-nucleon interaction, the optical potential between a neutron (proton) and a nucleus is not readily available in separable form. The purpose of this paper is to introduce a separable representation for complex phenomenological optical potentials of Woods-Saxon type. Results: Starting from a global optical potential, a separable representation thereof is introduced based on the Ernst-Shakin-Thaler (EST) scheme. This scheme is generalized to non-hermitian potentials. Applications to n$+^{48}$Ca, n$+^{132}$Sn and n$+^{208}$Pb are investigated for energies from 0 to 50 MeV and the quality of the representation is examined. Conclusions: We find a good description of the on-shell t-matrix for all systems with rank up to 5. The required rank depends inversely on the angular momentum. The resulting separable interaction exhibits a different off-shell behavior compared to the original potential, reducing the high momentum contributions. ",Separable Representation of Phenomenological Optical Potentials of Woods-Saxon Type " This guide is intended to knit together, and extend, the existing PP and C documentation on PDL internals. It draws heavily from prior work by the authors of the code. 
Special thanks go to Christian Soeller, and Tuomas Lukka, who together with Glazebrook conceived and implemented PDL and PP; and to Chris Marshall, who has led the PDL development team through several groundbreaking releases and to new levels of usability. ","Practical Magick with C, PDL, and PDL::PP -- a guide to compiled add-ons for PDL" " The Blaschke conjecture claims that every compact Riemannian manifold whose injectivity radius equals its diameter is, up to constant rescaling, a compact rank one symmetric space. We summarize the intuition behind this problem, the proof that such manifolds have the cohomology of compact rank one symmetric spaces, and the proof of the conjecture for homology spheres and homology real projective spaces. We also summarize what is known on the diffeomorphism, homeomorphism and homotopy types of such manifolds. ",Summary of progress on the Blaschke conjecture " We report on the experience with long-range beam--beam effects in the LHC, in dedicated studies as well as the experience from operation. Where possible, we compare the observations with the expectations. ",Long Range Beam-beam Effects in the LHC " We present results on the equation of state in QCD with two light quark flavors and a heavier strange quark. Calculations with improved staggered fermions have been performed on lattices with temporal extent Nt =4 and 6 on a line of constant physics with almost physical quark mass values; the pion mass is about 220 MeV, and the strange quark mass is adjusted to its physical value. High statistics results on large lattices are obtained for bulk thermodynamic observables, i.e. pressure, energy and entropy density, at vanishing quark chemical potential for a wide range of temperatures, 140 MeV < T < 800 MeV. We present a detailed discussion of finite cut-off effects which become particularly significant for temperatures larger than about twice the transition temperature. 
At these high temperatures we also performed calculations of the trace anomaly on lattices with temporal extent Nt=8. Furthermore, we have performed an extensive analysis of zero temperature observables including the light and strange quark condensates and the static quark potential at zero temperature. These are used to set the temperature scale for thermodynamic observables and to calculate renormalized observables that are sensitive to deconfinement and chiral symmetry restoration and become order parameters in the infinite and zero quark mass limits, respectively. ",The QCD Equation of State with almost Physical Quark Masses " Multipole expansions depend on the coordinate system, so that coefficients of multipole moments can be set equal to zero by an appropriate choice of coordinates. Therefore, it is meaningless to say that a physical system has a nonvanishing quadrupole moment, say, without specifying which coordinate system is used. (Except if this moment is the lowest non-vanishing one.) This result is demonstrated for the case of two equal like electric charges. Specifically, an adapted coordinate system in which the potential is given by a monopole term only is explicitly found, the coefficients of all higher multipoles vanish identically. It is suggested that this result can be generalized to other potential problems, by making equal coordinate surfaces coincide with the potential problem's equipotential surfaces. ",Multipole structure and coordinate systems " The occurrence of unknown words in texts significantly hinders reading comprehension. To improve accessibility for specific target populations, computational modelling has been applied to identify complex words in texts and substitute them for simpler alternatives. In this paper, we present an overview of computational approaches to lexical complexity prediction focusing on the work carried out on English data. 
We survey relevant approaches to this problem, which include traditional machine learning classifiers (e.g. SVMs, logistic regression) and deep neural networks, as well as a variety of features, such as those inspired by the psycholinguistics literature, word frequency, word length, and many others. Furthermore, we introduce readers to past competitions and available datasets created on this topic. Finally, we include brief sections on applications of lexical complexity prediction, such as readability and text simplification, together with related studies on languages other than English. ",Lexical Complexity Prediction: An Overview " Combinatorial algorithms are widely used for decision-making and knowledge discovery, and it is important to ensure that their output remains stable even when subjected to small perturbations in the input. Failure to do so can lead to several problems, including costly decisions, reduced user trust, potential security concerns, and lack of replicability. Unfortunately, many fundamental combinatorial algorithms are vulnerable to small input perturbations. To address the impact of input perturbations on algorithms for weighted graph problems, Kumabe and Yoshida (FOCS'23) recently introduced the concept of Lipschitz continuity of algorithms. This work explores this approach and designs Lipschitz continuous algorithms for covering problems, such as the minimum vertex cover, set cover, and feedback vertex set problems. Our algorithm for the feedback vertex set problem is based on linear programming, and in the rounding process, we develop and use a technique called cycle sparsification, which may be of independent interest. ",Lipschitz Continuous Algorithms for Covering Problems " We present and discuss a new approach that increases by orders of magnitude the speed of performing Bayesian inference and parameter estimation within the framework of slow-roll inflation. 
The method relies on the determination of an effective likelihood for inflation which is a function of the primordial amplitude of the scalar perturbations complemented with the necessary number of the so-called Hubble flow functions to reach the desired accuracy. Starting from any cosmological data set, the effective likelihood is obtained by marginalisation over the standard cosmological parameters, here viewed as ""nuisance"" from the early Universe point of view. Because the effective likelihood is low-dimensional, basic machine-learning algorithms can be trained to accurately reproduce its multidimensional shape and then be used as a proxy to perform fast Bayesian inference on the inflationary models. The robustness and accuracy of the method are illustrated using the Planck Cosmic Microwave Background (CMB) data to perform primordial parameter estimation for the large field models of inflation. In particular, marginalising over all possible reheating histories, we find the power index of the potential to satisfy p < 2.3 at 95% confidence. ",Fast Bayesian inference for slow-roll inflation " The point cloud learning community witnesses a modeling shift from CNNs to Transformers, where pure Transformer architectures have achieved top accuracy on the major learning benchmarks. However, existing point Transformers are computationally expensive since they need to generate a large attention map, which has quadratic complexity (both in space and time) with respect to input size. To overcome this shortcoming, we introduce Patch ATtention (PAT) to adaptively learn a much smaller set of bases upon which the attention maps are computed. By a weighted summation upon these bases, PAT not only captures the global shape context but also achieves linear complexity to input size. In addition, we propose a lightweight Multi-Scale aTtention (MST) block to build attentions among features of different scales, providing the model with multi-scale features. 
Equipped with the PAT and MST, we construct our neural architecture called PatchFormer that integrates both modules into a joint framework for point cloud learning. Extensive experiments demonstrate that our network achieves comparable accuracy on general point cloud learning tasks with a 9.2x speed-up over previous point Transformers. ",PatchFormer: An Efficient Point Transformer with Patch Attention " Recommender systems play a fundamental role in web applications in filtering massive information and matching user interests. While many efforts have been devoted to developing more effective models in various scenarios, the exploration of the explainability of recommender systems lags behind. Explanations could help improve user experience and discover system defects. In this paper, after formally introducing the elements that are related to model explainability, we propose a novel explainable recommendation model through improving the transparency of the representation learning process. Specifically, to overcome the representation entangling problem in traditional models, we revise traditional graph convolution to discriminate information from different layers. Also, each representation vector is factorized into several segments, where each segment relates to one semantic aspect in data. Different from previous work, in our model, factor discovery and representation learning are simultaneously conducted, and we are able to handle extra attribute information and knowledge. In this way, the proposed model can learn interpretable and meaningful representations for users and items. Unlike traditional methods that need to make a trade-off between explainability and effectiveness, the performance of our proposed explainable model is not negatively affected after considering explainability. Finally, comprehensive experiments are conducted to validate the performance of our model as well as explanation faithfulness. 
",Explainable Recommender Systems via Resolving Learning Representations " We generalise the Dixmier-Douady classification of continuous-trace C*-algebras to Fell algebras. To do so, we show that C*-diagonals in Fell algebras are precisely abelian subalgebras with the extension property, and use this to prove that every Fell algebra is Morita equivalent to one containing a diagonal subalgebra. We then use the machinery of twisted groupoid C*-algebras and equivariant sheaf cohomology to define the Dixmier-Douady invariant of a Fell algebra A, and to prove our classification theorem. ",A Dixmier-Douady classification for Fell algebras " We present an improved post quantum version of Sakalauskas matrix power function key agreement protocol, using rectangular matrices instead of the original square ones. Sakalauskas matrix power function is an efficient and secure way to generate a shared secret key, and using rectangular matrices provides additional flexibility and security. This method reduces the computational burden by allowing smaller random integer matrices while maintaining equal security. Another advantage of using the rank deficient rectangular matrices over key agreement protocols is that it blocks linearization attacks. ",A Post Quantum Key Agreement Protocol Based on a Modified Matrix Power Function over a Rectangular Matrices Semiring " Martensites are metastable phases, possessing a characteristic morphology, usually formed during a fast quench accross a structural transition. We attempt to understand these morphological features using a coarsegrained free energy functional ${\cal F}[\epsilon ; \Phi]$ which contains, in addition to the usual strain fields $\epsilon_{ij}$ (the `` order parameter'' for the transition), the ``vacancy'' field $\phi$ which arises due to the geometric mismatch at a parent-product interface. 
The relaxation of this mismatch is slow compared to typical front propagation times and hence $\phi$ is essentially frozen in the reference frame of the growing martensite front. Minimisation of ${\cal F}$ then automatically yields typical martensite morphologies. We demonstrate this in two dimensions for the square to rhombus transformation and obtain internally twinned martensites, which grow as thin strips, for ``hard'' martensites (e.g., Fe-based alloys) or with a `single-interface', for ``soft'' martensites (e.g., In-Tl alloys). ",Droplet Free Energy Functional for the Morphology of Martensites " We propose a new density matrix renormalization group (DMRG) approach to study lattices including bosons. The key to the new approach is an exact mapping of a boson site containing 2^N states to N pseudo-sites, each with 2 states. The pseudo-sites can be viewed as the binary digits of a boson level. We apply the pseudo-site DMRG method to the polaron problem in the one- and two-dimensional Holstein models. Ground state results are presented for a wide range of electron-phonon coupling strengths and phonon frequencies on lattices large enough (up to 80 sites in one dimension and up to 20x20 sites in two dimensions) to eliminate finite size effects, with up to 128 phonon states per phonon mode. We find a smooth but quite abrupt crossover from a quasi-free electron ground state with a slightly renormalized mass at weak electron-phonon coupling to a polaronic ground state with a large effective mass at strong coupling, in agreement with previous studies. ",Density matrix renormalization group study of the polaron problem in the Holstein model " The quadratic divergences in the scalar sector of the standard model are considered. Since the divergences are present also in the unbroken theory, a natural scale for the divergence formula is proposed to be at the scale of new physics. 
The implications of the top quark mass for the Higgs mass are investigated by means of the renormalization group equations. The Coleman-Weinberg mechanism for spontaneous symmetry breaking is also considered. ",On Quadratic Divergences and the Higgs Mass " We address intermediate scales within a class of string models. The intermediate scales occur due to the SM singlets S_i acquiring non-zero VEVs due to radiative breaking; the mass-square (m_i)^2 of S_i is driven negative at mu_{RAD} due to order(1) Yukawa couplings of S_i to exotic particles (calculable in a class of string models). The actual VEV of S_i depends on the relative magnitude of the non-renormalizable terms of the type (S_i)^{K+3}/M^K in the superpotential. We mainly consider the case in which the S_i are charged under an additional non-anomalous U(1) gauge symmetry and the VEVs occur along F- and D-flat directions. We explore various scenarios in detail, depending on the type of Yukawa couplings to the exotic particles and on the initial boundary values of the soft SUSY breaking parameters. We then address the implications of these scenarios for the mu parameter and the fermionic masses of the standard model. ","Intermediate Scales, Mu Parameter, and Fermion Masses from String Models" " In this study, a novel Distributed Representation of News (DRNews) model is developed and applied in deep learning-based stock market predictions. With the merit of integrating contextual information and cross-documental knowledge, the DRNews model creates news vectors that describe both the semantic information and potential linkages among news events through an attributed news network. Two stock market prediction tasks, namely the short-term stock movement prediction and stock crises early warning, are implemented in the framework of the attention-based Long Short-Term Memory (LSTM) network. 
It is suggested that DRNews substantially enhances the results of both tasks compared with five baseline news embedding models. Further, the attention mechanism suggests that short-term stock trends and stock market crises are both influenced by daily news, with the former responding more strongly to information related to the stock market {\em per se}, whilst the latter is more sensitive to news on the banking sector and economic policies. ",A Novel Distributed Representation of News (DRNews) for Stock Market Predictions We prove the Gross-Zagier-Zhang formula over global function fields of arbitrary characteristic. It is an explicit formula which relates the Neron-Tate heights of CM points on abelian varieties and central derivatives of associated quadratic base change $L$-functions. Our proof is based on an arithmetic variant of a relative trace identity of Jacquet. This approach was proposed by W. Zhang. ,The Gross-Zagier-Zhang formula over function fields " Coulomb coupling between proximal layers in graphene heterostructures results in efficient energy transfer between the layers. We predict that, in the presence of correlated density inhomogeneities in the layers, vertical energy transfer has a strong impact on lateral charge transport. In particular, for Coulomb drag it dominates over the conventional momentum drag near zero doping. The dependence on doping and temperature, which is different for the two drag mechanisms, can be used to separate these mechanisms in experiment. We predict distinct features such as a peak at zero doping and a multiple sign reversal, which provide diagnostics for this new drag mechanism. ",Energy-driven Drag at Charge Neutrality in Graphene " The problem of designing bit-to-pattern mappings and power allocation schemes for orthogonal frequency-division multiplexing (OFDM) systems that employ subcarrier index modulation (IM) is considered. 
We assume the binary source conveys a stream of independent, uniformly distributed bits to the pattern mapper, which introduces a constraint on the pattern transmission probability distribution that can be quantified using a binary tree formalism. Under this constraint, we undertake the task of maximizing the achievable rate subject to the availability of channel knowledge at the transmitter. The optimization variables are the pattern probability distribution (i.e., the bit-to-pattern mapping) and the transmit powers allocated to active subcarriers. To solve the problem, we first consider the relaxed problem where pattern probabilities are allowed to take any values in the interval [0,1] subject to a sum probability constraint. We develop (approximately) optimal solutions to the relaxed problem by using new bounds and asymptotic results, and then use a novel heuristic algorithm to project the relaxed solution onto a point in the feasible set of the constrained problem. Numerical analysis shows that this approach is capable of achieving the maximum mutual information for the relaxed problem in low- and high-SNR regimes and offers noticeable benefits in terms of achievable rate relative to a conventional OFDM-IM benchmark. ",Binary-Tree Encoding for Uniform Binary Sources in Index Modulation Systems " We prove two universal approximation theorems for a range of dropout neural networks. These are feed-forward neural networks in which each edge is given a random $\{0,1\}$-valued filter, that have two modes of operation: in the first each edge output is multiplied by its random filter, resulting in a random output, while in the second each edge output is multiplied by the expectation of its filter, leading to a deterministic output. It is common to use the random mode during training and the deterministic mode during testing and prediction. 
Both theorems are of the following form: Given a function to approximate and a threshold $\varepsilon>0$, there exists a dropout network that is $\varepsilon$-close in probability and in $L^q$. The first theorem applies to dropout networks in the random mode. It assumes little on the activation function, applies to a wide class of networks, and can even be applied to approximation schemes other than neural networks. The core is an algebraic property that shows that deterministic networks can be exactly matched in expectation by random networks. The second theorem makes stronger assumptions and gives a stronger result. Given a function to approximate, it provides existence of a network that approximates in both modes simultaneously. Proof components are a recursive replacement of edges by independent copies, and a special first-layer replacement that couples the resulting larger network to the input. The functions to be approximated are assumed to be elements of general normed spaces, and the approximations are measured in the corresponding norms. The networks are constructed explicitly. Because of the different methods of proof, the two results give independent insight into the approximation properties of random dropout networks. With this, we establish that dropout neural networks broadly satisfy a universal-approximation property. ",Universal Approximation in Dropout Neural Networks " Photometric variability attributed to cloud phenomena is common in L/T transition brown dwarfs. Recent studies show that such variability may also trace aurorae, suggesting that localized magnetic heating may contribute to observed brown dwarf photometric variability. We assess this potential correlation with a survey of 17 photometrically variable brown dwarfs using the Karl G. Jansky Very Large Array (VLA) at 4 -- 8 GHz. 
We detect quiescent and highly circularly polarized flaring emission from one source, 2MASS J17502484-0016151, which we attribute to auroral electron cyclotron maser emission. The detected auroral emission extends throughout the frequency band at $\sim$5 -- 25$\sigma$, and we do not detect evidence of a cutoff. Our detection confirms that 2MASS J17502484-0016151 hosts a magnetic field strength of $\geq$2.9 kG, similar to those of other radio-bright ultracool dwarfs. We show that H$\alpha$ emission continues to be an accurate tracer of auroral activity in brown dwarfs. Supplementing our study with data from the literature, we calculate the occurrence rates of quiescent emission in L dwarfs with low- and high-amplitude variability and conclude that high amplitude O/IR variability does not trace radio magnetic activity in L dwarfs. ",On the Correlation between L Dwarf Optical and Infrared Variability and Radio Aurorae " In this paper we investigate the connection between quantum information theory and machine learning. In particular, we show how quantum state discrimination can represent a useful tool to address the standard classification problem in machine learning. Previous studies have shown that the optimal quantum measurement theory developed in the context of quantum information theory and quantum communication can inspire a new binary classification algorithm that can achieve higher inference accuracy for various datasets. Here we propose a model for arbitrary multiclass classification inspired by quantum state discrimination, which is enabled by encoding the data in the space of linear operators on a Hilbert space. While our algorithm is quantum-inspired, it can be implemented on classical hardware, thereby permitting immediate applications. ",Quantum State Discrimination for Supervised Classification " Accurate and precise measurements of masses of galaxy clusters are key to derive robust constraints on cosmological parameters. 
Mounting evidence from observations, however, confirms that X-ray masses, obtained under the assumption of hydrostatic equilibrium, might be underestimated, as previously predicted by cosmological simulations. We analyse more than 300 simulated massive clusters, from `The Three Hundred Project', and investigate the connection between mass bias and several diagnostics extracted from synthetic X-ray images of these simulated clusters. We find that the azimuthal scatter measured in 12 sectors of the X-ray flux maps is a statistically significant indication of the presence of an intrinsic (i.e. 3D) clumpy gas distribution. We verify that a robust correction to the hydrostatic mass bias can be inferred when estimates of the gas inhomogeneity from X-ray maps (such as the azimuthal scatter or the gas ellipticity) are combined with the asymptotic external slope of the gas density or pressure profiles, which can be respectively derived from X-ray and millimetric (Sunyaev-Zeldovich effect) observations. We also find that mass measurements based on either gas density and temperature or gas density and pressure result in similar distributions of the mass bias. In both cases, we provide corrections that help reduce both the dispersion and skewness of the mass bias distribution. These are effective even when irregular clusters are included, leading to interesting implications for the modelling and correction of hydrostatic mass bias in cosmological analyses of current and future X-ray and SZ cluster surveys. ",The Three Hundred Project: correcting for the hydrostatic-equilibrium mass bias in X-ray and SZ surveys A new approach to study the scaling behavior of the scalar theory near the Gaussian fixed point in $d$-dimensions is presented. For a class of initial data an explicit use of the Green's function of the evolution equation is made. It is thus discussed under which conditions non-polynomial relevant interactions can be generated by the renormalization group flow. 
,Non-perturbative scaling in the scalar theory " New ways of documenting and describing language via electronic media coupled with new ways of distributing the results via the World-Wide Web offer a degree of access to language resources that is unparalleled in history. At the same time, the proliferation of approaches to using these new technologies is causing serious problems relating to resource discovery and resource creation. This article describes the infrastructure that the Open Language Archives Community (OLAC) has built in order to address these problems. Its technical and usage infrastructures address problems of resource discovery by constructing a single virtual library of distributed resources. Its governance infrastructure addresses problems of resource creation by providing a mechanism through which the language-resource community can express its consensus on recommended best practices. ",The Open Language Archives Community: An infrastructure for distributed archiving of language resources " The MAGIC 17m diameter Cherenkov telescope will be upgraded with a second telescope with advanced photon detectors and ultra fast readout within the year 2007. The sensitivity of MAGIC-II, the two telescope system, will be improved by a factor of 2. In addition the energy threshold will be reduced and the energy and angular resolution will be improved. The design, status and expected performance of MAGIC-II is presented here. ",Status of the second phase of the MAGIC telescope " We studied the ground- and excited-state properties of Zr isotopes from the proton to the neutron drip-lines using the relativistic and non-relativistic mean field formalisms with BCS and Bogolyubov pairing. The well-known NL3 and SLy4 parameter sets are used in the calculations. We find spherical ground and low-lying superdeformed excited states in most of the isotopes. 
Several pairs of $\Omega^{\pi}=1/2^{\pm}$ parity-doublet configurations are found while analyzing the single-particle energy levels of the superdeformed configurations. ",Shape co-existence and parity doublet in Zr isotopes " We give two new proofs of Perelman's theorem that shrinking breathers of Ricci flow on closed manifolds are gradient Ricci solitons, using the fact that the singularity models of type I solutions are shrinking gradient Ricci solitons and the fact that non-collapsed type I ancient solutions have rescaled limits being shrinking gradient Ricci solitons. ",New proofs of Perelman's theorem on shrinking Breathers in Ricci flow " The interaction of two screw dislocations in smectic-A liquid crystals is treated using an anharmonic correction to the elastic energy density. In the present contribution the elastic energy and the force between two screw dislocations are evaluated and discussed. For screw dislocations with both parallel and opposite Burgers vectors there is an attraction between dislocations at small separations, while at greater separations there is a repulsion. This can be explained by the dominant terms in the interaction energy, which do not depend on the signs of the dislocations. In this way, the interaction energy of screw dislocations in a smectic-A liquid crystal within an anharmonic approximation differs from the case of screw dislocations in solids. ",Screw dislocation interaction in smectic-A liquid crystals in an anharmonic approximation " This is the first in a series of papers in which we develop a twistor-based method of constructing hyperkaehler metrics from holomorphic functions and elliptic curves. As an application, we revisit the Atiyah-Hitchin manifold and derive in an explicit holomorphic coordinate basis closed-form formulas for, among other things, the metric, the holomorphic symplectic form and all three Kaehler potentials. 
",Elliptic constructions of hyperkaehler metrics I: The Atiyah-Hitchin manifold " Armchair silicene nanoribbons with width of 9-39 silicon atoms are investigated by using self-consistent field crystal orbital method based on density functional theory. The carrier mobilities obtained from deformation potential theory oscillate with respect to the width and the values are a fraction of what the graphene nanoribbons have. The buckled structure, hydrogen saturation, edge reconstruction as well as edge roughness decrease the carrier mobilities which are explained with the aid of crystal orbitals. ",A theoretical investigation on the carrier mobilities of armchair silicene nanoribbons " The aim of this article is to prove that, under certain conditions, an affine flat normal scheme that is of finite type over a local Dedekind scheme in mixed characteristic admits infinitely many normal effective Cartier divisors. For the proof of this result, we prove the Bertini theorem for normal schemes of some type. We apply the main result to prove a result on the restriction map of divisor class groups of Grothendieck-Lefschetz type in mixed characteristic. ",Normal hyperplane sections of normal schemes in mixed characteristic " We report a hydrodynamic derivation of the Schr\""odinger equation. The derivation only assumes that spin represents the angular momentum associated with the rotation of a charged fluid within a massive particle. The rotation velocity can be evaluated through the current density of the fourth Maxwell equation, leading to a quantum correction of the classical fluid energy. The Schr\""odinger equation then follows in Madelung form by application of the Poisson operator of the Euler equations for an ideal fluid to the total fluid energy including the quantum effect of internal spin. The hydrodynamic representation is then used to obtain the stress-energy-momentum tensor for a quantum particle. 
We find that the trace of the quantum modification to the stress-energy-momentum tensor expresses the energy density of an oscillator with frequency given by the vorticity of the internal rotation velocity. Finally, the stress-energy-momentum tensor is used to determine the relationship between the Ricci scalar curvature arising from the Einstein field equations and the fluid density associated with the hydrodynamic representation of the quantum particle in a static spherically symmetric configuration. ","Hydrodynamic derivation of the Schr\""odinger equation and spacetime curvature of a quantum particle" " Large-scale high sensitivity laser gyroscopes have important applications for ground-based and space-based gravitational wave detection. We report on the development of a 3 m$\times$3 m heterolithic passive resonant gyroscope (HUST-1) which is installed on the ground of a cave laboratory. We operate the HUST-1 on different longitudinal cavity modes and the rotation sensitivity reaches $1.6\times10^{-9}$ rad/s/$\rm \sqrt{Hz}$ beyond 1 Hz. The drift of the cavity length is one of the major sensitivity limits for our gyroscope in the low frequency regime. By locking cavity length to an ultra-stable reference laser, we achieve a fractional cavity length stability of $5.6\times10^{-9}$ m$/\rm \sqrt{Hz}$ at 0.1 mHz, a four orders of magnitude improvement over the unconstrained cavity in the low frequency regime. We stabilize the cavity length of a large-scale heterolithic passive resonant gyroscope through active feedback and realize long-term operation. The rotation sensitivity reaches $1.7\times10^{-7}$ rad/s/$\sqrt{\rm{Hz}}$ at 0.1 mHz, a three orders of magnitude improvement, which is no longer limited by the cavity length drift in this frequency range. 
",3 m$\times$3 m heterolithic passive resonant gyroscope with cavity length stabilization " We investigate the dynamics of kinetically constrained models of glass formers by analysing the statistics of trajectories of the dynamics, or histories, using large deviation function methods. We show that, in general, these models exhibit a first-order dynamical transition between active and inactive dynamical phases. We argue that the dynamical heterogeneities displayed by these systems are a manifestation of dynamical first-order phase coexistence. In particular, we calculate dynamical large deviation functions, both analytically and numerically, for the Fredrickson-Andersen model, the East model, and constrained lattice gas models. We also show how large deviation functions can be obtained from a Landau-like theory for dynamical fluctuations. We discuss possibilities for similar dynamical phase-coexistence behaviour in other systems with heterogeneous dynamics. ",First-order dynamical phase transition in models of glasses: an approach based on ensembles of histories " This paper presents a quasi-deterministic ray tracing (QD-RT) method for analyzing the propagation of electromagnetic waves in street canyons. The method uses a statistical bistatic distribution to model the Radar Cross Section (RCS) of various irregular objects such as cars and pedestrians, instead of relying on exact values as in a deterministic propagation model. The performance of the QD-RT method is evaluated by comparing its generated path loss distributions to those of the deterministic ray tracing (D-RT) model using the Two-sample Cramer-von Mises test. The results indicate that the QD-RT method generates the same path loss distributions as the D-RT model while offering lower complexity. This study suggests that the QD-RT method has the potential to be used for analyzing complicated scenarios such as street canyon scenarios in mmWave wireless communication systems. 
",RCS-based Quasi-Deterministic Ray Tracing for Statistical Channel Modeling " FASER is one of the promising experiments which search for long-lived particles beyond the Standard Model. In this paper, we consider charged lepton flavor violation (CLFV) via a light and weakly interacting boson and discuss the detectability by FASER. We focus on four types of CLFV interactions, i.e., the scalar-, pseudoscalar-, vector-, and dipole-type interaction, and calculate the sensitivity of FASER to each CLFV interaction. We show that, with the setup of FASER2, a wide region of the parameter space can be explored. Particularly, it is found that FASER2 has a sensitivity to very small coupling regions in which the rare muon decays, such as $\mu \rightarrow e\gamma$, cannot place bounds, and that there is a possibility to detect CLFV decays of the new light bosons. ",Search for Lepton Flavor Violating Decay at FASER " We have examined the luminosity-size relationship as a function of environment for 12150 SDSS galaxies with precise visual classifications from the catalog of Nair & Abraham (2010a). Our analysis is subdivided into investigations of early-type galaxies and late-type galaxies. Early-type galaxies reveal a surprisingly tight luminosity-size relation. The dispersion in luminosity about the fiducial relation is only ~0.14 dex (0.35 mag), even though the sample contains galaxies which differ by a factor of almost 100 in luminosity. The dispersion about the luminosity-size relation is comparable to the dispersion about the fundamental plane, even though the luminosity-size relation is fundamentally simpler and computed using purely photometric parameters. The key contributors to the dispersion about the luminosity-size relation are found to be color and central concentration. Expanding our analysis to the full range of morphological types, we show that the slope, zero point, and scatter about the luminosity-size relation is independent of environmental density. 
Our study thus indicates that whatever process is building galaxies is doing so in a way that preserves fundamental scaling laws even as the typical luminosity of galaxies changes with environment. However, the distribution of galaxies along the luminosity-size relation is found to be strongly dependent on galaxy environment. This variation is in the sense that, at a given morphology, larger and more luminous galaxies are rarer in sparser environments. Our analysis of late-type galaxy morphologies reveals that scatter increases towards later Hubble types. Taken together, these results place strong constraints on conventional hierarchical models in which galaxies are built up in an essentially stochastic way. ",The Environmental Dependence of the Luminosity-Size Relation for Galaxies " Digitization of analogue signals has opened up new avenues for information hiding, and recent advancements in the telecommunication field have taken this even further. From copper wire to fiber optics, technology has evolved, and so have the ways of covert channel communication. By ""Covert"" we mean ""anything not meant for the purpose for which it is being used"". Investigation and detection of such covert channel communication has always been a serious concern of information security professionals; it has now also become a motivation for adversaries to communicate secretly in the ""open"" without being caught or noticed. This paper presents a survey of steganographic techniques which have evolved over the years to hide the existence of secret information inside some cover (Text) object. The introduction of the subject is followed by a discussion narrowed down to the area where digital ASCII Text documents are used as cover. Finally, the conclusion sums up the proceedings. 
",State Of The Art In Digital Steganography Focusing ASCII Text Documents " We propose a minimal extension of the standard model of particle physics to accommodate cosmic inflation, dark matter and light neutrino masses. While the inflationary phase is obtained from a modified chaotic inflation scenario, consistent with the latest cosmology data, the dark matter particle is a fermion singlet which remains out of equilibrium in the early universe. The scalar field which revives the chaotic inflation scenario by suitable modification also assists in generating tiny couplings of dark matter with its mother particle, naturally realizing the non-thermal or freeze-in type dark matter scenario. Interestingly, the same assisting scalar field also helps in realizing the tiny Yukawa couplings required to generate a sub-eV Dirac neutrino mass from neutrino couplings to the standard model like Higgs field. The minimality, as well as the unified solution to all three problems, keeps the model predictive at experiments across all frontiers. ","Common origin of modified chaotic inflation, non thermal dark matter and Dirac neutrino mass" " We study a certain polytope depending on a graph $G$ and a parameter $\beta\in(0,1)$ which arises from embedding the Hamiltonian cycle problem in a discounted Markov decision process. Eshragh \emph{et al.} conjectured a lower bound on the proportion of feasible bases corresponding to Hamiltonian cycles in the set of all feasible bases. We make progress towards a proof of the conjecture by proving results about the structure of feasible bases. In particular, we prove three main results: (1) the set of feasible bases is independent of the parameter $\beta$ when the parameter is close to 1, (2) the polytope can be interpreted as a generalized network flow polytope, and (3) we deduce a combinatorial interpretation of the feasible bases. 
We also provide a full characterization for a special class of feasible bases, and we apply this to provide some computational support for the conjecture. ",Feasible bases for a polytope related to the Hamilton cycle problem " We use martingale and stochastic analysis techniques to study a continuous-time optimal stopping problem, in which the decision maker uses a dynamic convex risk measure to evaluate future rewards. We also find a saddle point for an equivalent zero-sum game of control and stopping, between an agent (the ""stopper"") who chooses the termination time of the game, and an agent (the ""controller"", or ""nature"") who selects the probability measure. ",Optimal Stopping for Dynamic Convex Risk Measures We study the classical dynamics of resonantly modulated large-spin systems in a strong magnetic field. We show that these systems have a special symmetry, which leads to characteristic nonlinear effects: abrupt switching between magnetization branches as the modulating field is varied, without hysteresis, and a specific pattern of switching in the presence of multistability and hysteresis. Along with steady forced vibrations, the transverse spin components can display transient vibrations at a combination of the Larmor frequency and a slower frequency determined by the anisotropy energy. The analysis is based on a microscopic theory that takes into account relaxation mechanisms important for single-molecule magnets and other large-spin systems. We find how the Landau-Lifshitz model should be modified in order to describe the classical spin dynamics. The occurrence of transient oscillations depends on the interrelation between the relaxation parameters. ,"Hysteresis, transient oscillations, and nonhysteretic switching in resonantly modulated large-spin systems" " We study the interaction between a topological insulator nanoparticle and a quantum dot subject to an applied electric field. 
The electromagnetic response of the topological insulator is derived from axion electrodynamics in the quasistatic approximation. Localized modes are quantized in terms of dipolar bosonic modes, which couple dipolarly to the quantum dot. Hence, we treat the hybrid as a two-level system interacting with a single bosonic mode, where the coupling strength encodes the information concerning the nontrivial topology of the nanoparticle. The interaction of the hybrid with the environment is implemented through the coupling with a continuum reservoir of radiative output modes and a reservoir of phonon modes. In particular, we use the method of Zubarev's Green functions to derive an expression for the optical absorption spectrum of the system. We apply our results to a realistic system consisting of a topological insulator nanoparticle made of TlBiSe$_{2}$ interacting with a cadmium selenide quantum dot, both immersed in a polymer layer such as poly(methyl methacrylate). The optical absorption spectrum exhibits Fano resonances with a line shape that strongly depends on the polarization of the electric field as well as on the topological magnetoelectric polarizability $\theta$. Our results and methods can also be applied to nontopological magnetoelectric materials such as Cr$_{2}$O$_{3}$. ",Optical response of a topological-insulator--quantum-dot hybrid interacting with a probe electric field " We study the Thermodynamic Bethe Ansatz equations for a one-parameter quantum field theory recently introduced by V.A.Fateev. The presence of chemical potentials produces a kink condensate that modifies the excitation spectrum. For some combinations of the chemical potentials an additional gapless mode appears. Various energy scales emerge in the problem. An effective field theory, describing the low energy excitations, is also introduced. 
",Thermodynamics of Fateev's models in the Presence of External Fields " We consider a random walk model in a one-dimensional environment formed by several zones of finite width with fixed transition probabilities. It is also assumed that the transitions to the left and right neighboring points have unequal probabilities. In the continuous limit, we derive analytically the probability distribution function, which is mainly determined by the walker's diffusion and drift and accounts perturbatively for interface effects between zones. It is used for computing the probability of finding a walker at a given space-time point and the time dependence of the mean squared displacement of a walker, which reveals transient anomalous diffusion. To justify our approach, we compare the probability function with the results of numerical simulations for a three-zone environment. ",Asymmetric Random Walk in a One-Dimensional Multi-Zone Environment " High-index surfaces of silicon with adsorbed gold can reconstruct to form highly ordered linear step arrays. These steps take the form of a narrow strip of graphitic silicon. In some cases - specifically, for Si(553)-Au and Si(557)-Au - a large fraction of the silicon atoms at the exposed edge of this strip are known to be spin-polarized and charge-ordered along the edge. The periodicity of this charge ordering is always commensurate with the structural periodicity along the step edge and hence leads to highly ordered arrays of local magnetic moments that can be regarded as ""spin chains"". Here, we demonstrate theoretically as well as experimentally that the closely related Si(775)-Au surface has - despite its very similar overall structure - zero spin polarization at its step edge. Using a combination of density-functional theory and scanning tunneling microscopy, we propose an electron-counting model that accounts for these differences. 
The model also predicts that unintentional defects and intentional dopants can create local spin moments at Si(hhk)-Au step edges. We analyze in detail one of these predictions and verify it experimentally. This finding opens the door to using techniques of surface chemistry and atom manipulation to create and control silicon spin chains. ",Spin Chains and Electron Transfer at Stepped Silicon Surfaces " We demonstrate the control of the resonance characteristics of a drum-type graphene mechanical resonator in the nonlinear oscillation regime by the photothermal effect, which is induced by a standing wave of light between the graphene and the substrate. Unlike the conventional Duffing-type nonlinearity, the resonance characteristics in the nonlinear oscillation regime are modulated by the standing wave of light despite its small variation amplitude. From numerical calculations combining the heat equation with an equation of motion including a Duffing-type nonlinearity, this can be explained by the photothermal effect causing a delayed modulation of the stress or tension of the graphene. ",Resonance control of graphene drum resonator in nonlinear regime by standing wave of light " We study the one-dimensional $S=1/2$ XXZ model on a finite lattice at zero temperature, varying the exchange anisotropy $\Delta$ and the number of sites $N$ of the lattice. Special emphasis is given to the model with $\Delta=1/2$ and $N$ odd, whose ground state, the so-called Razumov-Stroganov state, has a peculiar structure and no finite-size corrections to the energy per site. We find that such a model corresponds to a special point on the $\Delta$-axis which separates the region where adding spin pairs increases the energy per site from that where the longer the chain, the lower the energy. Entanglement properties hold no surprises for $\Delta=1/2$ and $N$ odd. 
Finite-size corrections to the energy per site also vanish nontrivially in the ferromagnetic $\Delta\to -1^+$ isotropic limit, which is consequently addressed; in this case, peculiar features of some entanglement properties, due to the finite length of the chain and related to the change in the symmetry of the Hamiltonian, are evidenced and discussed. In both the above models the absence of finite-size corrections to the energy per site is related to a peculiar structure of the ground state, which has permitted us to provide new exact analytic expressions for some correlation functions. ",When finite-size corrections vanish: The S=1/2 XXZ model and the Razumov-Stroganov state " We study the effects of planarization (the construction of a planar diagram $D$ from a non-planar graph $G$ by replacing each crossing by a new vertex) on graph width parameters. We show that for treewidth, pathwidth, branchwidth, clique-width, and tree-depth there exists a family of $n$-vertex graphs with bounded parameter value, all of whose planarizations have parameter value $\Omega(n)$. However, for bandwidth, cutwidth, and carving width, every graph with bounded parameter value has a planarization of linear size whose parameter value remains bounded. The same is true for the treewidth, pathwidth, and branchwidth of graphs of bounded degree. ",The Effect of Planarization on Width " We present Zeeman-Doppler images of the active K2 star II Peg for the years 2004 and 2007. The surface magnetic field was reconstructed with our new ZDI code ""iMap"", which provides an inversion driven by full polarized radiative transfer to simultaneously reconstruct the surface temperature and magnetic vector field distribution. II Peg shows a remarkable large-scale magnetic field structure for both years. The magnetic field is predominantly located at high latitudes and is arranged in active longitudes. 
A dramatic evolution in the magnetic field structure is visible between the two years: a dominant and largely unipolar field in 2004 developed into two distinct and large-scale bipolar structures in 2007. ",Zeeman-Doppler Imaging of II Peg - Magnetic field restructuring from 2004 to 2007 " We consider a network of banks that optimally choose a strategy of asset liquidations and borrowing in order to cover short term obligations. The borrowing is done in the form of collateralized repurchase agreements, the haircut level of which depends on the total liquidations of all the banks. Similarly, the fire-sale price of the asset obtained by each of the banks depends on the amount of assets liquidated by the bank itself and by other banks. By the nature of this setup, banks' behavior is considered as a Nash equilibrium. This paper provides two forms for market clearing to occur: through a common closing price and through an application of the limit order book. The main results of this work are providing the existence of maximal and minimal clearing solutions (i.e., liquidations, borrowing, fire sale prices, and haircut levels) as well as sufficient conditions for uniqueness of the clearing solutions. ",A Repo Model of Fire Sales with VWAP and LOB Pricing Mechanisms " An analysis of the Avdeev-Chizhov theory of antisymmetric tensor matter fields in a curved space-time is performed. We show that, in a curved space-time, the Avdeev-Chizhov theory can be seen as a kind of $\lambda\phi^{4}$ theory for a ""complex self-dual"" field. This relationship between the Avdeev-Chizhov theory and the $\lambda\phi^{4}$ theory simplifies the study of tensor matter fields in a curved space-time. The energy-momentum tensor for the matter fields is computed. ",Antisymmetric tensor matter fields in a curved space-time " In this work, we consider a tight-binding lattice with two non-Hermitian impurities. The system is described by a non-Hermitian generalization of the Aubry-Andre model. 
We show for the first time that there exist topologically nontrivial edge states with real spectra in the PT symmetric region. ",Topological phase in a non-Hermitian PT symmetric system " We consider K3 surfaces which are double covers of rational elliptic surfaces. The former are endowed with a natural elliptic fibration, which is induced by the latter. There are also other elliptic fibrations on such K3 surfaces, which are necessarily induced by special linear systems on the rational elliptic surfaces. We describe these linear systems. In particular, we observe that every conic bundle on the rational surface induces a genus 1 fibration on the K3 surface, and we classify the singular fibers of the genus 1 fibration on the K3 surface in terms of singular fibers and special curves of the conic bundle on the rational surface. ",Linear systems on rational elliptic surfaces and elliptic fibrations on K3 surfaces We present a numerical study of a trapped binary Bose-condensed gas by solving the corresponding Hartree-Fock equations. The density profile of the binary Bose gas is obtained with a harmonic trapping potential as a function of temperature in two and three dimensions. We find a symmetry breaking in the two-dimensional case where the two condensates separate. We also present a phase diagram in the three-dimensional case of the different regions where the binary condensate becomes a single condensate and eventually an ordinary gas as a function of temperature and the interaction strength between the atoms. ,Hartree-Fock treatment of the two-component Bose-Einstein condensate " We study one-dimensional very singular parabolic equations with periodic boundary conditions and initial data in $BV$, which is the energy space. We show existence of solutions in this energy space and then we prove that they are viscosity solutions in the sense of Giga-Giga. 
",Energy solutions to one-dimensional singular parabolic problems with $BV$ data are viscosity solutions " The efficiency of quantum state tomography is discussed from the point of view of quantum parameter estimation theory, in which the trace of the weighted covariance is to be minimized. It is shown that tomography is optimal only when a special weight is adopted. ",Efficiency of quantum state tomography for qubits " In this paper we present an algorithm for enumerating without repetitions all the non-crossing generically minimally rigid bar-and-joint frameworks under edge constraints (also called constrained non-crossing Laman frameworks) on a given generic set of $n$ points. Our algorithm is based on the reverse search paradigm of Avis and Fukuda. It generates each output graph in $O(n^4)$ time and $O(n)$ space, or, with a slightly different implementation, in $O(n^3)$ time and $O(n^2)$ space. In particular, we obtain that the set of all the constrained non-crossing Laman frameworks on a given point set is connected by flips which restore the Laman property. ",Enumerating Constrained Non-crossing Minimally Rigid Frameworks " A group $G$ has a Frobenius graphical representation (GFR) if there is a simple graph $\varGamma$ whose full automorphism group is isomorphic to $G$ and acts on the vertices as a Frobenius group. In particular, any group $G$ with a GFR is a Frobenius group and $\varGamma$ is a Cayley graph. The existence of an infinite family of groups with GFR whose Frobenius kernel is a non-abelian $2$-group has been an open question. In this paper, we give a positive answer by showing that the Higman group $A(f,q_0)$ has a GFR for an infinite sequence of $f$ and $q_0$. ",Graphical Frobenius representations of non-abelian groups " LOFT (Large Observatory For x-ray Timing) is one of the ESA M3 missions selected within the Cosmic Vision program in 2011 to carry out an assessment phase study and compete for a launch opportunity in 2022-2024. 
The phase-A studies of all M3 missions were completed at the end of 2013. LOFT is designed to carry on board two instruments with sensitivity in the 2-50 keV range: a 10 m$^2$-class Large Area Detector (LAD) with a <1{\deg} collimated FoV and a wide field monitor (WFM) making use of coded masks and providing an instantaneous coverage of more than 1/3 of the sky. The prime goal of the WFM will be to detect transient sources to be observed by the LAD. However, thanks to its unique combination of a wide field of view (FoV) and energy resolution (better than 500 eV), the WFM will also be an excellent monitoring instrument to study the long-term variability of many classes of X-ray sources. The WFM consists of 10 independent and identical coded mask cameras arranged in 5 pairs to provide the desired sky coverage. We provide here an overview of the instrument design, configuration, and capabilities of the LOFT WFM. The compact and modular design of the WFM could easily make the instrument concept adaptable for other missions. ",The design of the wide field monitor for LOFT " Current research environments are witnessing an enormous number of presentations occurring in different sessions at academic conferences. This situation makes it difficult for researchers (especially juniors) to attend the right presentation session(s) for effective collaboration. In this paper, we propose an innovative venue recommendation algorithm to enhance smart conference participation. Our proposed algorithm, Social Aware Recommendation of Venues and Environments (SARVE), computes the Pearson correlation and social characteristic information of conference participants. SARVE further incorporates the current context of both the smart conference community and participants in order to model a recommendation process using distributed community detection. 
Through the integration of the above computations and techniques, we are able to recommend presentation sessions of active participant presenters that may be of high interest to a particular participant. We evaluate SARVE using a real-world dataset. Our experimental results demonstrate that SARVE outperforms other state-of-the-art methods. ",Socially-Aware Venue Recommendation for Conference Participants " Symplectic instanton homology is an invariant for closed oriented three-manifolds, defined by Manolescu and Woodward, which conjecturally corresponds to a symplectic version of a variant of Floer's instanton homology. In this thesis we study the behaviour of this invariant under connected sum, Dehn surgery, and four-dimensional cobordisms. We prove a K\""unneth-type formula for the connected sum: let $Y$ and $Y'$ be two closed oriented three-manifolds; we show that the symplectic instanton homology of their connected sum is isomorphic to the direct sum of the tensor product of their symplectic instanton homologies and a shift of their torsion product. We define twisted versions of this homology, and then prove an analog of the Floer exact sequence, relating the invariants of a Dehn surgery triad. We use this exact sequence to compute the rank of the groups associated to branched double covers of quasi-alternating links, some plumbings of disc bundles over spheres, and some integral Dehn surgeries along certain knots. We then define invariants for four-dimensional cobordisms as maps between the symplectic instanton homology of the two boundaries. We show that among the three morphisms in the surgery exact sequence, two are such maps, associated to the handle-attachment cobordisms. We also give a vanishing criterion for such maps associated to blow-ups. 
","Symplectic Instanton Homology: Connected Sum, Integral Surgery, and Maps Induced by Cobordisms" " We investigate the quark masses and mixings by including vector-like down-type quark singlets in universality of strength for Yukawa couplings (USY). In contrast with the standard model with USY, sufficient $CP$ violation is obtained for the Cabibbo-Kobayashi-Maskawa matrix through the mixing between the ordinary quarks and the quark singlets. The top-bottom mass hierarchy $m_t \gg m_b$ also appears naturally in the USY scheme with the down-type quark singlets. ",Universality of strength for Yukawa couplings with extra down-type quark singlets " We introduce a lattice model with local U(1) gauge symmetry which incorporates explicit frustration in d > 2. The form of the action is inspired by the loop expansion of the fermionic determinant in standard lattice QED. We study through numerical simulations the phase diagram of the model, revealing the existence of a frustrated (antiferromagnetic) phase for d=3 and d=4, once an appropriate order parameter is identified. ",Investigation of a Toy Model for Frustration in Abelian Lattice Gauge Theory " We present and analyze the optical and X-ray catalogs of moderate-redshift cluster candidates from the ROSAT Optical X-ray Survey, or ROXS. The survey covers 4.8 square degrees (23 ROSAT PSPC pointings). The cross-correlated cluster catalogs were constructed by comparing two independent catalogs extracted from the optical and X-ray bandpasses, using a matched filter technique for the optical data and a wavelet technique for the X-ray data. We cross-identify cluster candidates in each catalog. In Paper II we present the cluster catalogs and a numerical simulation of ROXS. We also present color-magnitude plots for several cluster candidates, and examine the prominence of the red sequence in each. We find that the X-ray clusters analyzed in this way do not all have a prominent red sequence. 
We conclude that while the red sequence may be distinct for massive, virialized clusters, it may be less so for lower-mass clusters at even moderate redshifts. Multiple, complementary methods of selecting and defining clusters may be essential, particularly at high redshift where all methods run into completeness limits, incomplete understanding of physical evolution, and projection effects. ",Distant Cluster Hunting II: A comparison of X-ray and optical cluster detection techniques and catalogs from the ROX Survey " We report observations of the flickering variability of the recurrent nova RS Oph at quiescence on the basis of simultaneous observations in 5 bands (UBVRI). RS Oph has a flickering source with (U-B)_0=-0.62 \pm 0.07, (B-V)_0=0.15 \pm 0.10, (V-R)_0=0.25 \pm 0.05. We find for the flickering source a temperature T_fl = 9500 \pm 500 K and a luminosity L_fl = 50 - 150 L_sun (using a distance of d=1.6 kpc). We also find that on a (U-B) vs (B-V) diagram the flickering of the symbiotic stars differs from that of the cataclysmic variables. The possible source of the flickering is discussed. The data are available upon request from the authors and on the web www.astro.bas.bg/~rz/RSOph.UBVRI.2010.MNRAS.tar.gz. ",UBVRI observations of the flickering of RS Ophiuchi at Quiescence " Wireless Energy Transfer (WET) is emerging as a potential solution for powering small energy-efficient devices. We propose strategies that use multiple antennas at a power station, which wirelessly charges a large set of single-antenna devices. The proposed strategies operate without Channel State Information (CSI), and we obtain the distribution and main statistics of the harvested energy under Rician fading channels with sensitivity and saturation energy harvesting (EH) impairments. 
A switching antenna strategy, in which a single antenna transmits with full power at a time, provides the most predictable energy source, and it is particularly suitable for powering sensor nodes with highly sensitive EH hardware operating under non-LOS (NLOS) conditions, while other WET schemes perform similarly or better in terms of the average harvested energy. Under NLOS conditions switching antennas is the best strategy, while as the LOS component increases, transmitting simultaneously with equal power in all antennas becomes the most beneficial. Moreover, spatial correlation is not beneficial unless the power station transmits simultaneously through all antennas, raising a trade-off between the average and the variance of the harvested energy, since both metrics increase with the spatial correlation. Finally, the performance gap between CSI-free and CSI-based strategies decreases quickly as the number of devices increases. ",Statistical Analysis of Multiple Antenna Strategies for Wireless Energy Transfer " The orbital magnetic susceptibility of an electron gas in a periodic potential depends not only on the zero-field energy spectrum but also on the geometric structure of the cell-periodic Bloch states, which encodes interband effects. In addition to the Berry curvature, we explicitly relate the orbital susceptibility of two-band models to a quantum metric tensor defining a distance in Hilbert space. Within a simple tight-binding model allowing for a tunable Bloch geometry, we show that interband effects are essential even in the absence of Berry curvature. We also show that for a flat-band model, the quantum metric gives rise to a very strong orbital paramagnetism. ",Geometric orbital susceptibility: quantum metric without Berry curvature " Inspired by ""quantum graphity"" models for spacetime, a statistical model of graphs is proposed to explore possible realizations of emergent manifolds. 
Graphs with given numbers of vertices and edges are considered, governed by a very general Hamiltonian that merely favors graphs with near-constant valency and local rotational symmetry. The ratio of vertices to edges controls the dimensionality of the emergent manifold. The model is simulated numerically in the canonical ensemble for a given vertex-to-edge ratio, where it is found that the low-energy states are almost triangulations of two-dimensional manifolds. The resulting manifold shows topological ""handles"" and surface intersections in a higher embedding space, as well as non-trivial fractal dimension consistent with previous spectral analysis, and nonlocal links consistent with models of disordered locality. The transition to an emergent manifold is first order, and thus dependent on microscopic structure. Issues involved in interpreting nearly-fixed-valency graphs as Feynman diagrams dual to a triangulated manifold, as in matrix models, are discussed. Another interesting phenomenon is that the entropy of the graphs is super-extensive, a fact known since Erd\H{o}s, which results in a transition temperature of zero in the limit of infinite system size: infinite manifolds are always disordered. Aside from a finite universe or diverging coupling constraints as possible solutions to this problem, long-range interactions between vertex defects also resolve the problem and restore a nonzero transition temperature, in a manner similar to that in low-dimensional condensed-matter systems. ",Statistical mechanics of graph models and their implications for emergent spacetime manifolds " We examine over 40,000 International Protection (IP) determinations for non-EEA nationals covering a 16-year period in Ireland. We reconfigure these individual outcomes into a set of over 23,000 matched pairs based on a combination of direct matching and propensity score matching. 
A key feature of this approach is that it replicates the statistical features of an experimental set-up where only observational data are to hand. As a consequence, we are able to identify those explanatory factors that in fact contribute to the grant of IP. This is a key innovation in the analysis of protection outcomes. We centre our study in the realm of International Relations studies on protection. We are particularly interested in whether immigration policy is a latent tool used to influence the odds of a grant of IP, specifically via the introduction of the Immigration Act 2004. Using both conditional maximum likelihood and mixed effects models, we find this is not the case; this conclusion is both novel and profound in a matched-pairs context. On this basis we conclude there can be little justification for the perception that immigration policy is a latent tool affecting the protection process in Ireland. ",A Matched Pairs Analysis of International Protection Outcomes in Ireland " We study phenomenological implications of the ATLAS and CMS hint of a $125\pm 1$ GeV Higgs boson for the singlet, and singlet plus doublet, non-supersymmetric dark matter models, and for the phenomenology of the CMSSM. We show that in scalar dark matter models the vacuum stability bound on the Higgs boson mass is lower than in the standard model, and the 125 GeV Higgs boson is consistent with the models being valid up to the GUT or Planck scale. We perform a detailed study of the full CMSSM parameter space keeping the Higgs boson mass fixed to $125\pm 1$ GeV, and study in detail the freeze-out processes that imply the observed amount of dark matter. After imposing all phenomenological constraints except for the muon $(g-2)_\mu,$ we show that the CMSSM parameter space is divided into well-separated regions with distinctive, but in general heavy, sparticle mass spectra. 
Imposing the $(g-2)_\mu$ constraint introduces severe tension between the high SUSY scale and the experimental measurements -- only the slepton co-annihilation region survives, with potentially testable sparticle masses at the LHC. In the latter case the spin-independent DM-nucleon scattering cross section is predicted to be below the detectable limit of XENON100, but it might be of measurable magnitude in the general case of light dark matter with large bino-higgsino mixing and unobservably large scalar masses. ",Implications of the 125 GeV Higgs boson for scalar dark matter and for the CMSSM phenomenology " Novel mathematical models of three different repressilator topologies are introduced. As designable transcription factors have been shown to bind to DNA non-cooperatively, we have chosen models containing non-cooperative elements. The extended topologies involve three additional transcription regulatory elements---which can be easily implemented by synthetic biology---forming positive feedback loops. This increases the number of variables to six, and extends the complexity of the equations in the model. To perform our analysis we had to use combinations of modern symbolic algorithms of the computer algebra systems Mathematica and Singular. The study shows that all three models have simple dynamics, which can also be called regular behaviour: they have a single asymptotically stable steady state, with small-amplitude oscillations in the 3D case, and no oscillation in one of the 6D cases and damped oscillation in the other. Using the program QeHopf we were able to exclude the presence of Hopf bifurcation in the 3D system. ",On three genetic repressilator topologies We introduce proof of spending in a block-chain system. In this system the probability for a node to create a legal block is proportional to the total amount of coins it has spent in history. 
,Proof of spending in block-chain systems " Based on a nonabelian generalization of electric-magnetic duality, the Dualized Standard Model (DSM) suggests a natural explanation for exactly 3 generations of fermions as the `dual colour' $\widetilde{SU}(3)$ symmetry broken in a particular manner. The resulting scheme then offers on the one hand a fermion mass hierarchy and a perturbative method for calculating the mass and mixing parameters of the Standard Model fermions, and on the other testable predictions for new phenomena ranging from rare meson decays to ultra-high energy cosmic rays. Calculations to 1-loop order give, at the cost of adjusting only 3 real parameters, values for the following quantities all (except one) in very good agreement with experiment: the quark CKM matrix elements $|V_{rs}|$, the lepton CKM matrix elements $|U_{rs}|$, and the second generation masses $m_c, m_s, m_\mu$. This means, in particular, that it gives near maximal mixing $U_{\mu3}$ between $\nu_\mu$ and $\nu_\tau$ as observed by SuperKamiokande, Kamiokande and Soudan, while keeping small the corresponding quark angles $V_{cb}, V_{ts}$. In addition, the scheme gives (i) rough order-of-magnitude estimates for the masses of the lowest generation, (ii) predictions for low energy FCNC effects such as $K_L \to e \mu$, (iii) a possible explanation for the long-standing puzzle of air showers beyond the GZK cut-off. All these together, however, still represent but a portion of the possible physical consequences derivable from the DSM scheme, the majority of which are yet to be explored. ",The Dualized Standard Model and its Applications---an Interim Report We study the emergence of entanglement in a dipolar-coupled nuclear spin-1/2 system cooled using the adiabatic demagnetization technique. 
The unexpected behavior of entanglement for the next- and next-next-neighbor spins is revealed: entangled states of a spin system appear in two distinct temperature and magnetic field regions separated by a zero-concurrence area. The magnetic field dependence of the concurrence can have two maxima, whose positions are determined by the initial conditions and the number of spins in the chain. ,Adiabatic demagnetization and generation of entanglement in spin systems " We analyze the transformation properties of the Faraday law in empty space and its relationship with the Maxwell equations. In our analysis we express the Faraday law via the four-potential of the electromagnetic field and the field of four-velocity, defined on a circuit under its deforming motion. The obtained equations show one more facet of the physical meaning of electromagnetic potentials, where the motional and transformer parts of the flux rule are incorporated into a common phenomenon, reflecting the dependence of the four-potential on the spatial and time coordinates, respectively. It has been explicitly shown that for a filamentary closed deforming circuit the flux rule is Lorentz-invariant. At the same time, analyzing the transformation of e.m.f., we revealed a controversy: due to causal requirements, the e.m.f. should be a quantity of fixed sign, whereas the Lorentz invariance of the flux rule admits cases where the e.m.f. might change its sign for different inertial observers. Possible resolutions of this controversy are discussed. ",The Faraday induction law in relativity theory " Efficiencies of organic solar cells have practically doubled since the development of non-fullerene acceptors (NFAs). However, generic chemical design rules for donor-NFA combinations are still needed. Such rules are proposed by analyzing inhomogeneous electrostatic fields at the donor-acceptor interface. 
It is shown that an acceptor-donor-acceptor molecular architecture, and molecular alignment parallel to the interface, result in energy level bending that destabilizes the charge transfer state, thus promoting its dissociation into free charges. By analyzing a series of PCE10:NFA solar cells, with NFAs including Y6, IEICO, and ITIC, as well as their halogenated derivatives, it is suggested that a molecular quadrupole moment of ca. 75 Debye·Å balances the losses in the open circuit voltage and gains in charge generation efficiency. ",Chemical design rules for non-fullerene acceptors in organic solar cells " Good quantum codes, such as quantum MDS codes, are typically nondegenerate, meaning that errors of small weight require active error-correction, which is--paradoxically--itself prone to errors. Decoherence free subspaces, on the other hand, do not require active error correction, but perform poorly in terms of minimum distance. In this paper, examples of degenerate quantum codes are constructed that have better minimum distance than decoherence free subspaces and allow some errors of small weight that do not require active error correction. In particular, two new families of [[n,1,>= sqrt(n)]]_q degenerate quantum codes are derived from classical duadic codes. ",Remarkable Degenerate Quantum Stabilizer Codes Derived from Duadic Codes " Staged rollout is a strategy of incrementally releasing software updates to portions of the user population in order to accelerate defect discovery without incurring catastrophic outcomes such as system-wide outages. Some past studies have examined how to quantify and automate staged rollout, but stop short of simultaneously considering multiple product or process metrics explicitly. 
This paper demonstrates the potential to automate staged rollout with multi-objective reinforcement learning in order to dynamically balance stakeholder needs such as time to deliver new features and downtime incurred by failures due to latent defects. ",Automating Staged Rollout with Reinforcement Learning " The supernova remnant LMC N132D is a remarkably luminous gamma-ray emitter at $\sim$50 kpc with an age of $\sim$2500 years. It belongs to the small group of oxygen-rich SNRs, which includes Cassiopeia A (Cas A) and Puppis A. N132D is interacting with a nearby molecular cloud. By adding 102 hours of new observations with the High Energy Stereoscopic System (H.E.S.S.) to the previously published data with an exposure time of 150 hours, we achieve a significant detection of N132D at the 5.7$\sigma$ level in the very high energy (VHE) domain. The gamma-ray spectrum is compatible with a single power law extending above 10 TeV. We set a lower limit on an exponential cutoff energy at 8 TeV with 95% CL. The multi-wavelength study supports a hadronic origin of the VHE gamma-ray emission, indicating the presence of sub-PeV cosmic-ray protons. The detection of N132D is remarkable since its TeV luminosity is higher than that of Cas A by more than an order of magnitude. Its luminosity is comparable to, or even exceeds, the luminosity of RX J1713.7-3946 or HESS J1640-465. Moreover, the extended power-law tail in the VHE spectrum of N132D is surprising given both the exponential cutoff at 3.5 TeV in the spectrum of its 340-year-old sibling, Cassiopeia A, and the lack of TeV emission from a Fermi-LAT 2FHL source (E > 50 GeV) associated with Puppis A. We discuss a physical scenario leading to the enhancement of TeV emission via the interaction between N132D and a nearby molecular cloud. ",LMC N132D: a mature supernova remnant with a youthful gamma-ray spectrum " Measuring the similarity of nodes in complex networks has attracted the interest of many researchers. 
In this paper, a new method based on degree centrality and relative entropy is proposed to measure the similarity of nodes in complex networks. The results show that nodes which share a common structural property always have a high similarity to other nodes. Nodes with a high influence on others always have a small similarity to other nodes, and marginal nodes likewise have a low similarity to other nodes. These results show that the proposed method is a useful and reasonable way to measure the similarity of nodes in complex networks. ",Measure the similarity of nodes in the complex networks " A high resolution vacuum ultraviolet (HRVUV) beamline based on a 6.65 meter off-plane Eagle spectrometer is in operation at the Indus-1 synchrotron radiation source, RRCAT, Indore, India. To facilitate position sensitive detection and fast spectral recording, a new BaFBr:Eu2+ phosphor based image plate (IP) detection system interchangeable with the existing photomultiplier (PMT) scanning system has been installed on this beamline. VUV photoabsorption studies on Xe, O2, N2O and SO2 are carried out to evaluate the performance of the IP detection system. An FWHM of ~ 0.5 {\AA} is achieved for the Xe atomic line at 1469.6 {\AA}. Reproducibility of spectra is found to be within the experimental resolution. Compared to the PMT scanning system, the IP shows several advantages in terms of sensitivity, recording time and S/N ratio, which are highlighted in the paper. This is the first report of incorporation of an IP detection system in a VUV beamline using synchrotron radiation. Commissioning of the new detection system is expected to greatly enhance the utilization of the HRVUV beamline as a number of spectroscopic experiments which require fast recording times combined with a good signal to noise ratio are now feasible. 
",Photostimulated phosphor based image plate detection system for HRVUV beamline at Indus-1 synchrotron radiation source " This note explains consequences of recent work of Frank Quinn for computations of Nil groups in algebraic K-theory, in particular the Nil groups occurring in the K-theory of polynomial rings, Laurent polynomial rings, and the group ring of the infinite dihedral group. ",Some remarks on Nil groups in algebraic K-theory " We investigate the design aspects of feature distillation methods achieving network compression and propose a novel feature distillation method in which the distillation loss is designed to create a synergy among various aspects: teacher transform, student transform, distillation feature position and distance function. Our proposed distillation loss includes a feature transform with a newly designed margin ReLU, a new distillation feature position, and a partial L2 distance function to skip redundant information that has adverse effects on the compression of the student. On ImageNet, our proposed method achieves a top-1 error of 21.65% with ResNet50, which outperforms the performance of the teacher network, ResNet152. Our proposed method is evaluated on various tasks such as image classification, object detection and semantic segmentation and achieves a significant performance improvement in all tasks. The code is available at https://sites.google.com/view/byeongho-heo/overhaul ",A Comprehensive Overhaul of Feature Distillation " We investigate the connection between quantum resources and extractable work in quantum batteries. We demonstrate that quantum coherence in the battery or the battery-charger entanglement is a necessary resource for generating nonzero extractable work during the charging process. 
At the end of the charging process, we also establish a tight link of coherence and entanglement with the final extractable work: coherence naturally promotes the coherent work, while coherence and entanglement inhibit the incoherent work. We also show that obtaining maximally coherent work is faster than obtaining maximally incoherent work. Examples ranging from the central-spin battery and the Tavis-Cummings battery to the spin-chain battery are given to illustrate these results. ","Entanglement, Coherence, and Extractable Work in Quantum Batteries" " The weighting of critical-point samples in the weighted randomized maximum likelihood method depends on the magnitude of the data mismatch at the critical points and on the Jacobian of the transformation from the prior density to the proposal density. When standard iterative ensemble smoothers are applied for data assimilation, the Jacobian is identical for all samples. If a hybrid data assimilation method is applied, however, there is the possibility that each ensemble member can have a distinct Jacobian and that the posterior density can be multimodal. In order to apply a hybrid-method iterative ensemble smoother, it is necessary that a part of the transformation from the prior Gaussian random variable to the data be analytic. Examples might include an analytic transformation from a latent Gaussian random variable to permeability followed by a black-box transformation from permeability to state variables in porous media flow, or a Gaussian hierarchical model for variables followed by a similar black-box transformation from permeability to state variables. In this paper, we investigate the application of weighting to both types of examples. 
",Weighted RML using ensemble-methods for data assimilation " We establish the equations which translate a conservation law for the problem of the seismic response of an above-ground structure (e.g., building, hill or mountain) of arbitrary shape and inquire whether both the implicit (formal) and explicit (numerical) solutions for the response obey this law for the case of a cylindrical, rectangular protuberance. Both the low-order solutions (poor approximations of the response) and the higher-order ones (supposedly better approximations) turn out to satisfy the conservation-of-flux relation, which means that the satisfaction of this relation is a necessary, but not sufficient, means for determining whether a solution to the scattering problem is valid. ",A conservation law for testing methods of prediction of the seismic wave response of a protuberance emerging from flat ground We show that a large family of groups without non-abelian free subgroups satisfies the following strengthening of non-amenability: they each have a rich supply of irreducible representations defining exotic C*-algebras. The construction is explicit. ,A family of exotic group C*-algebras " The exact fermion master equation previously obtained in [Phys. Rev. B \textbf{78}, 235311 (2008); New J. Phys. \textbf{12}, 083013 (2010)] describes the dynamics of quantum states of a principal system of fermionic particles under the influence of external fermion reservoirs (e.g. nanoelectronic systems). Here, we present the general solution to this exact fermion master equation. The solution is analytically expressed in the most intuitive particle number representation. It is applicable to an arbitrary number of orbitals in the principal system prepared at arbitrary initial states. We demonstrate the usefulness of such a general solution with the transient dynamics of nanostructured artificial molecules. 
We show how various initial states can lead to distinct transient dynamics, manifesting a multitude of underlying transition pathways. ",General analytical solution to exact fermion master equation " Emerging contaminants (ECs) have emerged as the latest class of environmental contaminants, which are highly recalcitrant and toxic in nature. Currently, no suitable remediation methods are available for ECs, resulting in a continuous increase in their concentration. Non-thermal plasma, as an advanced oxidation process, has been emerging as a promising technology for EC treatment. In the present work, a detailed experimental study is carried out to evaluate the efficacy of a non-thermal plasma jet with two dyes, Rhodamine B and Methylene Blue, as model contaminants. The plasma jet provided complete dye decoloration in 30 min with an applied voltage of 6.5 kV. The OH radical, having the highest oxidation potential, acts as the main reactive species; besides acting directly on contaminants, it also acts indirectly after conversion into H2O2 and O3. Further, the effect of critical operational parameters, viz., sample pH, applied voltage (4.5-6.5 kV), conductivity (5-20 mS cm-1), and sample distance, on plasma treatment efficacy was also examined. Of all the assessed parameters, the applied voltage and sample conductivity were found to be the most significant operating parameters. A high voltage and low conductivity were found to favor dye decoloration, while the pH effect was not that significant. To understand the influence of the plasma discharge gas on treatment efficacy, all the experiments were conducted with Argon and Helium gases under a fixed geometrical configuration. Both gases provided a similar dye decoloration efficiency. The DBD plasma system, besides complete dye removal, also rendered maximum mineralization of 73 % for Rd. B and 60 % for Met. Blue. 
Finally, the system's efficiency against actual ECs (four pharmaceutical compounds, viz., metformin, atenolol, acetaminophen, and ranitidine) and a microbial contaminant (Escherichia coli) was also tested. ",Development and optimization of low power non-thermal plasma jet operational parameters for treating dyes and emerging contaminants " We show that for every closed Riemannian manifold there exists a continuous family of $1$-cycles (defined as finite collections of disjoint closed curves) parametrized by a sphere and sweeping out the whole manifold so that the lengths of all connected closed curves are bounded in terms of the volume (or the diameter) and the dimension $n$ of the manifold, when $n \geq 3$. An alternative form of this result involves a modification of Gromov's definition of waist of sweepouts, where the space of parameters can be any finite polyhedron (and not necessarily a pseudomanifold). We demonstrate that the so-defined polyhedral $1$-dimensional waist of a closed Riemannian manifold is equal to its filling radius up to at most a constant factor. We also establish upper bounds for the polyhedral $1$-waist of some homology classes in terms of the volume or the diameter of the ambient manifold. In addition, we provide generalizations of these results for sweepouts by polyhedra of higher dimension using the homological filling functions. Finally, we demonstrate that the filling radius and the hypersphericity of a closed Riemannian manifold can be arbitrarily far apart. ",Sweepouts of closed Riemannian manifolds " We demonstrate phase control of magnons in the van der Waals antiferromagnet NiPS$_3$ using optical excitation by polarized light. The sign of the coherent precession of spin amplitude changes upon (1) reversing the helicity of a circularly polarized pump beam, or (2) rotating the polarization of a linearly polarized pump by $\pi/2$. 
Because these two excitation pathways have comparable generation efficiency, the phase of spin precession can be continuously tuned from 0 to $2\pi$ by controlling the polarization state of the pump pulse. The ability to excite magnons with a desired phase has potential applications in the design of a spin-wave phased array and ultrafast spin information processing. ",Phase control of magnons in the van der Waals antiferromagnet NiPS$_3$ " We report the first detection of inverse Compton X-ray emission, spatially correlated with a very steep spectrum radio source (VSSRS), 0038-096, without any detected optical counterpart, in cluster Abell 85. The ROSAT PSPC data and its multiscale wavelet analysis reveal a large scale (linear diameter of the order of 500 h^{-1}_{50} kpc), diffuse X-ray component, in excess of the thermal bremsstrahlung, overlapping an equally large scale VSSRS. The primeval 3 K background photons, scattering off the relativistic electrons, can produce the X-rays at the detected level. The inverse Compton flux is estimated to be (6.5\pm 0.5) \times 10^{-13} erg s^{-1} cm^{-2} in the 0.5-2.4 keV X-ray band. A new 327 MHz radio map is presented for the cluster field. The synchrotron emission flux is estimated to be (6.6\pm 0.90) \times 10^{-14} erg s^{-1} cm^{-2} in the 10-100 MHz radio band. The positive detection of both radio and X-ray emission from a common ensemble of relativistic electrons leads to an estimate of (0.95\pm 0.10) \times 10^{-6} G for the cluster-scale magnetic field strength. The estimated field is independent of the `equipartition' conjecture, the distance, and the emission volume. Further, the radiative fluxes and the estimated magnetic field imply the presence of `relic' (radiative lifetime > 10^{9} yr) relativistic electrons with Lorentz factors \gamma \approx 700-1700, which would be a significant source of radio emission in the hitherto unexplored frequency range \nu \approx 2-10 MHz. 
",The diffuse, relic radio source in Abell 85: estimation of cluster scale magnetic field from inverse Compton X-rays" " By gradually changing the degree of the anisotropy in an XXZ chain we study the defect formation in a quantum system that crosses an extended critical region. We discuss two qualitatively different cases of quenches, from the antiferromagnetic to the ferromagnetic phase and from the critical to the antiferromagnetic phase. By means of time-dependent DMRG simulations, we calculate the residual energy at the end of the quench as a characteristic quantity gauging the loss of adiabaticity. We find the dynamical scalings of the residual energy for both types of quenches, and compare them with the predictions of the Kibble-Zurek and Landau-Zener theories. ",Adiabatic quenches through an extended quantum critical region " This paper addresses the performance of systems which use commercial wireless devices to make bistatic RF channel measurements for non-contact respiration sensing. Published research has typically presented results from short controlled experiments on one system. In this paper, we deploy an extensive real-world comparative human subject study. We observe twenty patients during their overnight sleep (a total of 160 hours), during which contact sensors record ground-truth breathing data, patient position is recorded, and four different RF breathing monitoring systems simultaneously record measurements. We evaluate published methods and algorithms. We find that WiFi channel state information measurements provide the most robust respiratory rate estimates of the four RF systems tested. However, all four RF systems have periods during which RF-based breathing estimates are not reliable. 
",Comparing Respiratory Monitoring Performance of Commercial Wireless Devices " Modelling the growth histories of specific galaxies often involves generating the entire population of objects that arise in a given cosmology and selecting systems with appropriate properties. This approach is highly inefficient when targeting rare systems such as the extremely luminous high-redshift galaxy candidates detected by JWST. Here, we present a novel framework for generating merger trees with branches that are guaranteed to achieve a desired halo mass at a chosen redshift. This method augments extended Press Schechter theory solutions with constrained random processes known as Brownian bridges and is implemented in the open-source semi-analytic model $\texttt{Galacticus}$. We generate ensembles of constrained merger trees to predict the growth histories of seven high-redshift JWST galaxy candidates, finding that these systems most likely merge $\approx 2~\mathrm{Gyr}$ after the observation epoch and occupy haloes of mass $\gtrsim 10^{14}~M_{\mathrm{\odot}}$ today. These calculations are thousands of times more efficient than existing methods, are analytically controlled, and provide physical insights into the evolution of haloes with rapid early growth. Our constrained merger tree implementation is publicly available at http://github.com/galacticusorg/galacticus. ",Growing the First Galaxies' Merger Trees " We prove the existence, for each Hilbert space, of two new quasi hidden variable (qHV) models, statistically noncontextual and context-invariant, reproducing all the von Neumann joint probabilities via nonnegative values of real-valued measures and all the quantum product expectations -- via the qHV (classical-like) average of the product of the corresponding random variables. 
In a context-invariant model, a quantum observable X can be represented by a variety of random variables satisfying the functional condition required in quantum foundations, but each of these random variables equivalently models X under all joint von Neumann measurements, regardless of their contexts. The proven existence of this model negates the general opinion that, in terms of random variables, the Hilbert space description of all the joint von Neumann measurements for dim H > 2 can be reproduced only contextually. The existence of a statistically noncontextual qHV model, in particular, implies that every N-partite quantum state admits a local quasi hidden variable (LqHV) model introduced in [Loubenets, J. Math. Phys. 53, 022201 (2012)]. The new results of the present paper also point to the generality of the quasi-classical probability model proposed in [Loubenets, J. Phys. A: Math. Theor. 45, 185306 (2012)]. ",Context-invariant quasi hidden variable (qHV) modelling of all joint von Neumann measurements for an arbitrary Hilbert space " We present a radio-quiet quasar at z=0.237 discovered ""turning on"" by the intermediate Palomar Transient Factory (iPTF). The transient, iPTF 16bco, was detected by iPTF in the nucleus of a galaxy with an archival SDSS spectrum with weak narrow-line emission characteristic of a low-ionization emission line region (LINER). Our follow-up spectra show the dramatic appearance of broad Balmer lines and a power-law continuum characteristic of a luminous (L_bol~10^45 erg/s) type 1 quasar 12 years later. Our photometric monitoring with PTF from 2009-2012, and serendipitous X-ray observations from the XMM-Newton Slew Survey in 2011 and 2015, constrain the change of state to have occurred less than 500 days before the iPTF detection. An enhanced broad Halpha to [OIII]5007 line ratio in the type 1 state relative to other changing-look quasars is also suggestive of the most rapid change of state yet observed in a quasar. 
We argue that the >10-fold increase in Eddington ratio inferred from the brightening in UV and X-ray continuum flux is more likely due to an intrinsic change in the accretion rate of a pre-existing accretion disk than to an external mechanism such as variable obscuration, microlensing, or the tidal disruption of a star. However, further monitoring will be helpful in better constraining the mechanism driving this change of state. The rapid ""turn on"" of the quasar is much shorter than the viscous infall timescale of an accretion disk, and requires a disk instability that can develop around a ~10^8 M_sun black hole on timescales less than a year. ","iPTF Discovery of the Rapid ""Turn On"" of a Luminous Quasar" " We study, from the numerical point of view, instabilities developed in a fluid layer with a free surface, in a cylindrical container which is non-homogeneously heated from below. In particular we consider the case in which the applied heat is localized around the origin. An axisymmetric basic state appears as soon as a non-zero horizontal temperature gradient is imposed. The basic state may bifurcate to different solutions depending on vertical and lateral temperature gradients and on the shape of the heating function. We find different kinds of instabilities: extended patterns growing on the whole domain, which include those known as target and spiral waves. Spirals are present even for infinite Prandtl number. Localized structures both at the origin and at the outer part of the cylinder may appear either as Hopf or stationary bifurcations. An overview of the developed instabilities as a function of the dimensionless parameters is reported in this article. ",Spiral Instabilities in Rayleigh-B\'enard Convection under Localized Heating " Media bias and its extreme form, fake news, can decisively affect public opinion. Especially when reporting on policy issues, slanted news coverage may strongly influence societal decisions, e.g., in democratic elections. 
Our paper makes three contributions to address this issue. First, we present a system for bias identification, which combines state-of-the-art methods from natural language understanding. Second, we devise bias-sensitive visualizations to communicate bias in news articles to non-expert news consumers. Third, our main contribution is a large-scale user study that measures bias-awareness in a setting that approximates daily news consumption, e.g., we present respondents with a news overview and individual articles. We not only measure the visualizations' effect on respondents' bias-awareness, but we can also pinpoint the effects on individual components of the visualizations by employing a conjoint design. Our bias-sensitive overviews strongly and significantly increase bias-awareness in respondents. Our study further suggests that our content-driven identification method detects groups of similarly slanted news articles due to substantial biases present in individual news articles. In contrast, the reviewed prior work rather only facilitates the visibility of biases, e.g., by distinguishing left- and right-wing outlets. ",Newsalyze: Effective Communication of Person-Targeting Biases in News Articles " This work was developed aiming to apply statistical techniques to the field of Music Emotion Recognition, a well-recognized area within the Signal Processing world, but hardly explored from the statistical point of view. Here, we opened several possibilities within the field, applying modern Bayesian Statistics techniques and developing efficient algorithms, focusing on the applicability of the results obtained. Although the motivation for this project was the development of an emotion-based music recommendation system, its main contribution is a highly adaptable multivariate model that can be useful for interpreting any database where there is an interest in applying regularization in an efficient manner. 
Broadly speaking, we will explore what role a sound theoretical statistical analysis can play in the modeling of an algorithm that is able to understand a well-known database, and what can be gained with this kind of approach. ",Use of Variational Inference in Music Emotion Recognition " We obtain large deviations estimates for the self-intersection local times for a symmetric random walk in dimension 3. Also, we show that the main contribution to making the self-intersection large, in a time period of length $n$, comes from sites visited less than some power of $\log(n)$. This is opposite to the situation in dimensions larger than or equal to 5. Finally, we present two applications of our estimates: (i) to moderate deviations estimates for the range of a random walk, and (ii) to moderate deviations for random walk in random sceneries. ",Large deviations estimates for self-intersection local times for simple random walk in $\Z^3$ " We study the estimation of a high dimensional approximate factor model in the presence of both cross sectional dependence and heteroskedasticity. The classical method of principal components analysis (PCA) does not efficiently estimate the factor loadings or common factors because it essentially treats the idiosyncratic error as homoskedastic and cross sectionally uncorrelated. For efficient estimation it is essential to estimate a large error covariance matrix. We assume the model to be conditionally sparse, and propose two approaches to estimating the common factors and factor loadings; both are based on maximizing a Gaussian quasi-likelihood and involve regularizing a large sparse covariance matrix. In the first approach the factor loadings and the error covariance are estimated separately while in the second approach they are estimated jointly. Extensive asymptotic analysis has been carried out. In particular, we develop the inferential theory for the two-step estimation. 
Because the proposed approaches take into account the large error covariance matrix, they produce more efficient estimators than the classical PCA methods or methods based on a strict factor model. ",Efficient Estimation of Approximate Factor Models via Regularized Maximum Likelihood " Fibrosis is a common lesion in different pathologic diseases and is defined by the excessive accumulation of collagen. Different approaches have been used to treat different conditions characterized by fibrosis. The FDA and EMA approved collagenase to treat palmar fibromatosis (Dupuytren disease). The EMA additionally approved its use in severe Peyronie disease, but it has been used off label in other conditions. The approved treatment includes up to 3 injections in palmar fibromatosis, or up to 8 in penile fibromatosis, followed by finger extension or penile modelling procedures, typically causing severe pain. Frequently, single injections are enough to treat palmar fibromatosis. The need to repeatedly inject doses of this enzyme can be attributed to the labile nature of collagenase, which exhibits a complete activity loss after short periods of time. Herein, a novel strategy to manage this enzyme is presented, based on the synthesis of polymeric nanocapsules which contain collagenase housed within their matrix. These nanocapsules have been engineered to achieve a gradual release of the encapsulated enzyme over longer times, up to ten days. The efficacy of these nanocapsules has been tested in a murine model of local dermal fibrosis, yielding higher fibrosis reduction in comparison with the injection of free enzyme, which represents a significant improvement over conventional therapy. ",Collagenase Nanocapsules: An Approach to Fibrosis Treatment " The exceptional mechanical strengths of medium- and high-entropy alloys have been attributed to hardening in random solid solutions. Here, we evidence non-random chemical mixing in CrCoNi alloys, resulting from short-range ordering. 
A novel data-mining approach to electron nanodiffraction patterns enabled the study, assisted by neutron scattering, atom probe tomography, and diffraction simulation using first-principles theory models. Results reveal two critical types of short-range order in nanoclusters that minimize the number of Cr-Cr nearest neighbors (L11) or segregate Cr on alternating close-packed planes (L12). The makeup of ordering-strengthened nanoclusters can be tuned by heat treatments to affect deformation mechanisms. These findings uncover a mixture of bonding preferences and their control at the nanoscopic scale in CrCoNi and provide general opportunities for atomistic-structure studies in concentrated alloys for the design of strong and ductile materials. ",Chemical Short-Range Ordering in a CrCoNi Medium-Entropy Alloy " We study mean-field phases of the t-J model with long-range Coulomb interaction. In order of increasing doping density, we find a classical antiferromagnet, charge and spin stripes, and a uniform d-wave superconductor at realistic doping parameters. Both in-phase and anti-phase stripes exist as metastable configurations, but the in-phase stripes have a slightly lower energy. The dependence of the stripe width and the inter-stripe spacing on the doping is examined. Effects of fluctuations around the mean-field states are discussed. ","Antiferromagnetism, Stripes, and Superconductivity in the t-J Model with Coulomb Interaction" " The use of information and communication technology in our day-to-day activities is now unavoidable. In tourism development, destination information and management systems are used to guide visitors and provide information to both visitors and the management of tour sites. In this paper, an information and navigation system was designed for tourists, taking some tourism destinations in Niger State, Nigeria into account.
The information management system was designed using Java Applet (NetBeans IDE 6.1), Hypertext Markup Language (HTML), Personal Home Page (PHP), JavaScript and MySQL as the back-end integration database. Two different MySQL servers, the MySQL Query Browser and the WAMP5 server, were used to compare the effectiveness of the developed system. ",Destination Information Management System for Tourist " There is a proliferation in the number of satellites launched each year, resulting in the downlinking of terabytes of data each day. The data received by ground stations is often unprocessed, making this an expensive process considering the large data sizes and the fact that not all of the data is useful. This, coupled with the increasing demand for real-time data processing, has led to a growing need for on-orbit processing solutions. In this work, we investigate the performance of CNN-based object detectors on constrained devices by applying different image compression techniques to satellite data. We examine the capabilities of the NVIDIA Jetson Nano and NVIDIA Jetson AGX Xavier; low-power, high-performance computers, with integrated GPUs, small enough to fit on-board a nanosatellite. We take a closer look at object detection networks, including the Single Shot MultiBox Detector (SSD) and Region-based Fully Convolutional Network (R-FCN) models that are pre-trained on DOTA - a Large Scale Dataset for Object Detection in Aerial Images. The performance is measured in terms of execution time, memory consumption, and accuracy, and is compared against a baseline containing a server with two powerful GPUs. The results show that by applying image compression techniques, we are able to improve the execution time and memory consumption, achieving a fully runnable dataset. A lossless compression technique achieves roughly a 10% reduction in execution time and about a 3% reduction in memory consumption, with no impact on the accuracy.
A lossy compression technique improves the execution time by up to 144% and reduces the memory consumption by as much as 97%; however, it has a significant impact on accuracy, which varies depending on the compression ratio. Thus, the application and ratio of these compression techniques may differ depending on the required level of accuracy for a particular task. ",Optimizing Data Processing in Space for Object Detection in Satellite Imagery " There is currently a debate within the neuroscience community over the likelihood of the brain performing backpropagation (BP). To better mimic the brain, training a network $\textit{one layer at a time}$ with only a ""single forward pass"" has been proposed as an alternative to bypass BP; we refer to these networks as ""layer-wise"" networks. We continue the work on layer-wise networks by answering two outstanding questions. First, $\textit{do they have a closed-form solution?}$ Second, $\textit{how do we know when to stop adding more layers?}$ This work proves that the kernel Mean Embedding is the closed-form weight that achieves the network global optimum while driving these networks to converge towards a highly desirable kernel for classification; we call it the $\textit{Neural Indicator Kernel}$. ",Deep Layer-wise Networks Have Closed-Form Weights " The evolutionary path of rotating CO WDs directly accreting CO-rich matter is followed up to a few seconds before the explosive breakout in the framework of the Double Degenerate rotationally-driven accretion scenario. We find that the evolutionary properties depend only on the actual mass of the accreting WD and not on the previous history. We determine the expected frequency and amplitude of the gravitational wave emission, which occurs during the mass transfer process and acts as a self-tuning mechanism of the accretion process itself.
The gravitational signal related to Galactic sources can be easily detected with the next generation of space-borne interferometers and can provide notable constraints on the progenitor model. The expected statistical distribution of pre-explosive objects in the Galaxy is also provided in the effective temperature-apparent bolometric magnitude diagram, which can be used to identify merged DD systems via UV surveys. We emphasize that the thermonuclear explosion occurs owing to the decay of the physical conditions that keep the structure stable above the classical Chandrasekhar limit, and not by a steady increase of the WD mass up to this limit. This conclusion is independent of the evolutionary scenario for the progenitors, but is a direct consequence of the stabilizing effect of rotation. Such an occurrence represents an epistemological change of perspective in defining the ignition process in accreting WDs. Moreover, a long evolutionary period (several million years) is required to attain the explosion after the above-mentioned conditions cease to keep the WD stable. Therefore it is practically impossible to detect the trace of the exploding WD companion in recent pre-explosion frames of even very nearby SNe Ia. ",Pre-Explosive Observational Properties of Type Ia Supernovae " The Random Resistor cum Tunneling Network (RRTN) model was proposed by our group by adding an extra phenomenological (semi-classical) tunneling process to a classical RRN bond percolation model. We earlier reported two early-stage inverse power laws, followed by a purely exponential tail at large times, in some of the RRTN macroscopic current relaxations. In this paper, we investigate the broader perspective of current relaxation. We present an analytical argument for the strong convergence (irrespective of the initial voltage configuration) of the bulk current towards its steady state, mapping the problem onto a special kind of Gauss-Seidel method.
We find two phenomenological time scales (referred to as $\tau_t$ and $\tau_s$), which emerge from the variation of macroscopic quantities during the current dynamics. We show that only one of them is independent. Thus there exists a {\it single} time scale which controls the entire dynamics. ",Estimate of time-scale for the current relaxation of percolative Random Resistor cum Tunneling Network model The $2$-fold Bailey lemma is a special case of the $s$-fold Bailey lemma introduced by Andrews in 2000. We examine this special case and its applications to partitions and recently discovered $q$-series identities. Our work provides a general comparison of the utility of the $2$-fold Bailey lemma and the more widely applied $1$-fold Bailey lemma. We also offer a discussion of the $spt_M(n)$ function and related identities. ,Some implications of the $2$-fold Bailey lemma " Graph Neural Networks (GNNs) for prediction tasks like node classification or edge prediction have received increasing attention in recent machine learning research on graph-structured data. However, a large quantity of labeled graphs is difficult to obtain, which significantly limits the true success of GNNs. Although active learning has been widely studied for addressing label-sparse issues with other data types like text, images, etc., how to make it effective over graphs is an open question for research. In this paper, we present an investigation on active learning with GNNs for node classification tasks. Specifically, we propose a new method, which uses node feature propagation followed by K-Medoids clustering of the nodes for instance selection in active learning. With a theoretical bound analysis we justify the design choice of our approach. In our experiments on four benchmark datasets, the proposed method outperforms other representative baseline methods consistently and significantly.
",Active Learning for Graph Neural Networks via Node Feature Propagation " Due to the importance of reducing solution time in numerical codes, we propose an algorithm for a parallel LU decomposition solver for dense and sparse matrices on the GPU. This algorithm is based on first bi-vectorizing the triangular matrices of the decomposed coefficient matrix and then equalizing the vectors. In this way we improve the performance of LU decomposition through an equally distributed scheme over threads. This algorithm is also suitable for other parallelism methods and multiple devices. Several test cases show the advantage of this method over other familiar methods. ",Equal bi-Vectorized (EBV) method to high performance on GPU " We analyze the role of maximally aligned isoscalar pairs in heavy $N=Z$ nuclei by employing a formalism of quartets. Quartets are superpositions of two neutrons and two protons coupled to total isospin $T=0$ and given $J$. The study is focused on the contribution of spin-aligned pairs carrying the angular momentum $J=9$ to the structure of $^{96}$Cd and $^{92}$Pd. We show that the role played by the $J=9$ pairs is quite sensitive to the model space and, in particular, it decreases considerably by passing from the simple $0g_{9/2}$ space to the more complete $1p_{1/2}$,$1p_{3/2}$,$0f_{5/2}$,$0g_{9/2}$ space. In the latter case the description of these nuclei in terms of only spin-aligned $J=9$ pairs turns out to be unsatisfactory while an important contribution, particularly in the ground state, is seen to arise from isovector $J=0$ and isoscalar $J=1$ pairs. Thus, contrary to previous studies, we find no compelling evidence of a spin-aligned pairing phase in $^{92}$Pd. ",Quarteting and spin-aligned proton-neutron pairs in heavy N=Z nuclei " We study a holographic model which exhibits a quantum phase transition from the strongly interacting Weyl semimetal phase to an insulating phase.
In the holographic insulating phase there is a hard gap in the real part of the frequency-dependent diagonal conductivities. However, the anomalous Hall conductivity is nonzero at zero frequency, indicating that it is a Chern insulator. This holographic quantum phase transition is always of first order, signified by a discontinuous anomalous Hall conductivity at the phase transition, in contrast to the continuous holographic Weyl semimetal/trivial semimetal phase transition. Our work reveals the novel phase structure of strongly interacting Weyl semimetals. ",Weyl semimetal/insulator transition from holography " This paper introduces the first provably accurate algorithms for differentially private, top-down decision tree learning in the distributed setting (Balcan et al., 2012). We propose DP-TopDown, a general privacy-preserving decision tree learning algorithm, and present two distributed implementations. Our first method, NoisyCounts, naturally extends the single-machine algorithm by using the Laplace mechanism. Our second method, LocalRNM, significantly reduces communication and added noise by performing local optimization at each data holder. We provide the first utility guarantees for differentially private top-down decision tree learning in both the single-machine and distributed settings. These guarantees show that the error of the privately-learned decision tree quickly goes to zero provided that the dataset is sufficiently large. Our extensive experiments on real datasets illustrate the trade-offs of privacy, accuracy and generalization when learning private decision trees in the distributed setting. ",Scalable and Provably Accurate Algorithms for Differentially Private Distributed Decision Tree Learning " The q-state Potts model in two dimensions exhibits a first-order transition for q>4. As q->4+ the correlation length at this transition diverges.
We argue that this limit defines a massive integrable quantum field theory whose lowest excitations are kinks connecting 4+1 degenerate ground states. We construct the S-matrix of this theory and the two-particle form factors, and hence estimate a number of universal amplitude ratios. These are in very good agreement with the results of extrapolated series in q^(-1/2) as well as Monte Carlo results for q=5. ",The Field Theory of the q->4+ Potts Model " We characterize the statistical properties of a large number of online auctions run on eBay. Both stationary and dynamic properties, like distributions of prices, number of bids etc., as well as relations between these quantities are studied. The analysis of the data reveals surprisingly simple distributions and relations, typically of power-law form. Based on these findings we introduce a simple method to identify suspicious auctions that could be influenced by a form of fraud known as shill bidding. Furthermore the influence of bidding strategies is discussed. The results indicate that the observed behavior is related to a mixture of agents using a variety of strategies. ",Statistical properties of online auctions " Let $S$ be a subsemigroup with nonempty interior of a complex simple Lie group $G$. It is proved that $S=G$ if $S$ contains a subgroup $G(\alpha) \approx \mathrm{Sl}(2,\mathbb{C})$ generated by the $\exp \mathfrak{g}_{\pm \alpha}$, where $\mathfrak{g}_{\alpha}$ is the root space of the root $\alpha$. The proof uses the fact, proved before, that the invariant control set of $S$ is contractible in some flag manifold if $S$ is proper, and exploits the fact that several orbits of $G(\alpha)$ are 2-spheres that are not null-homotopic. The result is applied to revisit a controllability theorem and obtain some improvements.
",Controllability of control systems on simple Lie groups and the topology of flag manifolds " We prove the convergence of an incremental projection numerical scheme for the time-dependent incompressible Navier--Stokes equations, without any regularity assumption on the weak solution. The velocity and the pressure are discretised in conforming spaces, whose compatibility is ensured by the existence of an interpolator for regular functions which preserves approximate divergence-free properties. Owing to a priori estimates, we obtain the existence and uniqueness of the discrete approximation. Compactness properties are then proved, relying on a Lions-like lemma for time translate estimates. It is then possible to show the convergence of the approximate solution to a weak solution of the problem. The construction of the interpolator is detailed in the case of the lowest-degree Taylor-Hood finite element. ",Convergence of the incremental projection method using conforming approximations " We investigate implications of quark-hadron duality for hybrid mesons in the large-Nc limit. A simple formalism is developed which implements duality for QCD two-point functions of currents of quark bilinears, with any number of gluons. We argue that the large-Nc meson masses share a common parameter, which is related to the QCD string tension. This parameter is fixed from correlators of conserved vector and axial-vector currents, and using lattice QCD determinations of the string tension. Our results predict towers of hybrid mesons which, within expected 1/Nc corrections, naturally accommodate the 1^(-+) experimental hybrid candidates. ",Quark-Hadron Duality for Hybrid Mesons at Large-Nc " We present high resolution (~0.1""), very high Strehl ratio (0.97+-0.03) mid-infrared (IR) adaptive optics (AO) images of the AGB star RV Boo utilizing the MMT adaptive secondary AO system.
RV Boo was observed at a number of wavelengths over two epochs (9.8 um in May 2003; 8.8, 9.8 and 11.7 um in February 2004) and appeared slightly extended at all wavelengths. While the extension is very slight at 8.8 and 11.7 um, it is somewhat more pronounced at 9.8 um. With such high Strehls, we can achieve super-resolution of 0.1"" by deconvolving RV Boo with a point-spread function (PSF) derived from an unresolved star. We tentatively resolve RV Boo into a 0.16"" FWHM extension at a position angle of 120 degrees. At a distance of 390(+250)(-100) pc, this corresponds to a FWHM of 60(+40)(-15) AU. We measure a total flux at 9.8 um of 145+-24 Jy for the disk and star. Based on a dust thermal emission model for the observed IR spectral energy distribution and the 9.8 um AO image, we derive a disk dust mass of 1.6x10^-6 Msun and an inclination of 30 to 45 degrees from edge-on. We discuss whether the dust disk observed around RV Boo is an example of the early stages in the formation of asymmetric structure in planetary nebulae. ",High Resolution Mid - Infrared Imaging of the AGB Star RV Boo with the Steward Observatory Adaptive Optics System " We develop a topological method of measuring Chern-Simons number change in the real time evolution of classical lattice SU(2) and SU(2) Higgs theory. We find that the Chern-Simons number diffusion rate per physical 4-volume is very heavily suppressed in the broken phase, and that it decreases with lattice spacing in pure Yang-Mills theory, although not as quickly as predicted by Arnold, Son, and Yaffe. ",Lattice Chern-Simons Number Without Ultraviolet Problems " We study the non-autonomous variational problem: \begin{equation*} \inf_{(\phi,\theta)} \bigg\{\int_0^1 \bigg(\frac{k}{2}\phi'^2 + \frac{(\phi-\theta)^2}{2}-V(x,\theta)\bigg)\text{d}x\bigg\} \end{equation*} where $k>0$, $V$ is a bounded continuous function, $(\phi,\theta)\in H^1([0,1])\times L^2([0,1])$ and $\phi(0)=0$ in the sense of traces.
The peculiarity of the problem is its setting in the product of spaces of different regularity order. Problems with this form arise in elastostatics, when studying the equilibria of a nonlinear Timoshenko beam under distributed load, and in classical dynamics of coupled particles in time-dependent external fields. We prove the existence and qualitative properties of global minimizers and study, under additional assumptions on $V$, the existence and regularity of local minimizers. ",A non-autonomous variational problem describing a nonlinear Timoshenko beam " Advanced instruments in a variety of scientific domains are collecting massive amounts of data that must be post-processed and organized to support scientific research activities. Astronomers have been pioneers in the use of databases to host highly structured repositories of sky survey data. As more powerful telescopes come online, the increased volume and complexity of the data collected pose enormous challenges to state-of-the-art database systems and data-loading techniques. When the data source is an instrument taking ongoing samples, the database loading must, at a minimum, keep up with the data-acquisition rate. These challenges are being faced not only by the astronomy community, but also by other scientific disciplines interested in building scalable databases to house multi-terabyte archives of complex structured data. In this paper we present SkyLoader, our novel framework for fast and scalable data loading that is being used to populate a multi-table, multi-terabyte database repository for the Palomar-Quest sky survey. Our framework consists of an efficient algorithm for bulk loading, an effective data structure to support data integrity and proper error handling during the loading process, support for optimized parallelism that matches the number of concurrent loaders with the database host capabilities, and guidelines for database and system tuning.
Performance studies showing the positive effects of the adopted strategies are also presented. Our parallel bulk loading with array buffering technique has made fast population of a multi-terabyte repository a reality, reducing the loading time for a 40-gigabyte data set from more than 20 hours to less than 3 hours. We believe our framework offers a promising approach for loading other large and complex scientific databases. ",Optimized Data Loading for a Multi-Terabyte Sky Survey Repository " The continuum of the $^{10}$He nucleus is studied theoretically in a three-body $^{8}$He+$n$+$n$ model, based on recent information concerning the $^9$He spectrum [Golovkov, \textit{et al.}, Phys. Rev. C \textbf{76}, 021605(R) (2007)]. The $^{10}$He ground state (g.s.) candidate with structure $[p_{1/2}]^2$ for the new g.s. energy of $^9$He is predicted to be at about $2.0-2.3$ MeV. The peak in the cross section associated with this state may be shifted to a lower energy (e.g. $\sim 1.2$ MeV) when $^{10}$He is populated in reactions with $^{11}$Li due to a peculiar reaction mechanism. Formation of the low-energy ($E< 250$ keV) ``alternative'' ground state with structure $[s_{1/2}]^2$ is highly probable in $^{10}$He in the case of considerable attraction (e.g. $a<-5$ fm) in the s-wave $^9$He channel, whose properties are still quite uncertain. This result either questions the existing experimental low-energy spectrum of $^{10}$He or places a limit on the scattering length in the $^9$He channel, which contradicts existing data. ",Problems with interpretation of $^{10}$He ground state We compare the existing observational data on type Ia Supernovae with the evolution of the universe predicted by a one-parameter family of tachyon models which we introduced recently in \cite{we-tach}.
Among the trajectories of the model which are compatible with the data, there is a consistent subset for which the universe ends up in a new type of soft cosmological singularity dubbed the Big Brake. This opens up yet another scenario for the future history of the universe besides the one predicted by the standard $\Lambda$CDM model. ,"Tachyon cosmology, supernovae data and the Big Brake singularity" " Let $R$ be a ring and $b, c\in R$. In this paper, we give some characterizations of the $(b,c)$-inverse, in terms of the direct sum decomposition, the annihilator and the invertible elements. Moreover, elements with equal $(b,c)$-idempotents related to their $(b, c)$-inverses are characterized, and the reverse order rule for the $(b,c)$-inverse is considered. ","Characterizations of the $(b, c)$-inverse in a ring" " We consider a class of sparse learning problems in a high-dimensional feature space regularized by a structured sparsity-inducing norm which incorporates prior knowledge of the group structure of the features. Such problems often pose a considerable challenge to optimization algorithms due to the non-smoothness and non-separability of the regularization term. In this paper, we focus on two commonly adopted sparsity-inducing regularization terms, the overlapping Group Lasso penalty $l_1/l_2$-norm and the $l_1/l_\infty$-norm. We propose a unified framework based on the augmented Lagrangian method, under which problems with both types of regularization and their variants can be efficiently solved. As the core building-block of this framework, we develop new algorithms using an alternating partial-linearization/splitting technique, and we prove that the accelerated versions of these algorithms require $O(\frac{1}{\sqrt{\epsilon}})$ iterations to obtain an $\epsilon$-optimal solution.
To demonstrate the efficiency and relevance of our algorithms, we test them on a collection of data sets and apply them to two real-world problems to compare the relative merits of the two norms. ",Structured Sparsity via Alternating Direction Methods " There is a group-theoretical connection between fermion mixing matrices and minimal horizontal symmetry groups. Applying this connection to the tri-bimaximal neutrino mixing matrix, we show that the minimal horizontal symmetry group for leptons is uniquely $S_4$, the permutation group of four objects. ",The Unique Horizontal Symmetry of Leptons " In the last few years, a new type of Nb$_3$Sn superconducting composite, containing a high density of artificial pinning centers (APC) generated via an internal oxidation approach, has demonstrated a significantly superior performance relative to present, state-of-the-art commercial Nb$_3$Sn conductors. This was achieved via the internal oxidation of Nb-4at.%Ta-1at.%Zr alloy. On the other hand, our recent studies have shown that internal oxidation of Nb-Ta-Hf alloys can also lead to dramatic improvements in Nb$_3$Sn performance. In this work we follow up this latter approach, fabricating a 61-stack APC wire based on the internal oxidation of Nb-4at.%Ta-1at.%Hf alloy, and compare its critical current density (Jc) and irreversibility field (Birr) with APC wires made using Nb-4at.%Ta-1at.%Zr. A second goal of this work was to improve the filamentary design of APC wires in order to improve their wire quality and electromagnetic stability. Our new modifications have led to significantly improved RRR and stability in the conductors, while still keeping non-Cu Jc at or above the FCC Jc specification. Further improvement via optimization of the wire recipe and design is ongoing. Finally, additional work needed to make APC conductors ready for applications in magnets is discussed. 
",APC Nb$_3$Sn superconductors based on internal oxidation of Nb-Ta-Hf alloys We present an efficient quantum algorithm for estimating Gauss sums over finite fields and finite rings. This is a natural problem as the description of a Gauss sum can be done without reference to a black box function. With a reduction from the discrete logarithm problem to Gauss sum estimation we also give evidence that this problem is hard for classical algorithms. The workings of the quantum algorithm rely on the interaction between the additive characters of the Fourier transform and the multiplicative characters of the Gauss sum. ,Efficient Quantum Algorithms for Estimating Gauss Sums " Argumentation Mining addresses the challenging tasks of identifying boundaries of argumentative text fragments and extracting their relationships. Fully automated solutions do not reach satisfactory accuracy due to their insufficient incorporation of semantics and domain knowledge. Therefore, experts currently rely on time-consuming manual annotations. In this paper, we present a visual analytics system that augments the manual annotation process by automatically suggesting which text fragments to annotate next. The accuracy of those suggestions is improved over time by incorporating linguistic knowledge and language modeling to learn a measure of argument similarity from user interactions. Based on a long-term collaboration with domain experts, we identify and model five high-level analysis tasks. We enable close reading and note-taking, annotation of arguments, argument reconstruction, extraction of argument relations, and exploration of argument graphs. To avoid context switches, we transition between all views through seamless morphing, visually anchoring all text- and graph-based layers. We evaluate our system with a two-stage expert user study based on a corpus of presidential debates. 
The results show that experts prefer our system over existing solutions due to the speedup provided by the automatic suggestions and the tight integration between text and graph views. ",VIANA: Visual Interactive Annotation of Argumentation " The excitation energies of parahydrogen clusters have been systematically calculated by the diffusion Monte Carlo technique in steps of one molecule from 3 to 40 molecules. These clusters possess very rich spectra, with angular momentum excitations reaching up to L=13 for the heavier ones. No regular pattern can be discerned in terms of the angular momenta and the size of the cluster. Clusters with N=13 and 36 are characterized by a peak in the chemical potential and a large energy gap of the first excited level, which indicate the magic character of these clusters. From the calculated excitation energies the partition function has been obtained, thus allowing for an estimate of thermal effects. An enhanced production is predicted for cluster sizes N=13, 31 and 36, in agreement with experiment. ",Excitation levels and magic numbers of small para-Hydrogen clusters (N$ \le 40$) We combine Wooley's efficient congruencing method with earlier work of Vinogradov and Hua to get effective bounds on Vinogradov's mean value theorem. ,Effective Vinogradov's mean value theorem via efficient boxing " Schur-Weyl duality is a ubiquitous tool in quantum information. At its heart is the statement that the space of operators that commute with the tensor powers of all unitaries is spanned by the permutations of the tensor factors. In this work, we describe a similar duality theory for tensor powers of Clifford unitaries. The Clifford group is a central object in many subfields of quantum information, most prominently in the theory of fault-tolerance. The duality theory has a simple and clean description in terms of finite geometries.
We demonstrate its effectiveness in several applications: (1) We resolve an open problem in quantum property testing by showing that ""stabilizerness"" is efficiently testable: There is a protocol that, given access to six copies of an unknown state, can determine whether it is a stabilizer state, or whether it is far away from the set of stabilizer states. We give a related membership test for the Clifford group. (2) We find that tensor powers of stabilizer states have an increased symmetry group. We provide corresponding de Finetti theorems, showing that the reductions of arbitrary states with this symmetry are well-approximated by mixtures of stabilizer tensor powers (in some cases, exponentially well). (3) We show that the distance of a pure state to the set of stabilizers can be lower-bounded in terms of the sum-negativity of its Wigner function. This gives a new quantitative meaning to the sum-negativity (and the related mana) -- a measure relevant to fault-tolerant quantum computation. The result constitutes a robust generalization of the discrete Hudson theorem. (4) We show that complex projective designs of arbitrary order can be obtained from a finite number (independent of the number of qudits) of Clifford orbits. To prove this result, we give explicit formulas for arbitrary moments of random stabilizer states. ","Schur-Weyl Duality for the Clifford Group with Applications: Property Testing, a Robust Hudson Theorem, and de Finetti Representations" " We consider the following quasi-linear parabolic system of backward partial differential equations: $(\partial_t+L)u+f(\cdot,\cdot,u, \nabla u\sigma)=0$ on $[0,T]\times \mathbb{R}^d\qquad u_T=\phi$, where $L$ is a possibly degenerate second order differential operator with merely measurable coefficients. 
We solve this system in the framework of generalized Dirichlet forms and employ the stochastic calculus associated to the Markov process with generator $L$ to obtain a probabilistic representation of the solution $u$ by solving the corresponding backward stochastic differential equation. The solution satisfies the corresponding mild equation which is equivalent to being a generalized solution of the PDE. A further main result is the generalization of the martingale representation theorem using the stochastic calculus associated to the generalized Dirichlet form given by $L$. The nonlinear term $f$ satisfies a monotonicity condition with respect to $u$ and a Lipschitz condition with respect to $\nabla u$. ",BSDE and generalized Dirichlet forms: the finite dimensional case " Ulam has defined a history-dependent random sequence of integers by the recursion $X_{n+1} = X_{U(n)}+X_{V(n)},\ n \geqslant r$, where $U(n)$ and $V(n)$ are independently and uniformly distributed on $\{1,\dots,n\}$, and the initial sequence, $X_1=x_1,\dots,X_r=x_r$, is fixed. We consider the asymptotic properties of this sequence as $n \to \infty$, showing, for example, that $n^{-2} \sum_{k=1}^n X_k$ converges to a non-degenerate random variable. We also consider the moments and auto-covariance of the process, showing, for example, that when the initial condition is $x_1 =1$ with $r =1$, then $\lim_{n\to \infty} n^{-2} E X^2_n = (2 \pi)^{-1} \sinh(\pi)$; and that for large $m < n$, we have $(m n)^{-1} E X_m X_n \doteq (3 \pi)^{-1} \sinh(\pi).$ We further consider new random adding processes where changes occur independently at discrete times with probability $p$, or where changes occur continuously at jump times of an independent Poisson process. The processes are shown to have properties similar to those of the discrete time process with $p=1$, and to be readily generalised to a wider range of related sequences. 
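Ulam's recursion above is straightforward to simulate. The sketch below is illustrative only (seed, length, and the simplest initial condition $x_1=1$, $r=1$ are assumed choices); it lets one watch the normalized sum $n^{-2} \sum_{k=1}^n X_k$ settle toward a random limit.

```python
import random

def ulam_sequence(n_terms, init=(1,), seed=0):
    """Simulate Ulam's history-dependent random adding process:
    X_{n+1} = X_{U(n)} + X_{V(n)}, with U(n), V(n) independent and
    uniform on the indices of the terms generated so far."""
    rng = random.Random(seed)
    x = list(init)
    while len(x) < n_terms:
        n = len(x)
        u = rng.randrange(n)  # index drawn for U(n)
        v = rng.randrange(n)  # independent index drawn for V(n)
        x.append(x[u] + x[v])
    return x

x = ulam_sequence(2000, init=(1,), seed=42)
# n^{-2} * sum_{k<=n} X_k is expected to converge to a non-degenerate
# random variable; for one fixed seed this is just a single sample of it.
scaled = sum(x) / len(x) ** 2
```

Rerunning with different seeds gives different limiting values, consistent with the limit being random rather than deterministic.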
",Ulam's History-dependent Random Adding Process " The selection of optimal targets in the search for life represents a highly important strategic issue. In this Letter, we evaluate the benefits of searching for life around a potentially habitable planet orbiting a star of arbitrary mass relative to a similar planet around a Sun-like star. If recent physical arguments implying that the habitability of planets orbiting low-mass stars is selectively suppressed are correct, we find that planets around solar-type stars may represent the optimal targets. ",Optimal Target Stars in the Search for Life " The problem of endogeneity in statistics and econometrics is often handled by introducing instrumental variables (IV) which fulfill the mean independence assumption, i.e. the unobservable is mean independent of the instruments. When full independence of IV's and the unobservable is assumed, nonparametric IV regression models and nonparametric demand models lead to nonlinear integral equations with unknown integral kernels. We prove convergence rates for the mean integrated square error of the iteratively regularized Newton method applied to these problems. Compared to related results we derive stronger convergence results that rely on weaker nonlinearity restrictions. We demonstrate in numerical simulations for a nonparametric IV regression that the method produces better results than the standard model. ",Adaptive estimation for some nonparametric instrumental variable models " We examine the high-frequency optical mode of {\alpha}-Fe2O3 and report that Dzyaloshinskii-Moriya (DM) interaction generates a new type of torque on the magnetic resonance. Using a continuous-wave terahertz interferometer, we measure the optical mode spectra, where the asymmetric absorption with a large amplitude and broad linewidth is observed near the magnetic transition point, Morin temperature (TM ~ 254.3 K). 
Based on the spin wave model, the spectral anomaly is attributed to the DM interaction-induced torque, enabling us to extract a DM interaction field strength of 4 T. Our work opens a new avenue to characterize the spin resonance behaviors at an antiferromagnetic singular point for next-generation and high-frequency spin-based information technologies. ",Dzyaloshinskii-Moriya torque-driven resonance in antiferromagnetic {\alpha}-Fe2O3 " A code $\mathcal{C} \subseteq \{0, 1, 2\}^n$ is said to be trifferent with length $n$ when for any three distinct elements of $\mathcal{C}$ there exists a coordinate in which they all differ. Defining $\mathcal{T}(n)$ as the maximum cardinality of trifferent codes with length $n$, the value of $\mathcal{T}(n)$ is unknown for $n \ge 5$. In this note, we use an optimized search algorithm to show that $\mathcal{T}(5) = 10$ and $\mathcal{T}(6) = 13$. ",The maximum cardinality of trifferent codes with lengths 5 and 6 " This paper documents the release of the ELKI data mining framework, version 0.7.5. ELKI is open-source (AGPLv3) data mining software written in Java. The focus of ELKI is research in algorithms, with an emphasis on unsupervised methods in cluster analysis and outlier detection. In order to achieve high performance and scalability, ELKI offers data index structures such as the R*-tree that can provide major performance gains. ELKI is designed to be easy to extend for researchers and students in this domain, and welcomes contributions of additional methods. ELKI aims at providing a large collection of highly parameterizable algorithms, in order to allow easy and fair evaluation and benchmarking of algorithms. We will first outline the motivation for this release, the plans for the future, and then give a brief overview of the new functionality in this version. We also include an appendix presenting an overview of the overall implemented functionality. 
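The trifference condition defined above is easy to check directly. The sketch below (illustrative only; the function names are my own) verifies it by scanning all triples of codewords, and brute-forces $\mathcal{T}(n)$ for tiny $n$ by enumerating all subsets -- this naive enumeration is nothing like the optimized search used in the note, and is only feasible for $n \le 2$.

```python
from itertools import combinations, product

def is_trifferent(code):
    """A code over {0,1,2} is trifferent if every three distinct codewords
    have some coordinate where all three symbols differ."""
    for a, b, c in combinations(code, 3):
        if not any(len({a[i], b[i], c[i]}) == 3 for i in range(len(a))):
            return False
    return True

def max_trifferent(n):
    """Brute-force T(n) by checking every subset of {0,1,2}^n.
    Exponential in 3^n -- usable only for n <= 2."""
    words = [''.join(w) for w in product('012', repeat=n)]
    best = 0
    for mask in range(1 << len(words)):
        subset = [words[i] for i in range(len(words)) if mask >> i & 1]
        if len(subset) > best and is_trifferent(subset):
            best = len(subset)
    return best
```

For example, {00, 01, 12, 22} is trifferent: every one of its four triples has a coordinate showing all of 0, 1, 2.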
","ELKI: A large open-source library for data analysis - ELKI Release 0.7.5 ""Heidelberg""" " We present a framework for similarity search based on Locality-Sensitive Filtering (LSF), generalizing the Indyk-Motwani (STOC 1998) Locality-Sensitive Hashing (LSH) framework to support space-time tradeoffs. Given a family of filters, defined as a distribution over pairs of subsets of space with certain locality-sensitivity properties, we can solve the approximate near neighbor problem in $d$-dimensional space for an $n$-point data set with query time $dn^{\rho_q+o(1)}$, update time $dn^{\rho_u+o(1)}$, and space usage $dn + n^{1 + \rho_u + o(1)}$. The space-time tradeoff is tied to the tradeoff between query time and update time, controlled by the exponents $\rho_q, \rho_u$ that are determined by the filter family. Locality-sensitive filtering was introduced by Becker et al. (SODA 2016) together with a framework yielding a single, balanced, tradeoff between query time and space, further relying on the assumption of an efficient oracle for the filter evaluation algorithm. We extend the LSF framework to support space-time tradeoffs and through a combination of existing techniques we remove the oracle assumption. Building on a filter family for the unit sphere by Laarhoven (arXiv 2015) we use a kernel embedding technique by Rahimi & Recht (NIPS 2007) to show a solution to the $(r,cr)$-near neighbor problem in $\ell_s^d$-space for $0 < s \leq 2$ with query and update exponents $\rho_q=\frac{c^s(1+\lambda)^2}{(c^s+\lambda)^2}$ and $\rho_u=\frac{c^s(1-\lambda)^2}{(c^s+\lambda)^2}$ where $\lambda\in[-1,1]$ is a tradeoff parameter. This result improves upon the space-time tradeoff of Kapralov (PODS 2015) and is shown to be optimal in the case of a balanced tradeoff. Finally, we show a lower bound for the space-time tradeoff on the unit sphere that matches Laarhoven's and our own upper bound in the case of random data. 
",A Framework for Similarity Search with Space-Time Tradeoffs using Locality-Sensitive Filtering " This is an extended (factor 2.5) version of arXiv:math/0601371 and arXiv:0808.3486. We present new results in the theory of the classical $\theta$-functions of Jacobi: series expansions and defining ordinary differential equations (\odes). The proposed dynamical systems turn out to be Hamiltonian and define fundamental differential properties of theta-functions; they also yield an exponential quadratic extension of the canonical $\theta$-series. An integrability condition of these \odes\ explains appearance of the modular $\vartheta$-constants and differential properties thereof. General solutions to all the \odes\ are given. For completeness, we also solve the Weierstrassian elliptic modular inversion problem and consider its consequences. As a nontrivial application, we apply proposed techni\-que to the Hitchin case of the sixth Painlev\'e equation. ",Non-canonical extension of theta-functions and modular integrability of theta-constants " ""Epigenetic Tracking"" is a model of systems of biological cells, able to generate arbitrary 2 or 3-dimensional cellular shapes of any kind and complexity (in terms of number of cells, number of colours, etc.) starting from a single cell. If the complexity of such structures is interpreted as a metaphor for the complexity of biological structures, we can conclude that this model has the potential to generate the complexity typical of living beings. It can be shown how the model is able to reproduce a simplified version of key biological phenomena such as development, the presence of ""junk DNA"", the phenomenon of ageing and the process of carcinogenesis. The model links properties and behaviour of genes and cells to properties and behaviour of the organism, describing and interpreting the said phenomena with a unified framework: for this reason, we think it can be proposed as a model for all biology. 
The material contained in this work is not new: the model and its implications have all been described in previous works from a computer-science point of view. This work has two objectives: 1) To present the whole theory in an organic and structured way and 2) To introduce Epigenetic Tracking from a biological perspective. The work is divided into six parts: the first part is the introduction; the second part describes the cellular model; the third part is dedicated to the evo-devo method and transposable elements; the fourth part deals with junk DNA and ageing; the fifth part explores the topic of cancer; the sixth part draws the conclusions. ",Epigenetic Tracking: a model for all biology " The theory of generalized local scale invariance of strongly anisotropic scale invariant systems proposed some time ago by Henkel [Nucl. Phys. B \textbf{641}, 405 (2002)] is examined. The case of so-called type-I systems is considered. This was conjectured to be realized by systems at m-axial Lifshitz points; in support of this claim, scaling functions of two-point cumulants at the uniaxial Lifshitz point of the three-dimensional ANNNI model were predicted on the basis of this theory and found to be in excellent agreement with Monte Carlo results [Phys. Rev. Lett. \textbf{87}, 125702 (2001)]. The consequences of the conjectured invariance equations are investigated. It is shown that fewer solutions than anticipated by Henkel generally exist and contribute to the scaling functions if these equations are assumed to hold for all (positive and negative) values of the d-dimensional space (or space-time) coordinates $(t,\bm{r})\in \mathbb{R}\times\mathbb{R}^{d-1}$. Specifically, a single rather than two independent solutions exists in the case relevant for the mentioned fit of Monte Carlo data for the ANNNI model. 
Renormalization-group improved perturbation theory in $4+m/2-\epsilon$ dimensions is used to determine the scaling functions of the order-parameter and energy-density two-point cumulants in momentum space to two-loop order. The results are mathematically incompatible with Henkel's predictions except in free-field-theory cases. However, the scaling function of the energy-density cumulant we obtain for m=1 upon extrapolation of our two-loop RG results to d=3 differs numerically little from that of an effective free field theory. ",On conjectured local generalizations of anisotropic scale invariance and their implications " We compute the isomorphism class in $\mathfrak{KK}^{alg}$ of all noncommutative generalized Weyl algebras $A=\CC[h](\sigma, P)$, where $\sigma(h)=qh+h_0$ is an automorphism of $\CC[h]$, except when $q\neq 1$ is a root of unity. In particular, we compute the isomorphism class in $\mathfrak{KK}^{alg}$ of the quantum Weyl algebra, the primitive factors $B_{\lambda}$ of $U(\mathfrak{sl}_2)$ and the quantum weighted projective lines $\mathcal{O}(\mathbb{WP}_q(k, l))$. ",Bivariant K-theory of generalized Weyl algebras " A vibrating plate is set into a chaotic state of wave turbulence by either a periodic or a random local forcing. Correlations between the forcing and the local velocity response of the plate at the forcing point are studied. Statistical models with fairly good agreement with the experiments are proposed for each forcing. Both distributions of injected power have a logarithmic cusp for zero power, while the tails are Gaussian for the periodic driving and exponential for the random one. The distributions of injected work over long time intervals are investigated in the framework of the fluctuation theorem, also known as the Gallavotti-Cohen theorem. It appears that the conclusions of the theorem are verified only for the periodic, deterministic forcing. 
Using independent estimates of the phase space contraction, this result is discussed in the light of the available theoretical framework. ",Statistics of power injection in a plate set into chaotic vibration " Two-dimensional gravity in the light-cone gauge was shown by Polyakov to exhibit an underlying $SL(2,R)$ Kac-Moody symmetry, which may be used to express the energy-momentum tensor for the metric component $h_{++}$ in terms of the $SL(2,R)$ currents {\it via}\ the Sugawara construction. We review some recent results which show that in a similar manner, $W_\infty$ and $W_{1+\infty}$ gravities have underlying $SL(\infty,R)$ and $GL(\infty,R)$ Kac-Moody symmetries respectively. ","$SL(\infty,R)$ Symmetry of $W_\infty$ Gravity" " The phase structure of the bosonized multi-flavor Schwinger model is investigated by means of the differential renormalization group (RG) method. In the limit of small fermion mass the linearized RG flow is sufficient to determine the low-energy behavior of the N-flavor model, if it has been rotated by a suitable rotation in the internal space. For large fermion mass, the exact RG flow has been solved numerically. The low-energy behavior of the multi-flavor model is rather different depending on whether N=1 or N>1, where N is the number of flavors. For N>1 the reflection symmetry always suffers breakdown in both the weak and strong coupling regimes, contrary to the N=1 case, where it remains unbroken in the strong coupling phase. ",On the renormalization of the bosonized multi-flavor Schwinger model " This paper is dedicated to the construction of global weak solutions to the quantum Navier-Stokes equation, for any initial value with bounded energy and entropy. The construction is uniform with respect to the Planck constant. This allows us to perform the semi-classical limit to the associated compressible Navier-Stokes equation. 
One of the difficulties of the problem is to deal with the degenerate viscosity, together with the lack of integrability on the velocity. Our method is based on the construction of weak solutions that are renormalized in the velocity variable. The existence and stability of these solutions do not need the Mellet-Vasseur inequality. ",Global weak solutions to the compressible quantum Navier-Stokes equation and its semi-classical limit " In the literature on electron-phonon scattering, a phenomenological expression for the transition matrix element, derived in the textbooks of Ashcroft/Mermin and of Czycholl, is very often used. There are various steps in the derivation of this expression. The textbooks use partly different arguments in these steps, but the final result is the same. In the present paper, again slightly different arguments are used, which motivate the procedure in a more intuitive way. Furthermore, we generalize the phenomenological expression to describe the dependence of the matrix elements on the spin state of the initial and final electron state. ",Derivation of phenomenological expressions for transition matrix elements for electron-phonon scattering " We discuss the absorption cross section for the minimally-coupled massless scalar field into a stationary and circularly symmetric black hole with nonzero angular velocity in four or higher dimensions. In particular, we show that it equals the horizon area in the zero-frequency limit provided that the solution of the scalar field equation with an incident monochromatic plane wave converges pointwise to a smooth time-independent solution outside the black hole and on the future horizon, with the error term being at most linear in the frequency. We also show that this equality holds for static black holes which are not necessarily spherically symmetric. The zero-frequency scattering cross section is found to vanish in both cases. 
It is shown in an Addendum that the equality holds for any stationary black hole with vanishing expansion if the limit solution is known to be a constant. ",Low-frequency scalar absorption cross sections for stationary black holes " We study a one-parameter generalization of the symmetric simple exclusion process on a one-dimensional lattice. In addition to the usual dynamics (where particles can hop with equal rates to the left or to the right with an exclusion constraint), annihilation and creation of pairs can occur. The system is driven out of equilibrium by two reservoirs at the boundaries. In this setting the model is still integrable: it is related to the open XXZ spin chain through a gauge transformation. This allows us to compute the full spectrum of the Markov matrix using Bethe equations. Then, we derive the spectral gap in the thermodynamical limit. We also show that the stationary state can be expressed in a matrix product form, permitting computation of the multi-point correlation functions as well as the mean value of the lattice current and of the creation-annihilation current. Finally, the variance of the lattice current is exactly computed for a finite-size system. In the thermodynamical limit, it matches perfectly the value obtained from the associated macroscopic fluctuation theory. It provides a confirmation of the macroscopic fluctuation theory for dissipative systems from a microscopic point of view. ",Integrable dissipative exclusion process: Correlation functions and physical properties " Liquid leaf targets show promise as high-repetition-rate targets for laser-based ion acceleration using the Target Normal Sheath Acceleration (TNSA) mechanism and are currently under development. In this work, we discuss the effects of different ion species and investigate how they can be leveraged for use as a possible laser-driven neutron source. 
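The bulk dynamics of the generalized exclusion process described above (symmetric hopping with exclusion, plus creation and annihilation of particle pairs on adjacent sites) can be mimicked with a toy Monte Carlo sweep. The rates below are illustrative placeholders, not the integrable parameter choices of the model, and the boundary reservoirs are omitted.

```python
import random

def sweep(lattice, rng, p_hop=0.5, p_pair=0.1):
    """One Monte Carlo sweep of a toy generalized exclusion process:
    pick random bonds; swap occupations across a bond (symmetric hopping
    with exclusion), or flip two equal neighbors together (pair
    creation/annihilation). Illustrative rates only."""
    L = len(lattice)
    for _ in range(L):
        i = rng.randrange(L - 1)          # choose a bond (i, i+1)
        a, b = lattice[i], lattice[i + 1]
        r = rng.random()
        if a != b and r < p_hop:          # particle hops across the bond
            lattice[i], lattice[i + 1] = b, a
        elif a == b and r < p_pair:       # create (00->11) or annihilate (11->00)
            lattice[i] = lattice[i + 1] = 1 - a
    return lattice

rng = random.Random(1)
lat = [0] * 20                            # start from an empty lattice
for _ in range(200):
    sweep(lat, rng)
```

Unlike the plain exclusion process, the pair terms do not conserve particle number, which is why the model supports a creation-annihilation current in addition to the lattice current.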
To aid in this research, we develop a surrogate model for liquid leaf target laser-ion acceleration experiments, based on artificial neural networks. The model is trained using data from Particle-In-Cell (PIC) simulations. The fast inference speed of our deep learning model allows us to optimize experimental parameters for maximum ion energy and laser-energy conversion efficiency. An analysis of parameter influence on our model output, using Sobol and PAWN indices, provides deeper insights into the laser-plasma system. ",Modeling of a Liquid Leaf Target TNSA Experiment using Particle-In-Cell Simulations and Deep Learning " To reduce the computational cost of design evaluations using expensive finite element simulations, surrogate models have been widely applied in computer-aided engineering design. Machine learning algorithms (MLAs) have been implemented as surrogate models due to their capability of learning the complex interrelations between the design variables and the response from big datasets. Typically, an MLA regression model contains model parameters and hyperparameters. The model parameters are obtained by fitting the training data. Hyperparameters, which govern the model structures and the training processes, are assigned by users before training. There is a lack of systematic studies on the effect of hyperparameters on the accuracy and robustness of the surrogate model. In this work, we propose a hyperparameter optimization (HOpt) framework to deepen our understanding of this effect. Four frequently used MLAs, namely Gaussian Process Regression (GPR), Support Vector Machine (SVM), Random Forest Regression (RFR), and Artificial Neural Network (ANN), are tested on four benchmark examples. For each MLA model, the model accuracy and robustness before and after the HOpt are compared. The results show that HOpt generally improves the performance of the MLA models. 
However, HOpt yields only small improvements in the MLAs' accuracy and robustness for complex problems, which feature high-dimensional, mixed-variable design spaces. HOpt is recommended for design problems of intermediate complexity. We also investigated the additional computational costs incurred by HOpt. The training cost is closely related to the MLA architecture. After HOpt, the training cost of ANN and RFR is increased more than that of the GPR and SVM. To sum up, this study aids the selection of an HOpt method for different types of design problems based on their complexity. ",Understanding the effect of hyperparameter optimization on machine learning models for structure design problems " Quantum depletion from an atomic quasi-one-dimensional Bose-Einstein condensate with a dark soliton is studied in the framework of the Bogoliubov theory. Depletion is dominated by an anomalous mode localized in a notch of the condensate wave function. Depletion in the anomalous mode requires different treatment than depletion without anomalous modes. In particular, quantum depletion in the Bogoliubov vacuum of the anomalous mode is experimentally irrelevant. A dark soliton is initially prepared in a state with minimal depletion which is not a stationary state of the Bogoliubov theory. The notch fills up with incoherent atoms depleted from the condensate. For realistic parameters the filling time can be as short as 10 ms. ",Greying of the Dark Soliton: Depletion in the Anomalous Mode of the Bogoliubov Theory " We investigate group-theoretic ""signatures"" of odd cycles of a graph, and their connections to topological obstructions to 3-colourability. In the case of signatures derived from free groups, we prove that the existence of an odd cycle with trivial signature is equivalent to having the coindex of the hom-complex at least 2 (which implies that the chromatic number is at least 4). 
In the case of signatures derived from elementary abelian 2-groups, we prove that the existence of an odd cycle with trivial signature is a sufficient condition for having the index of the hom-complex at least 2 (which again implies that the chromatic number is at least 4). ",Topologically $4$-chromatic graphs and signatures of odd cycles " Weakly supervised semantic segmentation (WSSS) with image-level labels is a challenging task. Mainstream approaches follow a multi-stage framework and suffer from high training costs. In this paper, we explore the potential of Contrastive Language-Image Pre-training models (CLIP) to localize different categories with only image-level labels and without further training. To efficiently generate high-quality segmentation masks from CLIP, we propose a novel WSSS framework called CLIP-ES. Our framework improves all three stages of WSSS with special designs for CLIP: 1) We introduce the softmax function into GradCAM and exploit the zero-shot ability of CLIP to suppress the confusion caused by non-target classes and backgrounds. Meanwhile, to take full advantage of CLIP, we re-explore text inputs under the WSSS setting and customize two text-driven strategies: sharpness-based prompt selection and synonym fusion. 2) To simplify the stage of CAM refinement, we propose a real-time class-aware attention-based affinity (CAA) module based on the inherent multi-head self-attention (MHSA) in CLIP-ViTs. 3) When training the final segmentation model with the masks generated by CLIP, we introduce a confidence-guided loss (CGL) that focuses on confident regions. Our CLIP-ES achieves SOTA performance on Pascal VOC 2012 and MS COCO 2014 while only taking 10% time of previous methods for the pseudo mask generation. Code is available at https://github.com/linyq2117/CLIP-ES. 
",CLIP is Also an Efficient Segmenter: A Text-Driven Approach for Weakly Supervised Semantic Segmentation " Using our previously derived simple analytic expression for the bolometric light curves of supernovae, we demonstrate that the collision of the fast debris of ordinary supernova explosions with relatively slow-moving shells from pre-supernova eruptions can produce the observed bolometric light curves of superluminous supernovae (SLSNe) of all types. These include both, those which can be explained as powered by spinning-down millisecond magnetars and those which cannot. That and the observed close similarity between the bolometric light-curves of SLSNe and ordinary interacting SNe suggest that SLSNe are powered mainly by collisions with relatively slow moving circumstellar shells from pre-supernova eruptions rather than by the spin-down of millisecond magnetars born in core collapse supernova explosions. ",Are Superluminous Supernovae Powered By Collision Or By Millisecond Magnetars? " In this paper we calculate the effects produced by temperature in the renormalized vaccum expectation value of the square of the massless scalar field in the pointlike global monopole spacetime. In order to develop this calculation, we had to construct the Euclidean thermal Green function associated with this field in this background. We also calculate the high-temperature limit for the thermal average of the zero-zero component of the energy-momentum tensor. ",Vaccum Polarization for a Massless Scalar Field in the Global Monopole Spacetime at Finite Temperature " In this work, we define and solve the Fair Top-k Ranking problem, in which we want to determine a subset of k candidates from a large pool of n >> k candidates, maximizing utility (i.e., select the ""best"" candidates) subject to group fairness criteria. 
Our ranked group fairness definition extends group fairness using the standard notion of protected groups and is based on ensuring that the proportion of protected candidates in every prefix of the top-k ranking remains statistically above or indistinguishable from a given minimum. Utility is operationalized in two ways: (i) every candidate included in the top-$k$ should be more qualified than every candidate not included; and (ii) for every pair of candidates in the top-k, the more qualified candidate should be ranked above. An efficient algorithm is presented for producing the Fair Top-k Ranking, and tested experimentally on existing datasets as well as new datasets released with this paper, showing that our approach yields small distortions with respect to rankings that maximize utility without considering fairness criteria. To the best of our knowledge, this is the first algorithm grounded in statistical tests that can mitigate biases in the representation of an under-represented group along a ranked list. ",FA*IR: A Fair Top-k Ranking Algorithm " We have performed an unbiased deep near-infrared survey toward the Aquila molecular cloud with a sky coverage of ~1 deg^2. We identified 45 molecular hydrogen emission-line objects (MHOs), of which only 11 were previously known. Using the Spitzer archival data we also identified 802 young stellar objects (YSOs) in this region. Based on the morphology and the location of MHOs and YSO candidates, we associate 43 MHOs with 40 YSO candidates. The distribution of jet length shows an exponential decrease in the number of outflows with increasing length and the molecular hydrogen outflows seem to be oriented randomly. Moreover, there is no obvious correlation between jet lengths, jet opening angles, or jet H2 1-0 S(1) luminosities and spectral indices of the possible driving sources in this region. 
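The statistical test behind FA*IR's ranked group fairness definition can be sketched with a binomial CDF: for each prefix length k, find the smallest protected-candidate count whose CDF value exceeds the significance level alpha. This is only the per-prefix test, written with my own function name; the actual algorithm additionally adjusts alpha to account for the multiple tests across prefixes.

```python
from math import comb

def min_protected(k, p, alpha):
    """Smallest number of protected candidates a prefix of length k must
    contain so that it is not rejected as unfair: the count tau_k must
    satisfy F(tau_k; k, p) > alpha, where F is the CDF of Binomial(k, p)
    and p is the target minimum proportion of protected candidates.
    (Per-prefix test only; multiple-testing correction omitted.)"""
    cdf = 0.0
    for z in range(k + 1):
        cdf += comb(k, z) * p ** z * (1 - p) ** (k - z)
        if cdf > alpha:
            return z
    return k

# Required protected counts for prefixes k = 1..10 with p = 0.5, alpha = 0.1:
table = [min_protected(k, p=0.5, alpha=0.1) for k in range(1, 11)]
```

Note how the required count grows slowly with k: short prefixes can contain no protected candidates without triggering rejection, while longer prefixes must contain progressively more.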
We also suggest that molecular hydrogen outflows in the Aquila molecular cloud are rather weak sources of turbulence, unlikely to generate the observed velocity dispersion in the surveyed region. ",A deep near-infrared survey toward the Aquila molecular cloud - I. Molecular hydrogen outflows " We consider a series of massive scaling limits m_1 -> infty, q -> 0, lim m_1 q = Lambda_{3} followed by m_4 -> infty, Lambda_{3} -> 0, lim m_4 Lambda_{3} = (Lambda_2)^2 of the beta-deformed matrix model of Selberg type (N_c=2, N_f=4) which reduce the number of flavours to N_f=3 and subsequently to N_f=2. This keeps the other parameters of the model finite, which include n=N_L and N=n+N_R, namely, the size of the matrix and the ""filling fraction"". Exploiting the method developed before, we generate the instanton expansion with finite g_s, epsilon_{1,2} to check the Nekrasov coefficients (N_f =3,2 cases) to the lowest order. The limiting expressions provide an integral representation of irregular conformal blocks which contains a 2d operator $\lim \frac{1}{C(q)}\, {:}e^{(1/2)\alpha_1\phi(0)}{:}\left(\int_0^q dz\, {:}e^{b_E\phi(z)}{:}\right)^n {:}e^{(1/2)\alpha_2\phi(q)}{:}$ and is subsequently analytically continued. ",Massive Scaling Limit of beta-Deformed Matrix Model of Selberg Type " Unsupervised deformable image registration is one of the challenging tasks in medical imaging. Obtaining a high-quality deformation field while preserving deformation topology remains demanding amid a series of deep-learning-based solutions. Meanwhile, the diffusion model's latent feature space shows potential in modeling the deformation semantics. To fully exploit the diffusion model's ability to guide the registration task, we present two modules: Feature-wise Diffusion-Guided Module (FDG) and Score-wise Diffusion-Guided Module (SDG). Specifically, FDG uses the diffusion model's multi-scale semantic features to guide the generation of the deformation field. 
SDG uses the diffusion score to guide the optimization process for preserving deformation topology with barely any additional computation. Experimental results on the 3D medical cardiac image registration task validate our model's ability to provide refined deformation fields with preserved topology effectively. Code is available at: https://github.com/xmed-lab/FSDiffReg.git. ",FSDiffReg: Feature-wise and Score-wise Diffusion-guided Unsupervised Deformable Image Registration for Cardiac Images " Newsletters and social networks can reflect the opinion about the market and specific stocks from the perspective of analysts and the general public on products and/or services provided by a company. Therefore, sentiment analysis of these texts can provide useful information to help investors trade in the market. In this paper, a hierarchical stack of Transformers model is proposed to identify the sentiment associated with companies and stocks, by predicting a real-valued score in the range between -1 and +1. Specifically, we fine-tuned a RoBERTa model to process headlines and microblogs and combined it with additional Transformer layers that integrate contextual sentence analysis with sentiment dictionaries to improve the sentiment prediction. We evaluated it on financial data released by SemEval-2017 task 5 and our approach outperformed the best systems of SemEval-2017 task 5 and strong baselines. Indeed, the combination of contextual sentence analysis with the financial and general sentiment dictionaries provided useful information to our model and allowed it to generate more reliable sentiment scores. ",Contextual Sentence Analysis for the Sentiment Prediction on Financial Data " The reliability of current virtual reality (VR) delivery is low due to the limited resources on VR head-mounted displays (HMDs) and the transmission rate bottleneck of sub-6 GHz networks. 
In this paper, we propose a dual-connectivity sub-6 GHz and mmWave heterogeneous network architecture empowered by mobile edge capability. The core idea of the proposed architecture is to utilize the complementary advantages of sub-6 GHz links and mmWave links to conduct a collaborative edge resource design, which aims to improve the reliability of VR delivery. From the perspective of stochastic geometry, we analyze the reliability of VR delivery and theoretically demonstrate that sub-6 GHz links can be used to enhance the reliability of VR delivery despite the large mmWave bandwidth. Based on our analytical work, we formulate a joint caching and computing optimization problem with the goal of maximizing the reliability of VR delivery. By analyzing the coupled caching and computing strategies at HMDs, sub-6 GHz and mmWave base stations (BSs), we further transform the problem into a multiple-choice multi-dimension knapsack problem. A best-first branch and bound algorithm and a difference of convex programming algorithm are proposed to obtain the optimal and sub-optimal solutions, respectively. Numerical results demonstrate the performance improvement using the proposed algorithms, and reveal that caching more monocular videos at sub-6 GHz BSs and more stereoscopic videos at mmWave BSs can improve the VR delivery reliability efficiently. ",Reliability Enhancement for VR Delivery in Mobile-Edge Empowered Dual-Connectivity Sub-6 GHz and mmWave HetNets " The modification of the nature and size of bandgaps for III-V semiconductors is of strong interest for optoelectronic applications. Strain can be used to systematically tune the bandgap over a wide range of values and induce indirect-to-direct (IDT), direct-to-indirect (DIT), and other changes in bandgap nature. Here, we establish a predictive ab initio approach, based on density functional theory, to analyze the effect of uniaxial, biaxial, and isotropic strain on the bandgap.
We show that systematic variation is possible. For GaAs, DITs were observed at 1.52% isotropic compressive strain and 3.52% tensile strain, while for GaP an IDT was found at 2.63% isotropic tensile strain. We additionally propose a strategy for realizing the direct-indirect transition by combining biaxial strain with uniaxial strain. Further transition points were identified for strained GaSb, InP, InAs, and InSb and compared to the elemental semiconductor silicon. Our analyses thus provide a systematic and predictive approach to strain-induced bandgap tuning in binary III-V semiconductors. ",Systematic strain-induced bandgap tuning in binary III-V semiconductors from density functional theory " Despite its dominance, hydrogen has been largely ignored in studies of the abundance patterns of the chemical elements in gradual solar energetic-particle (SEP) events; those neglected abundances show a surprising new pattern of behavior. Abundance enhancements of elements with 2 <= Z <= 56, relative to coronal abundances, show a power-law dependence, versus their average mass-to-charge ratio A/Q, that varies from event to event and with time during events. The ion charge states Q depend upon the source plasma temperature T. For most gradual SEP events, shock waves have accelerated ambient coronal material with T < 2 MK with decreasing power-laws in A/Q. In this case, the proton abundances agree rather well with the power-law fits extrapolated from elements with Z >= 6 at A/Q > 2 down to hydrogen at A/Q = 1. Thus the abundances of the elements with Z >= 6 fairly accurately predict the observed abundance of H, at a similar velocity, in most SEP events. However, for those gradual SEP events where ion enhancements follow positive powers of A/Q, especially those with T > 2 MK where shock waves have reaccelerated residual suprathermal ions from previous impulsive SEP events, proton abundances commonly exceed the extrapolated expectation, usually by a factor of order ten.
This is a new and unexpected pattern of behavior that is unique to the abundances of protons and may be related to the need for more streaming protons to produce sufficient waves for the scattering and acceleration of more heavy ions at the shock. ",Hydrogen and the Abundances of Elements in Gradual Solar Energetic-Particle Events " Online socio-technical systems can be studied as a proxy of the real world to investigate human behavior and social interactions at scale. Here we focus on Instagram, a media-sharing online platform whose popularity has risen to the point of gathering hundreds of millions of users. Instagram exhibits a mixture of features including social structure, social tagging and media sharing. The network of social interactions among users models various dynamics including follower/followee relations and users' communication by means of posts/comments. Users can upload and tag media such as photos and pictures, and they can ""like"" and comment on each piece of information on the platform. In this work we investigate three major aspects on our Instagram dataset: (i) the structural characteristics of its network of heterogeneous interactions, to unveil the emergence of self organization and topically-induced community structure; (ii) the dynamics of content production and consumption, to understand how global trends and popular users emerge; (iii) the behavior of users labeling media with tags, to determine how they devote their attention and to explore the variety of their topical interests. Our analysis provides clues to understand human behavior dynamics on socio-technical systems, specifically users and content popularity, the mechanisms of users' interactions in online environments and how collective trends emerge from individuals' topical interests. ",Online Popularity and Topical Interests through the Lens of Instagram " This work proposes a feature-based technique to recognize vehicle types at day and night times.
A support vector machine (SVM) classifier is applied on image histogram and CENsus Transformed histogRam Oriented Gradient (CENTROG) features in order to classify vehicle types during the day and night. Thermal images were used for the night time experiments. Although thermal images suffer from low image resolution, lack of colour and poor texture information, they offer the advantage of being unaffected by high intensity light sources such as vehicle headlights which tend to render normal images unsuitable for night time image capturing and subsequent analysis. Since contour is useful in shape based categorisation and is the most distinctive feature within thermal images, CENTROG is used to capture this feature information and is used within the experiments. The experimental results so obtained were compared with those obtained by employing the CENsus TRansformed hISTogram (CENTRIST). Experimental results revealed that CENTROG offers better recognition accuracies for both day-time and night-time vehicle type recognition. ",Centrog Feature technique for vehicle type recognition at day and night times " Knowledge amalgamation (KA) aims to learn a compact student model to handle the joint objective from multiple teacher models that are specialized for their own tasks respectively. Current methods focus on coarsely aligning teachers and students in the common representation space, making it difficult for the student to learn the proper decision boundaries from a set of heterogeneous teachers. Besides, the KL divergence in previous works only minimizes the probability distribution difference between teachers and the student, ignoring the intrinsic characteristics of teachers.
Therefore, we propose a novel Contrastive Knowledge Amalgamation (CKA) framework, which introduces contrastive losses and an alignment loss to achieve intra-class cohesion and inter-class separation. Intra- and inter-model contrastive losses are designed to widen the distance between representations of different classes. The alignment loss is introduced to minimize the sample-level distribution differences of teacher-student models in the common representation space. Furthermore, the student learns heterogeneous unsupervised classification tasks through soft targets efficiently and flexibly in the task-level amalgamation. Extensive experiments on benchmarks demonstrate the generalization capability of CKA in the amalgamation of a specific task as well as multiple tasks. Comprehensive ablation studies provide further insight into our CKA. ",Contrastive Knowledge Amalgamation for Unsupervised Image Classification " We describe the practical implementation of the sideband search, a search for periodic gravitational waves from neutron stars in binary systems. The orbital motion of the source in its binary system causes frequency-modulation in the combination of matched filters known as the $\mathcal{F}$-statistic. The sideband search is based on the incoherent summation of these frequency-modulated $\mathcal{F}$-statistic sidebands. It provides a new detection statistic for sources in binary systems, called the $\mathcal{C}$-statistic. The search is well suited to low-mass X-ray binaries, the brightest of which, called Sco X-1, is an ideal target candidate. For sources like Sco X-1, with well constrained orbital parameters, a slight variation on the search is possible. The extra orbital information can be used to approximately demodulate the data from the binary orbital motion in the coherent stage, before incoherently summing the now reduced number of sidebands.
We investigate this approach and show that it improves the sensitivity of the standard Sco X-1 directed sideband search. Prior information on the neutron star inclination and gravitational wave polarization can also be used to improve upper limit sensitivity. We estimate the sensitivity of a Sco X-1 directed sideband search on 10 days of LIGO data and show that it can beat previous upper limits in current LIGO data, with a possibility of constraining theoretical upper limits using future advanced instruments. ",Implementation of the frequency-modulated sideband search method for gravitational waves from low mass X-ray binaries " Following the detection of GW170817 and the accompanying kilonova AT2017gfo, it has become crucial to model and understand the various channels through which mass is ejected in neutron-star binary mergers. We discuss the impact that high stellar spins prior to merger have on the ejection of mass focussing, in particular, on the dynamically ejected mass by performing general-relativistic magnetohydrodynamic simulations employing finite-temperature equations of state and neutrino-cooling effects. Using eight different models with dimensionless spins ranging from $\chi\simeq-0.14$ to $\chi\simeq0.29$ we discuss how the presence of different spins affects the angular distribution and composition of the ejected matter. Most importantly, we find that the dynamical component of the ejected mass can be strongly suppressed in the case of high spins aligned with the orbital angular momentum. In this case, in fact, the merger remnant has an excess angular momentum yielding a more extended and ""colder"" object, with reduced ability to shed mass dynamically. We discuss how this result impacts the analysis of the recent merger event GW170817 and its kilonova afterglow. 
",Impact of high spins on the ejection of mass in GW170817 " A simple argument is presented in favour of the equidistant spectrum in the semiclassical limit for the horizon area of a black hole. The following quantization rules for the mass $M_N$ and horizon area $A_{Nj}$ are proposed: $M_N = m_p [N(N+1)]^{1/4}$; $A_{Nj} = 8\pi l_p^2 [\sqrt{N(N+1)} + \sqrt{N(N+1) - j(j+1)}]$. Here both $N$ and $j$ are nonnegative integers or half-integers. ",On Quantization of Black Holes " Context. Upcoming weak lensing surveys such as Euclid will provide an unprecedented opportunity to quantify the geometry and topology of the cosmic web, in particular in the vicinity of lensing clusters. Aims. Understanding the connectivity of the cosmic web with unbiased mass tracers, such as weak lensing, is of prime importance to probe the underlying cosmology, seek dynamical signatures of dark matter, and quantify environmental effects on galaxy formation. Methods. Mock catalogues of galaxy clusters are extracted from the N-body PLUS simulation. For each cluster, the aperture multipolar moments of the convergence are calculated in two annuli (inside and outside the virial radius). By stacking their modulus, a statistical estimator is built to characterise the angular mass distribution around clusters. The moments are compared to predictions from perturbation theory and spherical collapse. Results. The main weakly chromatic excess of multipolar power on large scales is understood as arising from the contraction of the primordial cosmic web driven by the growing potential well of the cluster. Besides this boost, the quadrupole prevails in the cluster (ellipsoidal) core, while at the outskirts, harmonic distortions are spread on small angular modes, and trace the non-linear sharpening of the filamentary structures. Predictions for the signal amplitude as a function of the cluster-centric distance, mass, and redshift are presented.
The prospects of measuring this signal are estimated for current and future lensing data sets. Conclusions. The Euclid mission should provide all the necessary information for studying the cosmic evolution of the connectivity of the cosmic web around lensing clusters using multipolar moments and probing unique signatures of, for example, baryons and warm dark matter. ",Multipolar moments of weak lensing signal around clusters. Weighing filaments in harmonic space " The Darwin field model addresses an approximation to Maxwell's equations where radiation effects are neglected. It allows one to describe general quasistatic electromagnetic field phenomena including inductive, resistive and capacitive effects. A Darwin formulation based on the Darwin-Amp\`ere equation and the implicitly included Darwin-continuity equation yields a non-symmetric and ill-conditioned algebraic system of equations obtained by applying a geometric spatial discretization scheme and the implicit backward differentiation time integration method. A two-step solution scheme is presented where the underlying block-Gauss-Seidel method is shown to change the initially chosen gauge condition and the resulting scheme only requires solving a weakly coupled electro-quasistatic and a magneto-quasistatic discrete field formulation consecutively in each time step. Results of numerical test problems validate the chosen approach. ","A Darwin Time Domain Scheme for the Simulation of Transient Quasistatic Electromagnetic Fields Including Resistive, Capacitive and Inductive Effects" " We study a family of convex polytopes, called SIM-bodies, which were introduced by Giannakopoulos and Koutsoupias (2018) to analyze so-called Straight-Jacket Auctions. First, we show that the SIM-bodies belong to the class of generalized permutahedra. Second, we prove an optimality result for the Straight-Jacket Auctions among certain deterministic auctions.
Third, we employ computer algebra methods and mathematical software to explicitly determine optimal prices and revenues. ",Generalized permutahedra and optimal auctions " We show that almost all circulant graphs have automorphism groups as small as possible. Of the circulant graphs that do not have automorphism group as small as possible, we give some families of integers such that it is not true that almost all circulant graphs whose order lies in any one of these families, are normal. That almost all Cayley (di)graphs whose automorphism group is not as small as possible are normal was conjectured by the second author, so these results provide counterexamples to this conjecture. It is then shown that there is a large family of integers for which almost every circulant digraph whose order lies in this family and that does not have automorphism group as small as possible, is normal. We additionally explore the asymptotic behavior of the automorphism groups of circulant (di)graphs that are not normal, and show that no general conclusion can be obtained. ",Asymptotic Automorphism Groups of Circulant Graphs and Digraphs " Deep learning has been extensively applied in many optical imaging applications in recent years. Despite the success, the limitations and drawbacks of deep learning in optical imaging have been seldom investigated. In this work, we show that conventional linear-regression-based methods can outperform the previously proposed deep learning approaches for two black-box optical imaging problems to some extent. Deep learning demonstrates its weakness especially when the number of training samples is small. The advantages and disadvantages of linear-regression-based methods and deep learning are analyzed and compared. Since many optical systems are essentially linear, a deep learning network containing many nonlinear functions may sometimes not be the most suitable option.
",Does deep learning always outperform simple linear regression in optical imaging? " We investigate a modification of the 2+1 dimensional abelian Chern-Simons theory, obtained by adding a Proca mass term to the gauge field. We are particularly interested in the infrared limit, which can be described by two {\it a priori} different ""topological"" quantum mechanical models. We apply methods of equivariant cohomology and the ensuing supersymmetry to analyze the partition functions of these quantum mechanical models. In particular, we find that a previously discussed phase-space reductive limiting procedure which relates these two models can be seen as a direct consequence of our supersymmetry. ",On the infrared limit of the Chern-Simons-Proca theory " In this work, a general class of wormhole geometries in conformal Weyl gravity is analyzed. A wide variety of exact solutions of asymptotically flat spacetimes is found, in which the stress energy tensor profile differs radically from its general relativistic counterpart. In particular, a class of geometries is constructed that satisfies the energy conditions in the throat neighborhood, which is in clear contrast to the general relativistic solutions. ",General class of wormhole geometries in conformal Weyl gravity " Motivated by recent experiments by Basov {\it et al}, we study the differential sum rule for the effective scattering rate $1/\tau (\omega)$. We show that in a dirty BCS superconductor, the area under $1/\tau (\omega)$ does not change between the normal and the superconducting states. For magnetically mediated pairing, a similar result holds between $T2$. ",Local regularity results for value functions of tug-of-war with noise and running payoff " In this investigation, we determined the Concentration (C) and Asymmetry (A) parameters in a sample of Tidal Dwarf Galaxies (TDG) or candidate galaxies.
Most of the galaxies in the sample were found to lie in a well-defined region of the C-A plane, which clearly separates them from other galaxies. In addition, the stellar mass ($M_{star}$) and the star formation rate ($SFR$) in the sample were determined using optical images and GALEX observations. The main results are: the $M_{star}$ and the $SFR$ in the TDG sample do not follow a linear correlation with $C$ and $A$, respectively, as observed in other galaxies; and the $M_{star}$ and the $SFR$ follow a linear correlation similar to that of galaxies at high redshift. We thus conclude that the C-A plane can be a useful method for the morphological identification of candidates for TDG or dwarf objects from very turbulent environments. ",Morphological study of a sample of dwarf tidal galaxies using the C-A plane " We axiomatically introduce risk-consistent conditional systemic risk measures defined on multidimensional risks. This class consists of those conditional systemic risk measures which can be decomposed into a state-wise conditional aggregation and a univariate conditional risk measure. Our studies extend known results for unconditional risk measures on finite state spaces. We argue in favor of a conditional framework on general probability spaces for assessing systemic risk. Mathematically, the problem reduces to selecting a realization of a random field with suitable properties. Moreover, our approach covers many prominent examples of systemic risk measures from the literature and used in practice. ",Risk-Consistent Conditional Systemic Risk Measures " Technological advances in instrumentation have led to an exponential increase in exoplanet detection and scrutiny of stellar features such as spots and faculae. While the spots and faculae enable us to understand the stellar dynamics, exoplanets provide us with a glimpse into stellar evolution.
While noise (e.g., telluric, instrumental, or photonic) is unavoidable, combining it with increased spectrographic resolution compounds the technological challenges. To account for these noise sources and resolution issues, we use a temporal multifractal framework to study data from the SOAP 2.0 tool, which simulates a stellar spectrum in the presence of a spot, a facula or a planet. Given these controlled simulations, we vary the resolution as well as the signal-to-noise (S/N) ratio to obtain a lower limit on the resolution and S/N required to robustly detect features. We show that a spot and a facula with a 1% coverage of the stellar disk can be robustly detected for a S/N (per pixel) of 35 and 60 respectively, for any spectral resolution above 20,000, while a planet with a radial velocity (RV) of 10 m/s can be detected for a S/N (per pixel) of 600. Rather than viewing noise as an impediment, our approach uses noise as a source of information. ",Minimal Data Fidelity for Detection of Stellar Features or Companions " We discuss the space-time structure for the process of decay of a bubble of a hypothetic phase -- disoriented chiral condensate (DCC). The evolution of the initial classical field configuration corresponding to the bubble of DCC is studied both numerically and analytically. The process of decay of this initial configuration depends crucially on the selfinteraction of the pionic fields. It is shown that in some cases this selfinteraction leads to the formation of a sort of breather solution, formed from pionic fields situated in the center of the initial bubble of DCC. This breather looks like a long-lived source of pionic fields. ",On decay of bubble of disoriented chiral condensate " In this paper, we study asymmetric Ramsey properties of the random graph $G_{n,p}$. Let $r \in \mathbb{N}$ and $H_1, \ldots, H_r$ be graphs.
We write $G_{n,p} \to (H_1, \ldots, H_r)$ to denote the property that whenever we colour the edges of $G_{n,p}$ with colours from the set $[r] := \{1, \ldots, r\}$ there exists $i \in [r]$ and a copy of $H_i$ in $G_{n,p}$ monochromatic in colour $i$. There has been much interest in determining the asymptotic threshold function for this property. R\""{o}dl and Ruci\'{n}ski determined the threshold function for the general symmetric case; that is, when $H_1 = \cdots = H_r$. A conjecture of Kohayakawa and Kreuter, if true, would fully resolve the asymmetric problem. Recently, the 1-statement of this conjecture was confirmed by Mousset, Nenadov and Samotij. Building on work of Marciniszyn, Skokan, Sp\""{o}hel and Steger, we reduce the 0-statement of Kohayakawa and Kreuter's conjecture to a certain deterministic subproblem. To demonstrate the potential of this approach, we show this subproblem can be resolved for almost all pairs of regular graphs. This therefore resolves the 0-statement for all such pairs of graphs. ",Towards the 0-statement of the Kohayakawa-Kreuter conjecture " Visual-based 3D semantic occupancy perception (also known as 3D semantic scene completion) is a new perception paradigm for robotic applications like autonomous driving. Compared with Bird's Eye View (BEV) perception, it extends the vertical dimension, significantly enhancing the ability of robots to understand their surroundings. However, due to this very reason, the computational demand for current 3D semantic occupancy perception methods generally surpasses that of BEV perception methods and 2D perception methods. We propose a novel 3D semantic occupancy perception method, OccupancyDETR, which consists of a DETR-like object detection module and a 3D occupancy decoder module. The integration of object detection simplifies our method structurally - instead of predicting the semantics of each voxels, it identifies objects in the scene and their respective 3D occupancy grids. 
This speeds up our method, reduces required resources, and leverages object detection algorithms, giving our approach notable performance on small objects. We demonstrate the effectiveness of our proposed method on the SemanticKITTI dataset, showcasing an mIoU of 23 and a processing speed of 6 frames per second, thereby presenting a promising solution for real-time 3D semantic scene completion. ",OccupancyDETR: Making Semantic Scene Completion as Straightforward as Object Detection " Large-scale annotation of image segmentation datasets is often prohibitively expensive, as it usually requires a huge number of worker hours to obtain high-quality results. Abundant and reliable data has been, however, crucial for the advances on image understanding tasks achieved by deep learning models. In this paper, we introduce FreeLabel, an intuitive open-source web interface that allows users to obtain high-quality segmentation masks with just a few freehand scribbles, in a matter of seconds. The efficacy of FreeLabel is quantitatively demonstrated by experimental results on the PASCAL dataset as well as on a dataset from the agricultural domain. Designed to benefit the computer vision community, FreeLabel can be used for both crowdsourced and private annotation and has a modular structure that can be easily adapted for any image dataset. ",FreeLabel: A Publicly Available Annotation Tool based on Freehand Traces " There has been tremendous interest in manipulating electron and hole-spin states in low-dimensional structures for electronic and spintronic applications. We study the edge magnetic coupling and anisotropy in zigzag stanene nanoribbons by first-principles calculations. Taking into account considerable spin-orbit coupling and ferromagnetism at each edge, zigzag stanene nanoribbon is insulating and its band gap depends on the inter-edge magnetic coupling and the magnetization direction.
Especially for nanoribbon edges with out-of-plane antiferromagnetic coupling, two non-degenerate valleys of edge states emerge and the spin degeneracy is tunable by a transverse electric field, giving full play to the spin and valley degrees of freedom. More importantly, both the magnetic order and anisotropy can be selectively controlled by electron and hole doping, demonstrating a readily accessible gate-induced modulation of magnetism. These intriguing features offer a practical avenue for designing energy-efficient devices based on multiple degrees of freedom of electrons and magneto-electric couplings. ",Electric control of the edge magnetization in zigzag stanene nanoribbon " The continuous growth of data production in almost all scientific areas raises new problems in data access and management, especially in a scenario where the end-users, as well as the resources that they can access, are worldwide distributed. This work is focused on data caching management in a Data Lake infrastructure in the context of the High Energy Physics field. We are proposing an autonomous method, based on Reinforcement Learning techniques, to improve the user experience and to contain the maintenance costs of the infrastructure. ",Smart caching in a Data Lake for High Energy Physics analysis " We study a dispersive counterpart of the classical gas dynamics problem of the interaction of a shock wave with a counter-propagating simple rarefaction wave, often referred to as shock wave refraction. The refraction of a one-dimensional dispersive shock wave (DSW) due to its head-on collision with the centred rarefaction wave (RW) is considered in the framework of the defocusing nonlinear Schr\""odinger (NLS) equation. For the integrable cubic nonlinearity case we present a full asymptotic description of the DSW refraction by constructing appropriate exact solutions of the Whitham modulation equations in Riemann invariants.
For the NLS equation with saturable nonlinearity, whose modulation system does not possess Riemann invariants, we take advantage of the recently developed method for the DSW description in non-integrable dispersive systems to obtain the main physical parameters of the DSW refraction. The key features of the DSW-RW interaction predicted by our modulation theory analysis are confirmed by direct numerical solutions of the full dispersive problem. ",Refraction of dispersive shock waves " In this paper, we consider three semi-discrete modified Korteweg-de Vries type equations, which are the nonlinear lumped self-dual network equation, the semi-discrete lattice potential modified Korteweg-de Vries equation and a semi-discrete modified Korteweg-de Vries equation. We derive several kinds of exact solutions, in particular rational solutions, in terms of the Casorati determinant for these three equations respectively. For some rational solutions, we present the related asymptotic analysis to better understand their dynamics. ",Rational solutions for three semi-discrete modified Korteweg-de Vries type equations " Let $f$ be the germ of a real analytic function at the origin in $\mathbb{R}^n $ for $n \geq 2$, and suppose the codimension of the zero set of $f$ at $\mathbf{0}$ is at least $2$. We show that $\log |f|$ is $W^{1,1}_{\operatorname{loc}}$ near $\mathbf{0}$. In particular, this implies the differential inequality $|\nabla f |\leq V |f|$ holds with $V \in L^1_{\operatorname{loc}}$. As an application, we derive an inequality relating the {\L}ojasiewicz exponent and singularity exponent for such functions. ",Sobolev Differentiability Properties of Logarithmic Modulus of Real Analytic Functions " Josephson junctions based on three-dimensional topological insulators offer intriguing possibilities to realize unconventional $p$-wave pairing and Majorana modes.
Here, we provide a detailed study of the effect of a uniform magnetization in the normal region: We show how the interplay between the spin-momentum locking of the topological insulator and an in-plane magnetization parallel to the direction of phase bias leads to an asymmetry of the Andreev spectrum with respect to transverse momenta. If sufficiently large, this asymmetry induces a transition from a regime of gapless, counterpropagating Majorana modes to a regime with unprotected modes that are unidirectional at small transverse momenta. Intriguingly, the magnetization-induced asymmetry of the Andreev spectrum also gives rise to a Josephson Hall effect, that is, the appearance of a transverse Josephson current. The amplitude and current phase relation of the Josephson Hall current are studied in detail. In particular, we show how magnetic control and gating of the normal region can enable sizable Josephson Hall currents compared to the longitudinal Josephson current. Finally, we also propose in-plane magnetic fields as an alternative to the magnetization in the normal region and discuss how the planar Josephson Hall effect could be observed in experiments. ",Planar Josephson Hall effect in topological Josephson junctions " Singh and Kumar (2011) suggested estimators for calculating population variance using auxiliary attributes. This paper proposes a family of estimators based on an adaptation of the estimators presented by Kadilar and Cingi (2004) and Singh et al. (2007), and introduces a new family of estimators using auxiliary attributes. The expressions for the mean square errors (MSEs) of the adapted and proposed families are derived. It is shown that the adapted and proposed estimators are more efficient than the Singh and Kumar (2011) estimators. The theoretical findings are supported by a numerical example.
",Improved estimator of population variance using information on auxiliary attribute in simple random sampling" " Molecular emission arising from the interactions of supernova remnant (SNR) shock waves and molecular clouds provides a tool for studying the dispersion and compression that might kick-start star formation, as well as for understanding cosmic ray enhancement in SNRs. Purely-rotational CO emission created by magneto-hydrodynamic shocks in the SNR-molecular cloud interaction is an effective shock tracer, particularly for slow-moving, continuous shocks into cold inner clumps of the molecular cloud. In this work, we present a new theoretical radiative transfer framework for predicting the line profile of CO with the Paris-Durham 1D shock model. We generated line profile predictions for CO emission produced by slow, magnetized C-shocks into gas of density ~ 100,000 cm^-3 with shock speeds of 35 and 50 km s^-1. The numerical framework to reproduce the CO line profile utilizes the Large Velocity Gradient (LVG) approximation and the omission of optically-thick plane-parallel slabs. With this framework, we generated predictions for various CO spectroscopic observations up to J=16 in SNRs W28 and IC443, obtained with SOFIA, IRAM-30m, APEX, and KPNO. We found that CO line profile prediction offers constraints on the shock velocity and preshock density independent of the absolute line brightness and requires fewer CO lines than diagnostics using a rotational excitation diagram. ",Modeling CO Line Profiles in Shocks of W28 and IC443 " The origin of magnetoresistance in bipolar organic materials is the influence of magnetic field on the dynamics of recombination within localized electron-hole pairs. Recombination from the $S$ spin-state of the pair is preceded by the beatings between the states $S$ and $T_0$. The period of the beating is set by the random hyperfine field. 
For the case when the recombination time from $S$ is shorter than the period, we demonstrate that a {\em weak} resonant ac drive, which couples $T_0$ to $T_+$ and $T_{-}$, dramatically affects the recombination dynamics and, thus, the current. A distinctive characteristic of the effect is that the current versus the drive amplitude exhibits a {\em maximum}. ",Effect of the resonant ac-drive on the spin-dependent recombination of polaron pairs: Relation to organic magnetoresistance " The presence of the $(B+L)$-conserving decay modes $n \to K^+ e^-,$ $n \to K^+ \mu^-,$ $p \to K^+ e^- \pi^+$ and $p \to K^+ \mu^- \pi^+$ is shown to be a characteristic feature of a class of models with explicit breaking of $R$-parity. These modes dominate over the $(B-L)$-conserving ones in certain regions of the parameter space; the impact of this scenario for nucleon decay searches at Super-Kamiokande is discussed. ",$(B+L)$-conserving Nucleon Decays in Supersymmetric Models " This work investigates the compatibility of $Zr_{2}AlC$ MAX phase-based ceramics with liquid LBE, and proposes a mechanism to explain the observed local $Zr_{2}AlC$/LBE interaction. The ceramics were exposed to oxygen-poor ($C_{O}\le2.2 \cdot10^{-10}$ mass%), static liquid LBE at 500{\deg}C for 1000 h. A new $Zr_{2}(Al,Bi,Pb)C$ MAX phase solid solution formed in-situ in the LBE-affected $Zr_{2}AlC$ grains. Out-of-plane ordering was favorable in the new solid solution, whereby $\textit{A}$-layers with high and low Bi/Pb contents alternated in the crystal structure, in agreement with first-principles calculations. Bulk $Zr_{2}(Al,Bi,Pb)C$ was synthesized by reactive hot pressing to study the crystal structure of the solid solution by neutron diffraction. 
","Compatibility of $Zr_{2}AlC$ MAX phase-based ceramics with oxygen-poor, static liquid lead-bismuth eutectic" " Giant pandas, stereotyped as silent animals, make significantly more vocal sounds during the breeding season, suggesting that sounds are essential for coordinating their reproduction and expression of mating preference. Previous biological studies have also proven that giant panda sounds are correlated with mating results and reproduction. This paper makes the first attempt to devise an automatic method for predicting mating success of giant pandas based on their vocal sounds. Given an audio sequence of mating giant pandas recorded during breeding encounters, we first crop out the segments with vocal sounds of giant pandas, and normalize their magnitude and length. We then extract acoustic features from the audio segment and feed the features into a deep neural network, which classifies the mating into success or failure. The proposed deep neural network employs convolutional layers followed by bidirectional gated recurrent units to extract vocal features, and applies an attention mechanism to force the network to focus on the most relevant features. Evaluation experiments on a data set collected during the past nine years yield promising results, proving the potential of audio-based automatic mating success prediction methods in assisting giant panda reproduction. ",Audio-based automatic mating success prediction of giant pandas " We examine astronomical observations that would be achievable over a future timeline corresponding to the documented history of human civilization so far, $\sim 10^4$ years. We examine implications for measurements of the redshift drift, evolution of the CMB, and cosmic parallax. A number of events that are rare on the scale of centuries will become easily observable on a timescale $\sim 10^4$ years. Implications for several measurements related to gravity are discussed. 
",Ultra Long-Term Cosmology and Astrophysics " Theoretical and numerical (Monte Carlo) N-particle computer model simulations show that Penrose Compton scattering (PCS) near the event horizon and Penrose pair production (PPP) at or near the photon orbit, in the ergosphere of a supermassive rotating black hole, can generate the necessary energy-momentum spectra to explain the origin of the mysterious fluxes of ultrarelativistic electrons, inferred from observations to emerge from the cores of Quasars 3C 279 and 3C 273, and other active galactic nuclei (AGNs). Particles from an accretion disk surrounding the black hole fall into the ergosphere and scatter off particles that are in trapped or bound unstable orbits. The Penrose mechanism allows rotational energy of a Kerr black hole, and energy-momentum produced by its strong gravitational field, to be extracted by scattered particles escaping from the ergosphere to infinity. The results of these model calculations show that the Penrose mechanism is capable of producing the observed high energy particles (~GeV) emitted by quasars and other AGNs. This mechanism can extract hard X-ray/gamma-ray photons from PCS of initially infalling low energy UV/soft X-ray photons by target orbiting electrons in the ergosphere. The PPP allows the escape of relativistic e-e+ pairs--produced by infalling low energy photons interacting with highly blueshifted target photons at the photon orbit. Moreover, and importantly, the emission of scattered particles by this mechanism naturally produces relativistic jets collimated about the polar axis, and in most cases one-sided or asymmetrical, agreeing with observations of AGNs. 
In these fully relativistic calculations, the energy-momentum four-vectors (or four-momenta) of the scattered particles are obtained. (ABRIDGED) ",Production of the High Energy-Momentum Spectra of Quasars 3C 279 and 3C 273 Using the Penrose Mechanism The dynamical evolution of internal space-like dimensions breaks the invariance of Maxwell's equations under Weyl rescaling of the (conformally flat) four-dimensional metric. Depending upon the number and upon the dynamics of internal dimensions, large-scale magnetic fields can be created. The requirements coming from magnetogenesis together with the other cosmological constraints are examined under the assumption that the internal dimensions either grow or shrink (in conformal time) prior to a radiation-dominated epoch. If the internal dimensions are growing, the generated magnetic fields can be strong enough to seed the galactic dynamo mechanism. ,Magnetogenesis and the dynamics of internal dimensions " Action recognition is a critical task for social robots to meaningfully engage with their environment. 3D human skeleton-based action recognition has been an attractive research area in recent years. Although the existing approaches are good at action recognition, it is a great challenge to recognize a group of actions in an activity scene. To tackle this problem, at first, we partition the scene into several primitive actions (PAs) based upon a motion attention mechanism. Then, the primitive actions are described by the trajectory vectors of the corresponding joints. After that, motivated by text classification based on word embedding, we employ a convolutional neural network (CNN) to recognize activity scenes by considering the motion of joints as the ""words"" of an activity. The experimental results on scenes from a human activity dataset show the efficiency of the proposed approach. 
","Object Activity Scene Description, Construction and Recognition" " Stochastic Dual Coordinate Descent (SDCD) has become one of the most efficient ways to solve the family of $\ell_2$-regularized empirical risk minimization problems, including linear SVM, logistic regression, and many others. The vanilla implementation of SDCD is quite slow; however, by maintaining primal variables while updating dual variables, the time complexity of SDCD can be significantly reduced. Such a strategy forms the core algorithm in the widely-used LIBLINEAR package. In this paper, we parallelize the SDCD algorithms in LIBLINEAR. In recent research, several synchronized parallel SDCD algorithms have been proposed; however, they fail to achieve good speedup in the shared-memory multi-core setting. In this paper, we propose a family of asynchronous stochastic dual coordinate descent algorithms (ASDCD). Each thread repeatedly selects a random dual variable and conducts coordinate updates using the primal variables that are stored in the shared memory. We analyze the convergence properties when different locking/atomic mechanisms are applied. For the implementation with atomic operations, we show linear convergence under mild conditions. For the implementation without any atomic operations or locking, we present the first {\it backward error analysis} for ASDCD under the multi-core environment, showing that the converged solution is the exact solution for a primal problem with a perturbed regularizer. Experimental results show that our methods are much faster than previous parallel coordinate descent solvers. ",PASSCoDe: Parallel ASynchronous Stochastic dual Co-ordinate Descent " We present a method to generate realistic, three-dimensional networks of crosslinked semiflexible polymers. The free energy of these networks is obtained from the force-extension characteristics of the individual polymers and their persistent directionality through the crosslinks. 
A Monte Carlo scheme is employed to obtain isotropic, homogeneous networks that minimize the free energy, and for which all of the relevant parameters can be varied: the persistence length, the contour length, as well as the crosslinking length may be chosen at will. We also provide an initial survey of the mechanical properties of our networks subjected to shear strains, showing them to display the expected non-linear stiffening behavior. Also, a key role for non-affinity and its relation to order in the network is uncovered. ",Monte Carlo study of multiply crosslinked semiflexible polymer networks " We derive explicit transformation formulae relating the renormalized quark mass and field as defined in the MS-bar scheme with the corresponding quantities defined in any other scheme. By analytically computing the three-loop quark propagator in the high-energy limit (that is, keeping only massless terms and terms of first order in the quark mass) we find the NNNLO conversion factors transforming the MS-bar quark mass and the renormalized quark field to those defined in a ``Regularization Invariant'' (RI) scheme which is more suitable for lattice QCD calculations. The NNNLO contribution in the mass conversion factor turns out to be large and comparable to the previous NNLO contribution at a scale of 2 GeV --- the typical normalization scale employed in lattice simulations. Thus, in order to get a precise prediction for the MS-bar masses of the light quarks from lattice calculations, the latter should use a somewhat higher scale of around, say, 3 GeV, where the (apparent) convergence of the perturbative series for the mass conversion factor is better. We also compute two more terms in the high-energy expansion of the MS-bar renormalized quark propagator. The result is then used to discuss the uncertainty caused by the use of the high energy limit in determining the MS-bar mass of the charmed quark. 
As a by-product of our calculations, we determine the four-loop anomalous dimensions of the quark mass and field in the Regularization Invariant scheme. Finally, we discuss some physical reasons lying behind the striking absence of zeta(4) in these computed anomalous dimensions. ",Renormalization and Running of Quark Mass and Field in the Regularization Invariant and MS-bar Schemes at Three and Four Loops " Starting with just the assumption of uniformly distributed orbital orientations, we derive expressions for the distributions of the Keplerian orbital elements as functions of arbitrary distributions of eccentricity and semi-major axis. We present methods for finding the probability density functions of the true anomaly, eccentric anomaly, orbital radius, and other parameters used in describing direct planetary observations. We also demonstrate the independence of the distribution of phase angle, which is highly significant in the study of direct searches, and present examples validating the derived expressions. ",Parameter distributions of Keplerian orbits " We demonstrate the successful operation of a multi-element superconducting nanowire single-photon detector (SSPD) array integrated with a single-flux-quantum (SFQ) readout circuit in a compact 0.1 W Gifford-McMahon cryocooler. A time-resolved readout technique, where output signals from each element enter the SFQ readout circuit with finite time intervals, revealed crosstalk-free operation of the four-element SSPD array connected with the SFQ readout circuit. The timing jitter and the system detection efficiency were measured to be 50 ps and 11.4%, respectively, which are comparable to the performance of practical single-pixel SSPD systems. ",Crosstalk-free operation of multi-element SSPD array integrated with SFQ circuit in a 0.1 Watt GM cryocooler " Let $X$ be a bipartite mixed graph and, for a unit complex number $\alpha$, let $H_\alpha$ be its $\alpha$-hermitian adjacency matrix. 
If $X$ has a unique perfect matching, then $H_\alpha$ has a hermitian inverse $H_\alpha^{-1}$. In this paper we give a full description of the entries of $H_\alpha^{-1}$ in terms of the paths between the vertices. Furthermore, for $\alpha$ equal to the primitive third root of unity $\gamma$ and for a unicyclic bipartite graph $X$ with a unique perfect matching, we characterize when $H_\gamma^{-1}$ is $\pm 1$ diagonally similar to the $\gamma$-hermitian adjacency matrix of a mixed graph. Through our work, we have provided a new construction for the $\pm 1$ diagonal matrix. ",Inverse of $\alpha$-Hermitian Adjacency Matrix of a Unicyclic Bipartite Graph " Let $\mu$ be a probability measure on $\text{Out}(F_N)$ with finite first logarithmic moment with respect to the word metric, finite entropy, and whose support generates a nonelementary subgroup of $\text{Out}(F_N)$. We show that almost every sample path of the random walk on $(\text{Out}(F_N),\mu)$, when realized in Culler and Vogtmann's outer space, converges to the simplex of a free, arational tree. We then prove that the space $\mathcal{FI}$ of simplices of free and arational trees, equipped with the hitting measure, is the Poisson boundary of $(\text{Out}(F_N),\mu)$. Using Bestvina-Reynolds' and Hamenst\""adt's description of the Gromov boundary of the complex $\mathcal{FF}_N$ of free factors of $F_N$, this gives a new proof of the fact, due to Calegari and Maher, that the realization in $\mathcal{FF}_N$ of almost every sample path of the random walk converges to a boundary point. We get in addition that $\partial\mathcal{FF}_N$, equipped with the hitting measure, is the Poisson boundary of $(\text{Out}(F_N),\mu)$. ",The Poisson boundary of $\text{Out}(F_N)$ " A symmetry in quantum mechanics is described by the projective representations of a Lie symmetry group that transforms between physical quantum states such that the square of the modulus of the states is invariant. 
The Heisenberg commutation relations, which are fundamental to quantum mechanics, must be valid in all of these physical states. This paper shows that the maximal quantum symmetry group, whose projective representations preserve the Heisenberg commutation relations in this manner, is the inhomogeneous symplectic group. The projective representations are equivalent to the unitary representations of the central extension of the inhomogeneous symplectic group. This centrally extended group is the semidirect product of the cover of the symplectic group and the Weyl-Heisenberg group. Its unitary irreducible representations are computed explicitly using the Mackey representation theorems for semidirect product groups. ",Maximal quantum mechanical symmetry: Projective representations of the inhomogeneous symplectic group " For users to trust model predictions, they need to understand model outputs, particularly their confidence - calibration aims to adjust (calibrate) models' confidence to match expected accuracy. We argue that the traditional calibration evaluation does not promote effective calibrations: for example, it can encourage always assigning a mediocre confidence score to all predictions, which does not help users distinguish correct predictions from wrong ones. Building on those observations, we propose a new calibration metric, MacroCE, that better captures whether the model assigns low confidence to wrong predictions and high confidence to correct predictions. Focusing on the practical application of open-domain question answering, we examine conventional calibration methods applied to the widely-used retriever-reader pipeline, none of which brings significant gains under our new MacroCE metric. Toward better calibration, we propose a new calibration method (ConsCal) that uses not just final model predictions but whether multiple model checkpoints make consistent predictions. 
Altogether, we provide an alternative view of calibration along with a new metric, a re-evaluation of existing calibration methods on our metric, and a proposal of a more effective calibration method. ",Re-Examining Calibration: The Case of Question Answering " We propose to extend the commonly known flow analysis in the transverse $p_x$-$p_y$ plane to novel flow coefficients based on the angular distribution in the $p_x$-$p_z$ and $p_y$-$p_z$ planes. The new flow coefficients, called $u_n$ and $w_n$ (in addition to $v_n$), also turn out to be highly sensitive to the nuclear Equation-of-State and can be used to explore the EoS in more detail than is possible using only $v_n$. As an example to quantify the effect of the EoS, the Ultra-relativistic Quantum Molecular Dynamics (UrQMD) model is used to investigate 20-30\% central Au+Au collisions at E$_\mathrm{lab}=1.23~A$GeV. ",3-dimensional flow analysis: A novel tool to study the collision geometry and the Equation-of-State " The Galactic Center has been under intense scrutiny in recent years thanks to unprecedented missions aiming at measuring the gas and star dynamics near the supermassive black hole (SMBH) and at finding gravitational wave (GW) signatures of inspiralling stellar black holes. In the crowded environment of galactic nuclei, the two-body interactions alter the distribution of stars on long timescales, making them drift in energy and angular momentum. We present a simplified analytical treatment of the scattering processes in galactic stellar nuclei, assuming all stars have the same mass. We discuss how the interplay between two-body relaxation and gravitational wave emission modifies the slope of the inner stellar cusp within the SMBH sphere of influence, and calculate the rates of tidal disruption events (TDEs) and main-sequence extreme-mass ratio inspirals (MS-EMRIs) of stars that are tidally disrupted by the SMBH. 
We find that typically the ratio of the TDE and MS-EMRI rates is the square of the ratio of the tidal and Schwarzschild radii. For our Galaxy, this implies that the rate of MS-EMRIs is just about a percent of the TDE rate. We then consider the role of stars injected on highly eccentric orbits in the vicinity of the SMBH due to the Hills binary disruption mechanism, and show that the MS-EMRI rate can almost approach the TDE rate if the binary fraction at the SMBH influence radius is close to unity. Finally, we discuss how physical stellar collisions affect a large area of phase space. ","Tidal disruption events, main-sequence extreme-mass ratio inspirals and binary star disruptions in galactic nuclei" " In addition to generating the appropriate perturbation power spectrum, an inflationary scenario must take into account the need for inflation to end subsequently. In the context of single-field inflation models where inflation ends by breaking of the slow-roll condition, we constrain the first and second derivatives of the inflaton potential using this additional requirement. We compare this with current observational constraints from the primordial spectrum and discuss several issues relating to our results. ",From the production of primordial perturbations to the end of inflation " In this paper we design {\sf FPT}-algorithms for two parameterized problems. The first is \textsc{List Digraph Homomorphism}: given two digraphs $G$ and $H$ and a list of allowed vertices of $H$ for every vertex of $G$, the question is whether there exists a homomorphism from $G$ to $H$ respecting the list constraints. The second problem is a variant of \textsc{Multiway Cut}, namely \textsc{Min-Max Multiway Cut}: given a graph $G$, a non-negative integer $\ell$, and a set $T$ of $r$ terminals, the question is whether we can partition the vertices of $G$ into $r$ parts such that (a) each part contains one terminal and (b) there are at most $\ell$ edges with only one endpoint in this part. 
We parameterize \textsc{List Digraph Homomorphism} by the number $\ell$ of edges of $G$ that are mapped to non-loop edges of $H$, and we give a time $2^{O(\ell\cdot\log h+\ell^2\cdot \log \ell)}\cdot n^{4}\cdot \log n$ algorithm, where $h$ is the order of the host graph $H$. We also prove that \textsc{Min-Max Multiway Cut} can be solved in time $2^{O((\ell r)^2\log \ell r)}\cdot n^{4}\cdot \log n$. Our approach introduces a general problem, called {\sc List Allocation}, whose expressive power permits the design of parameterized reductions of both aforementioned problems to it. Then our results are based on an {\sf FPT}-algorithm for the {\sc List Allocation} problem that is designed using a suitable adaptation of the {\em randomized contractions} technique (introduced by [Chitnis, Cygan, Hajiaghayi, Pilipczuk, and Pilipczuk, FOCS 2012]). ",Parameterized Algorithms for Min-Max Multiway Cut and List Digraph Homomorphism " The identity of dark matter is one of the greatest puzzles of our Universe. Its solution may be associated with supersymmetry, which is a fundamental space-time symmetry that has not been verified experimentally so far. In many supersymmetric extensions of the Standard Model of particle physics, the lightest supersymmetric particle cannot decay and is hence a promising dark matter candidate. The lightest neutralino, which appears already in the minimal supersymmetric model, can be identified as such a candidate in indirect and direct dark matter searches and at future colliders. As the superpartner of the graviton, the gravitino is another candidate for the lightest superparticle that provides a compelling explanation of dark matter. While it will neither be detected in indirect or direct searches nor be produced directly at accelerators, the analysis of late-decaying charged particles can allow for an experimental identification of the gravitino at future accelerators. 
In this way, the upcoming experiments at the CERN Large Hadron Collider may become a key to the understanding of our Universe. ",Supersymmetric Candidates for Dark Matter (in German) " Many powerful machine learning models are based on the composition of multiple processing layers, such as deep nets, which gives rise to nonconvex objective functions. A general, recent approach to optimise such ""nested"" functions is the method of auxiliary coordinates (MAC). MAC introduces an auxiliary coordinate for each data point in order to decouple the nested model into independent submodels. This decomposes the optimisation into steps that alternate between training single layers and updating the coordinates. It has the advantage that it reuses existing single-layer algorithms, introduces parallelism, and does not need to use chain-rule gradients, so it works with nondifferentiable layers. With large-scale problems, or when distributing the computation is necessary for faster training, the dataset may not fit in a single machine. It is then essential to limit the amount of communication between machines so it does not obliterate the benefit of parallelism. We describe a general way to achieve this, ParMAC. ParMAC works on a cluster of processing machines with a circular topology and alternates two steps until convergence: one step trains the submodels in parallel using stochastic updates, and the other trains the coordinates in parallel. Only submodel parameters, no data or coordinates, are ever communicated between machines. ParMAC exhibits high parallelism, low communication overhead, and facilitates data shuffling, load balancing, fault tolerance and streaming data processing. We study the convergence of ParMAC and propose a theoretical model of its runtime and parallel speedup. We develop ParMAC to learn binary autoencoders for fast, approximate image retrieval. 
We implement it in MPI in a distributed system and demonstrate nearly perfect speedups in a 128-processor cluster with a training set of 100 million high-dimensional points. ","ParMAC: distributed optimisation of nested functions, with application to learning binary autoencoders" " Analytical relations for the mechanical response of single polymer chains are valuable for modeling purposes, on both the molecular and the continuum scale. These relations can be obtained using statistical thermodynamics and an idealized single-chain model, such as the freely jointed chain model. To include bond stretching, the rigid links in the freely jointed chain model can be made extensible, but this almost always renders the model analytically intractable. Here, an asymptotically correct statistical thermodynamic theory is used to develop analytic approximations for the single-chain mechanical response of this model. The accuracy of these approximations is demonstrated using several link potential energy functions. This approach can be applied to other single-chain models, and to molecular stretching in general. ",Freely jointed chain models with extensible links " Language-enabled AI systems can answer complex, multi-hop questions with high accuracy, but supporting answers with evidence is a more challenging task that is important for transparency and trustworthiness to users. Prior work in this area typically makes a trade-off between efficiency and accuracy; state-of-the-art deep neural network systems are too cumbersome to be useful in large-scale applications, while the fastest systems lack reliability. In this work, we integrate fast syntactic methods with powerful semantic methods for multi-hop explanation generation based on declarative facts. 
Our best system, which learns a lightweight operation to simulate multi-hop reasoning over pieces of evidence and fine-tunes language models to re-rank generated explanation chains, outperforms a purely syntactic baseline from prior work by up to 7% in gold explanation retrieval rate. ",Best of Both Worlds: A Hybrid Approach for Multi-Hop Explanation with Declarative Facts " Radiative diffusion damps acoustic modes at large comoving wavenumber (k) before decoupling (``Silk damping''). In a simple WKB analysis, neglecting moments of the temperature distribution beyond the quadrupole, damping appears in the acoustic mode as a term of order ik^2/(taudot), where taudot is the scattering rate per unit conformal time. Although the Jeans instability is stabilized on scales smaller than the adiabatic Jeans length, I show that the medium is linearly unstable to first order in (1/taudot) to a slow diffusive mode. At large comoving wavenumber, the characteristic growth rate becomes independent of spatial scale and constant: (t_{KH}a)^-1 ~ (128 pi G/9 kappa_T c)(rho_m/rho_b), where ""a"" is the scale factor, rho_m and rho_b are the matter and baryon energy density, respectively, and kappa_T is the Thomson opacity. This is the characteristic timescale for a fluid parcel to radiate away its thermal energy content at the Eddington limit, analogous to the Kelvin-Helmholtz (KH) time for a massive star or the Salpeter time for black hole growth. Although this mode grows at all times prior to decoupling and on scales smaller than the horizon, the growth time is long, about 100 times the age of the universe at decoupling. Thus, it modifies the density and temperature perturbations on small scales only at the percent level. The physics of this mode is already accounted for in the popular codes CMBFAST and CAMB, but is typically neglected in analytic studies of the growth of primordial perturbations. 
This work clarifies the physics of this instability in the epoch before decoupling, and emphasizes that the universe is formally unstable on scales below the horizon, even in the limit of large taudot. Analogous instabilities at yet earlier epochs are also mentioned. (Abridged) ",Slow Diffusive Gravitational Instability Before Decoupling " Stabilizer states are extensively studied in quantum information theory for their structures based on the Pauli group. Calderbank-Shor-Steane (CSS) stabilizer states are of particular importance in their application to fault-tolerant quantum computation (FTQC). However, how to fault-tolerantly prepare arbitrary CSS stabilizer states for general CSS stabilizer codes is still unknown, and their preparation can be highly costly in computational resources. In this paper, we show how to prepare a large class of CSS stabilizer states useful for FTQC. We propose distillation protocols using syndrome encoding by classical codes or quantum CSS codes. Along the same lines, we show that classical coding techniques can reduce the ancilla consumption in Steane syndrome extraction by using additional transversal controlled-NOT gates and classical computing power. In the scenario of a fixed ancilla consumption rate, we can increase the frequency of quantum error correction and effectively lower the error rate. ",Fault-tolerant Preparation of Stabilizer States for Quantum CSS Codes by Classical Error-Correcting Codes " In this paper, important concepts from finite group theory are translated to localities, in particular to linking localities. Here localities are group-like structures associated to fusion systems which were introduced by Chermak. Linking localities (by Chermak also called proper localities) are special kinds of localities which correspond to linking systems. 
Thus they contain the algebraic information that is needed to study $p$-completed classifying spaces of fusion systems as generalizations of $p$-completed classifying spaces of finite groups. Because of the group-like nature of localities, there is a natural notion of partial normal subgroups. Given a locality $\mathcal{L}$ and a partial normal subgroup $\mathcal{N}$ of $\mathcal{L}$, we show that there is a largest partial normal subgroup $\mathcal{N}^\perp$ of $\mathcal{L}$ which, in a certain sense, commutes elementwise with $\mathcal{N}$ and thus morally plays the role of a ""centralizer"" of $\mathcal{N}$ in $\mathcal{L}$. This leads to a nice notion of the generalized Fitting subgroup $F^*(\mathcal{L})$ of a linking locality $\mathcal{L}$. Building on these results we define and study special kinds of linking localities called regular localities. It turns out that there is a theory of components of regular localities akin to the theory of components of finite groups. The main concepts we introduce and work with in the present paper (in particular $\mathcal{N}^\perp$ in the special case of linking localities, $F^*(\mathcal{L})$, regular localities and components of regular localities) were already introduced and studied in a preprint by Chermak. However, we give a different and self-contained approach to the subject where we reprove Chermak's theorems and also show several new results. ",Commuting partial normal subgroups and regular localities " We study a class of non-protected local composite operators which occur in the R symmetry singlet channel of the OPE of two stress-tensor multiplets in {\cal N}=4 SYM. At tree level these are quadrilinear scalar dimension four operators, two single-traces and two double-traces. In the presence of interaction, due to a non-trivial mixing under renormalization, they split into linear combinations of conformally covariant operators. 
We resolve the mixing by computing the one-loop two-point functions of all the operators in an {\cal N}=1 setup, then diagonalizing the anomalous dimension matrix and identifying the quasiprimary operators. We find one operator whose anomalous dimension is negative and suppressed by a factor of 1/N^2 with respect to the anomalous dimensions of the Konishi-like operators. We reveal the mechanism responsible for this suppression and argue that it works at every order in perturbation theory. In the context of the AdS/CFT correspondence such an operator should be dual to a multiparticle supergravity state whose energy is less than the sum of the corresponding individual single-particle states. ",Non-protected operators in N=4 SYM and multiparticle states of AdS_5 SUGRA " Objective: This work aims at providing a new method for the automatic detection of atrial fibrillation, other arrhythmia and noise on short single lead ECG signals, emphasizing the importance of the interpretability of the classification results. Approach: A morphological and rhythm description of the cardiac behavior is obtained by a knowledge-based interpretation of the signal using the \textit{Construe} abductive framework. Then, a set of meaningful features are extracted for each individual heartbeat and as a summary of the full record. The feature distributions were used to elucidate the expert criteria underlying the labeling of the 2017 Physionet/CinC Challenge dataset, enabling a manual partial relabeling to improve the consistency of the classification rules. Finally, state-of-the-art machine learning methods are combined to provide an answer on the basis of the feature values. Main results: The proposal tied for the first place in the official stage of the Challenge, with a combined $F_1$ score of 0.83, and was even improved in the follow-up stage to 0.85 with a significant simplification of the model. 
Significance: This approach demonstrates the potential of \textit{Construe} to provide robust and valuable descriptions of temporal data even with significant amounts of noise and artifacts. Also, we discuss the importance of consistent classification criteria in manually labeled training datasets, and the fundamental advantages of knowledge-based approaches to formalize and validate those criteria. ",Abductive reasoning as the basis to reproduce expert criteria in ECG Atrial Fibrillation identification " Robust estimation is much more challenging in high dimensions than it is in one dimension: Most techniques either lead to intractable optimization problems or estimators that can tolerate only a tiny fraction of errors. Recent work in theoretical computer science has shown that, in appropriate distributional models, it is possible to robustly estimate the mean and covariance with polynomial time algorithms that can tolerate a constant fraction of corruptions, independent of the dimension. However, the sample and time complexity of these algorithms is prohibitively large for high-dimensional applications. In this work, we address both of these issues by establishing sample complexity bounds that are optimal, up to logarithmic factors, as well as giving various refinements that allow the algorithms to tolerate a much larger fraction of corruptions. Finally, we show on both synthetic and real data that our algorithms have state-of-the-art performance and suddenly make high-dimensional robust estimation a realistic possibility. ",Being Robust (in High Dimensions) Can Be Practical " Using the high-resolution observations obtained by the Hubble Space Telescope, we analyzed the blue straggler stars (BSSs) in the Large Magellanic Cloud cluster NGC 2213. We found that the radial distribution of BSSs is consistent with that of the normal giant stars in NGC 2213, showing no evidence of mass segregation. 
However, an analytic calculation carried out for these BSSs shows that they are already dynamically old, because the estimated half-mass relaxation time for these BSSs is significantly shorter than the isochronal age of the cluster. We also performed direct N-body simulations for an NGC 2213-like cluster to understand the dynamical processes that lead to this non-segregated radial distribution of BSSs. Our numerical simulation shows that the presence of black hole subsystems inside the cluster centre can significantly affect the dynamical evolution of BSSs. The combined effects of the delayed segregation, binary disruption and exchange interactions of BSS progenitor binaries may result in this non-segregated radial distribution of BSSs in NGC 2213. ",Blue straggler stars beyond the Milky Way: a non-segregated population in the Large Magellanic Cloud cluster NGC 2213 " Broad-band (0.8-70 keV) spectra of the persistent X-ray emission from 9 magnetars were obtained with Suzaku, including 3 objects in apparent outburst. The soft X-ray component was detected from all of them, with a typical blackbody temperature of kT ~ 0.5 keV, while the hard-tail component, dominating above ~10 keV, was detected at ~1 mCrab intensity from 7 of them. Therefore, the spectrum composed of soft emission and a hard-tail component may be considered to be a common property of magnetars, both in their active and quiescent states. Wide-band spectral analyses revealed that the hard-tail component has a 1-60 keV flux, Fh, comparable to or even higher than that carried by the 1-60 keV soft component, Fs. The hardness ratio of these objects, defined as xi=Fh/Fs, was found to be tightly anti-correlated with their characteristic age tau as xi=(3.3+/-0.3)x(tau/1 kyr)^(-0.67+/-0.04) with a correlation coefficient of -0.989, over the range from xi~10 to xi~0.1. Magnetars in outburst states were found to lie on the same correlation as relatively quiescent ones. 
This hardness ratio is also positively correlated with their surface magnetic fields with a correlation coefficient of 0.873. In addition, the hard-tail component becomes harder towards sources with older characteristic ages, with the photon index changing from ~1.7 to ~0.4. ",Broad-band study with Suzaku of the magnetar class " We find a parametric resonance in the GHz range of the DNA dynamics, generated by pumping hypersound . There are localized phonon modes caused by the random structure of elastic modulii due to the sequence of base pairs. ",Acoustic Spectroscopy of the DNA in GHz range " Little is known about clouded leopards (Neofelis nebulosa), who have a vulnerable population that extends across southern Asia. We reviewed the literature and synthesized what is known about their ecology and behavior. Much of the published literature either note detections within and on the edges of their range, or are anecdotal observations, many of which are decades if not over a century old. Clouded leopards are a medium-sized felid, with distinctive cloud-shape markings, and notably long canines relative to skull size. Estimates for population densities range from 0.58 to 6.53 individuals per 100 km2. Only 7 clouded leopards have been tracked via radio-collars, and home range estimates range from 33.6-39.7 km2 for females and 35.5-43.5 km2 for males. Most accounts describe clouded leopards as nocturnal, but radio telemetry studies showed that clouded leopards have arrhythmic activity patterns, with highest activity in the morning followed by evening crepuscular hours. There has never been a targeted study of clouded leopard diet, but observations show that they consume a variety of animals, including ungulates, primates, and rodents. We encourage future study of their population density and range to inform conservation efforts, and ecological studies in order to understand the species and its ecological niche. 
",A review of our current knowledge of clouded leopards (Neofelis nebulosa) " The aim of this paper is to study the asymptotic properties of a class of kernel conditional mode estimates whenever functional stationary ergodic data are considered. To be more precise on the matter, in the ergodic data setting, we consider a random element $(X, Z)$ taking values in some semi-metric abstract space $E\times F$. For a real function $\varphi$ defined on the space $F$ and $x\in E$, we consider the conditional mode of the real random variable $\varphi(Z)$ given the event $``X=x""$. While estimating the conditional mode function, say $\theta_\varphi(x)$, using the well-known kernel estimator, we establish the strong consistency with rate of this estimate uniformly over Vapnik-Chervonenkis classes of functions $\varphi$. Notice that the ergodic setting offers a more general framework than the usual mixing structure. Two applications to energy data are provided to illustrate some examples of the proposed approach in a time series forecasting framework. The first one consists in forecasting the {\it daily peak} of electricity demand in France (measured in Giga-Watt), whereas the second one deals with the short-term forecasting of the electrical {\it energy} (measured in Giga-Watt per Hour) that may be consumed over some time intervals that cover the peak demand. ",Rate of uniform consistency for a class of mode regression on functional stationary ergodic data. Application to electricity consumption " Monte Carlo simulations are performed to study the structure and dynamics of a protein CoVE in random media generated by a random distribution of barriers at concentration c with a coarse-grained model in its native (low temperature) and denatured (high temperature) phase. The stochastic dynamics of the protein is diffusive in the denatured phase at low c; it slows down on increasing c and stops moving beyond a threshold (cth = 0.10). 
In the native phase, the protein moves extremely slowly at low c but speeds up on further increasing c in a characteristic range (c = 0.10 - 0.20) before getting trapped at high c (cth = 0.30). The radius of gyration (Rg) of CoVE shows a different non-monotonic dependence on c (increase followed by decay) in the native and denatured phases, with a higher and sharper rate of change in the former. The effective dimension (D) of CoVE is estimated from the scaling of the structure factor: in the denatured phase, D = 2 (a random coil conformation) at low c (= 0.01 - 0.10), with the appearance of some globularization, i.e., D ~ 2.3, 2.5 at higher c (= 0.2, 0.3). Increasing c seems to reduce the globularity (D = 3) of CoVE in the native phase. ",A Monte Carlo simulation of a protein (CoVE) in a matrix of random barriers " We consider the problem of multiple agents or robots searching for a target in the plane. This is motivated by Search and Rescue operations (SAR) on the high seas, which in the past were often performed with several vessels, and more recently by swarms of aerial drones and/or unmanned surface vessels. Coordinating such a search in an effective manner is a non-trivial task. In this paper, we first develop an optimal strategy for searching with k robots starting from a common origin and moving at unit speed. We then apply the results from this model to more realistic scenarios such as differential search speeds, late arrival times to the search effort and low probability of detection under poor visibility conditions. We show that, surprisingly, the theoretical idealized model still governs the search with certain suitable minor adaptations. ",Optimal Distributed Searching in the Plane with and without Uncertainty " We present an extensive analysis of several string solutions in AdS_5 x Ypq and find some interesting properties of their energy-spin relations. Their energy always depends on the parameter a(p,q) which characterizes these manifolds. 
The range of this parameter for the string solutions is constrained by the Sasaki-Einstein constraints that the solutions should satisfy. Hence, some of the string solutions we find are not valid for the whole class of Ypq manifolds. For some of our solutions, when the maximum allowed value of a(p,q) corresponds to the string approaching the poles of the squashed sphere in Ypq, their energy at this limit approaches the BPS one. Thus certain non-BPS string solutions in the whole class of Sasaki-Einstein manifolds can become BPS in particular manifolds. For the solutions with this property we point out that this behavior is independent of the string motion in the other directions on the manifold. We expect that in the field theory the generic operators corresponding to these semi-classical strings become BPS at certain quivers. ",On the non-BPS string solutions in Sasaki-Einstein gauge/gravity duality " An investigation of optimal feedback controllers' performance and robustness is carried out for vortex shedding behind a 2D cylinder at low Reynolds numbers. To facilitate controller design, we present an efficient modelling approach in which we utilise the resolvent operator to recast the linearised Navier-Stokes equations into an input-output form from which frequency responses can be computed. The difficulty of applying modern control design techniques to complex, high-dimensional flow systems is thus overcome by using low-order models identified from these frequency responses. The low-order models are used to design optimal control laws using $\mathcal{H}_{\infty}$ loop shaping. Two distinct control arrangements are considered, both of which employ a single input and a single output. In the first control arrangement, a velocity sensor located in the wake drives a pair of body forces near the cylinder. Complete suppression of shedding is observed up to a Reynolds number of $Re=110$. 
Due to the convective nature of vortex shedding and the corresponding time delays, we observe a fundamental trade-off: the sensor should be close enough to the cylinder to avoid any excessive time lag, but it should be kept sufficiently far from the cylinder to measure any unstable modes developing downstream. It is found that these two conflicting requirements become more difficult to satisfy for larger Reynolds numbers. In the second control arrangement, we consider a practical setup with a body-mounted force sensor and an actuator that oscillates the cylinder according to the lift measurement. It is shown that the system is stabilised only up to $Re=100$, and we demonstrate why the performance of the resulting feedback controllers deteriorates much more rapidly with increasing Reynolds number. The challenges of designing robust controllers for each control setup are also analysed and discussed. ",Feedback control of vortex shedding using a resolvent-based modelling approach " Stars form predominantly in groups usually denoted as clusters or associations. The observed stellar groups display a broad spectrum of masses, sizes and other properties, so it is often assumed that there is no underlying structure in this diversity. Here we show that the assumption of an unstructured multitude of cluster or association types might be misleading. Current data compilations of clusters show correlations between cluster mass, size, age, maximum stellar mass etc. In this first paper we take a closer look at the correlation of cluster mass and radius. We use literature data to explore relations in cluster and molecular core properties in the solar neighborhood. We show that for embedded clusters in the solar neighborhood there exists a clear correlation between cluster mass and half-mass radius of the form $M_c = C R_c^{\gamma}$ with gamma = 1.7 +/-0.2. 
This correlation holds for infrared K-band data as well as X-ray sources, and for clusters containing a hundred stars up to those consisting of a few tens of thousands of stars. The correlation is difficult to verify for clusters containing <30 stars due to low-number statistics. Dense clumps of gas are the progenitors of the embedded clusters. We find a similar slope for the mass-size relation of dense, massive clumps as for the embedded star clusters. This might point at a direct translation from gas to stellar mass; however, it is difficult to relate size measurements for clusters (stars) to those for gas profiles. Taking into account multiple paths for clump mass into cluster mass, we obtain an average star-formation efficiency of 18^{+9.3}_{-5.7}% for the embedded clusters in the solar neighborhood. The derived mass-radius relation gives constraints for the theory of clustered star formation. Analytical models and simulations of clustered star formation have to reproduce this relation in order to be realistic (abridged) ",Observational constraints on star cluster formation theory - I. The mass-radius relation " This report describes a pre-trained language model Erlangshen with propensity-corrected loss, the No.1 in the CLUE Semantic Matching Challenge. In the pre-training stage, we construct a dynamic masking strategy based on knowledge in Masked Language Modeling (MLM) with whole word masking. Furthermore, by observing the specific structure of the dataset, the pre-trained Erlangshen applies propensity-corrected loss (PCL) in the fine-tuning phase. Overall, we achieve 72.54 points in F1 Score and 78.90 points in Accuracy on the test set. Our code is publicly available at: https://github.com/IDEA-CCNL/Fengshenbang-LM/tree/hf-ds/fengshen/examples/clue_sim. 
",Towards No.1 in CLUE Semantic Matching Challenge: Pre-trained Language Model Erlangshen with Propensity-Corrected Loss " Many large-scale machine learning problems involve estimating an unknown parameter $\theta_{i}$ for each of many items. For example, a key problem in sponsored search is to estimate the click through rate (CTR) of each of billions of query-ad pairs. Most common methods, though, only give a point estimate of each $\theta_{i}$. A posterior distribution for each $\theta_{i}$ is usually more useful but harder to get. We present a simple post-processing technique that takes point estimates or scores $t_{i}$ (from any method) and estimates an approximate posterior for each $\theta_{i}$. We build on the idea of calibration, a common post-processing technique that estimates $\mathrm{E}\left(\theta_{i}\!\!\bigm|\!\! t_{i}\right)$. Our method, second order calibration, uses empirical Bayes methods to estimate the distribution of $\theta_{i}\!\!\bigm|\!\! t_{i}$ and uses the estimated distribution as an approximation to the posterior distribution of $\theta_{i}$. We show that this can yield improved point estimates and useful accuracy estimates. The method scales to large problems - our motivating example is a CTR estimation problem involving tens of billions of query-ad pairs. ",Second Order Calibration: A Simple Way to Get Approximate Posteriors " In this work we give an estimate of the neutrino flux that can be expected from the microquasar Cyg X-3. We calculate the muon neutrino flux expected here on Earth as well as the corresponding number of neutrino events in the IceCube telescope based on the so-called hypersoft X-ray state of Cyg~X-3. If the average emission from Cyg~X-3 over a period of 5 yr were as high as during this hypersoft state, a total of 0.8 events should be observed by the full IceCube telescope. We also show that this conclusion holds to within a factor of a few when we consider the other measured X-ray states. 
Using the correlation of the AGILE data on the flaring episodes in 2009 June and July with the hypersoft X-ray state, we calculate that the upper limits on the neutrino flux given by IceCube are starting to constrain the hadronic models, which have been introduced to interpret the high-energy emission detected by AGILE. ",Estimation of the neutrino flux and resulting constraints on hadronic emission models for Cyg X-3 using AGILE data " We prove an exponential upper bound for the number $f(m,n)$ of all maximal triangulations of the $m\times n$ grid: \[ f(m,n) < 2^{3mn}. \] In particular, this improves a result of S. Yu. Orevkov (1999). ",An Upper Bound for the Number of Planar Lattice Triangulations " Long-tailed datasets, where head classes comprise many more training samples than tail classes, cause recognition models to become biased towards the head classes. Weighted loss is one of the most popular ways of mitigating this issue, and a recent work has suggested that class-difficulty might be a better clue than the conventionally used class-frequency to decide the distribution of weights. A heuristic formulation was used in the previous work for quantifying the difficulty, but we empirically find that the optimal formulation varies depending on the characteristics of datasets. Therefore, we propose Difficulty-Net, which learns to predict the difficulty of classes using the model's performance in a meta-learning framework. To make it learn reasonable difficulty of a class within the context of other classes, we newly introduce two key concepts, namely the relative difficulty and the driver loss. The former helps Difficulty-Net take other classes into account when calculating difficulty of a class, while the latter is indispensable for guiding the learning to a meaningful direction. Extensive experiments on popular long-tailed datasets demonstrated the effectiveness of the proposed method, and it achieved state-of-the-art performance on multiple long-tailed datasets. 
",Difficulty-Net: Learning to Predict Difficulty for Long-Tailed Recognition " We discuss the resummation procedure of the superpotential in 4d $\mathcal{N}=2$ SYM theory without matter from the point of view of 2d Liouville conformal field theory, utilizing the AGT correspondence. We identify contributions of different descendants of the intermediate state, defining a conformal block in CFT, to the superpotential and answer the question, which descendants are responsible for the appearance of the branch cuts in the superpotential. ",On resummation of the irregular conformal block " Accurate traffic forecasting is vital to an intelligent transportation system. Although many deep learning models have achieved state-of-art performance for short-term traffic forecasting of up to 1 hour, long-term traffic forecasting that spans multiple hours remains a major challenge. Moreover, most of the existing deep learning traffic forecasting models are black box, presenting additional challenges related to explainability and interpretability. We develop Graph Pyramid Autoformer (X-GPA), an explainable attention-based spatial-temporal graph neural network that uses a novel pyramid autocorrelation attention mechanism. It enables learning from long temporal sequences on graphs and improves long-term traffic forecasting accuracy. Our model can achieve up to 35 % better long-term traffic forecast accuracy than that of several state-of-the-art methods. The attention-based scores from the X-GPA model provide spatial and temporal explanations based on the traffic dynamics, which change for normal vs. peak-hour traffic and weekday vs. weekend traffic. ",Explainable Graph Pyramid Autoformer for Long-Term Traffic Forecasting " The popularity of on-demand ride pooling is owing to the benefits offered to customers (lower prices), taxi drivers (higher revenue), environment (lower carbon footprint due to fewer vehicles) and aggregation companies like Uber (higher revenue). 
To achieve these benefits, two key interlinked challenges have to be solved effectively: (a) pricing -- setting prices for customer requests for taxis; and (b) matching -- assignment of customers (that accepted the prices) to taxis/cars. Traditionally, both these challenges have been studied individually and using myopic approaches (considering only current requests), without considering the impact of current matching on addressing future requests. In this paper, we develop a novel framework that handles the pricing and matching problems together, while also considering the future impact of the pricing and matching decisions. In our experimental results on a real-world taxi dataset, we demonstrate that our framework can significantly improve revenue (up to 17% and on average 6.4%) in a sustainable manner by reducing the number of vehicles (up to 14% and on average 10.6%) required to obtain a given fixed revenue and the overall distance travelled by vehicles (up to 11.1% and on average 3.7%). That is to say, we are able to provide an ideal win-win scenario for all stakeholders (customers, drivers, aggregator, environment) involved by obtaining higher revenue for customers, drivers, aggregator (ride pooling company) while being good for the environment (due to fewer vehicles on the road and less fuel consumed). ",Future Aware Pricing and Matching for Sustainable On-demand Ride Pooling " In the phenomenon of electromagnetically induced transparency (EIT) of a three-level atomic system, the linear susceptibility at the dipole-allowed transition is canceled through destructive interference of the direct transition and an indirect transition pathway involving a meta-stable level, enabled by optical pumping. EIT not only leads to light transmission at otherwise opaque atomic transition frequencies, but also results in the slowing of light group velocity and enhanced optical nonlinearity. 
In this letter, we report an analogous behavior, denoted as phonon-induced transparency (PIT), in AB-stacked bilayer graphene nanoribbons. Here, light absorption due to the plasmon excitation is suppressed in a narrow window due to the coupling with the infrared active {\Gamma}-point optical phonon, whose function here is similar to that of the meta-stable level in EIT of atomic systems. We further show that PIT in bilayer graphene is actively tunable by electrostatic gating, and estimate a maximum slow light factor of around 500 at the phonon frequency of 1580 cm-1, based on the measured spectra. Our demonstration opens an avenue for the exploration of few-photon non-linear optics and slow light in this novel two-dimensional material, without external optical pumping and at room temperature. ",Tunable phonon-induced transparency in bilayer graphene nanoribbons " The nematic twist-bend ($\mathrm{N_{TB}}$) phase, exhibited by certain thermotropic liquid crystalline (LC) dimers, represents a new orientationally ordered mesophase -- the first distinct nematic variant discovered in many years. The $\mathrm{N_{TB}}$ phase is distinguished by a heliconical winding of the average molecular long axis (director) with a remarkably short (nanoscale) pitch and, in systems of achiral dimers, with an equal probability to form right- and left-handed domains. The $\mathrm{N_{TB}}$ structure thus provides another fascinating example of spontaneous chiral symmetry breaking in nature. The order parameter driving the formation of the heliconical state has been theoretically conjectured to be a polarization field, deriving from the bent conformation of the dimers, that rotates helically with the same nanoscale pitch as the director field. It therefore presents a significant challenge for experimental detection. 
Here we report a second harmonic light scattering (SHLS) study on two achiral, $\mathrm{N_{TB}}$-forming LCs, which is sensitive to the polarization field due to micron-scale distortion of the helical structure associated with naturally-occurring textural defects. These defects are parabolic focal conics of smectic-like ""pseudo-layers"", defined by planes of equivalent phase in a coarse-grained description of the $\mathrm{N_{TB}}$ state. Our SHLS data are explained by a coarse-grained free energy density that combines a Landau-deGennes expansion of the polarization field, the elastic energy of a nematic, and a linear coupling between the two. ",Second harmonic light scattering induced by defects in the twist-bend nematic phase of liquid crystal dimers " Sparse linear regression is a central problem in high-dimensional statistics. We study the correlated random design setting, where the covariates are drawn from a multivariate Gaussian $N(0,\Sigma)$, and we seek an estimator with small excess risk. If the true signal is $t$-sparse, information-theoretically, it is possible to achieve strong recovery guarantees with only $O(t\log n)$ samples. However, computationally efficient algorithms have sample complexity linear in (some variant of) the condition number of $\Sigma$. Classical algorithms such as the Lasso can require significantly more samples than necessary even if there is only a single sparse approximate dependency among the covariates. We provide a polynomial-time algorithm that, given $\Sigma$, automatically adapts the Lasso to tolerate a small number of approximate dependencies. In particular, we achieve near-optimal sample complexity for constant sparsity and if $\Sigma$ has few ``outlier'' eigenvalues. Our algorithm fits into a broader framework of feature adaptation for sparse linear regression with ill-conditioned covariates. 
With this framework, we additionally provide the first polynomial-factor improvement over brute-force search for constant sparsity $t$ and arbitrary covariance $\Sigma$. ",Feature Adaptation for Sparse Linear Regression " We investigate the properties of the mixed-mode (RRd) RR Lyrae (RRL) variables in the Fornax dwarf spheroidal (dSph) galaxy by using $B$- and $V$-band time series collected over twenty-four years. We compare the properties of the RRds in Fornax with those in the Magellanic Clouds and in nearby dSphs, with special focus on Sculptor. We found that the ratio of RRds over the total number of RRLs decreases with metallicity. Typically, dSphs have very few RRds with 0.49$\ltsim P_0 \ltsim $0.53 days, but Fornax fills this period gap in the Petersen diagram (ratio between first overtone over fundamental period versus fundamental period). We also found that the distribution in the Petersen diagram of Fornax RRds is similar to SMC RRds, thus suggesting that their old stars have a similar metallicity distribution. We introduce the Period-Amplitude RatioS (PARS) diagram, a new pulsation diagnostics independent of distance and reddening. We found that LMC RRds in this plane are distributed along a short- and a long-period sequence that we identified as the metal-rich and the metal-poor component. These two groups are also clearly separated in the Petersen and Bailey (luminosity amplitude versus logarithmic period) diagrams. These circumstantial evidence indicates that the two groups have different evolutionary properties. All the pulsation diagnostics adopted in this investigation suggest that old stellar populations in Fornax and Sculptor dSphs underwent different chemical enrichment histories. Fornax RRds are similar to SMC RRds, while Sculptor RRds are more similar to the metal-rich component of the LMC RRds. ",On the Use of Field RR Lyrae as Galactic Probes. VI. 
Mixed mode RR Lyrae variables in Fornax and in nearby dwarf galaxies " In the recent past, there has been a concerted effort to develop mathematical models for real-world networks and to analyze various dynamics on these models. One particular problem of significant importance is to understand the effect of random edge lengths or costs on the geometry and flow transporting properties of the network. Two different regimes are of great interest, the weak disorder regime where optimality of a path is determined by the sum of edge weights on the path and the strong disorder regime where optimality of a path is determined by the maximal edge weight on the path. In the context of the stochastic mean-field model of distance, we provide the first mathematically tractable model of weak disorder and show that no transition occurs at finite temperature. Indeed, we show that for every finite temperature, the number of edges on the minimal weight path (i.e., the hopcount) is $\Theta(\log{n})$ and satisfies a central limit theorem with asymptotic means and variances of order $\Theta(\log{n})$, with limiting constants expressible in terms of the Malthusian rate of growth and the mean of the stable-age distribution of an associated continuous-time branching process. More precisely, we take independent and identically distributed edge weights with distribution $E^s$ for some parameter $s>0$, where $E$ is an exponential random variable with mean 1. Then the asymptotic mean and variance of the central limit theorem for the hopcount are $s\log{n}$ and $s^2\log{n}$, respectively. We also find limiting distributional asymptotics for the value of the minimal weight path in terms of extreme value distributions and martingale limits of branching processes. 
",Weak disorder asymptotics in the stochastic mean-field model of distance " We investigate features of the lateral distribution function (LDF) of the radio signal emitted by cosmic ray air-showers with primary energies $> 0.1$~EeV and its connection to air-shower parameters such as energy and shower maximum using CoREAS simulations made for the configuration of the Tunka-Rex antenna array. Taking into account all significant contributions to the total radio emission, such as the geomagnetic effect, the charge excess, and atmospheric refraction, we parameterize the radio LDF. This parameterization is two-dimensional and has several free parameters. The large number of free parameters is not suitable for experiments with sparse arrays operating at low signal-to-noise ratios (SNR). Thus, exploiting symmetries, we decrease the number of free parameters and reduce the LDF to a simple one-dimensional function. The remaining parameters can be fit with a small number of points, i.e., with as few as three antennas with signal above the detection threshold. Finally, we present a method for the reconstruction of air-shower parameters, in particular, energy and $X_{\mathrm{max}}$ (shower maximum), which can be reached with a theoretical accuracy of better than 15\% and 30~g/cm$^2$, respectively. ",Reconstruction of air-shower parameters for large-scale radio detectors using the lateral distribution " Diffusion processes are central to human interactions. One common prediction of the current modeling frameworks is that initial spreading dynamics follow exponential growth. Here, we find that, ranging from mobile handsets to automobiles, from smart-phone apps to scientific fields, early growth patterns follow a power law with non-integer exponents. We test the hypothesis that mechanisms specific to substitution dynamics may play a role, by analyzing a unique dataset tracing 3.6M individuals substituting for different mobile handsets. 
We uncover three generic ingredients governing substitutions, allowing us to develop a minimal substitution model, which not only explains the power-law growth, but also collapses diverse growth trajectories of individual constituents into a single curve. These results offer a mechanistic understanding of power-law early growth patterns emerging from various domains and demonstrate that substitution dynamics are governed by robust self-organizing principles that go beyond the particulars of individual systems. ",Emergence of Scaling in Complex Substitutive Systems " We present a simple tight-binding model for the two-dimensional energy bands of polyacene field-effect transistors and for the coupling of these bands to lattice vibrations of their host molecular crystal. We argue that the strongest electron-phonon interactions in these systems originate from the dependence of inter-molecule hopping amplitudes on collective molecular motion, and introduce a generalized Su-Schrieffer-Heeger model that accounts for all vibrations and is parameter-free once the band mass has been specified. We compute the electron-phonon spectral function $\alpha^2F(\omega)$ as a function of two-dimensional hole density, and are able to explain the onset of superconductivity near 2D carrier density $n_{2D} \sim 10^{14} cm^{-2}$, discovered in recent experiments by Sch\""on {\it et al.} \onlinecite{Batlogg}. ",2D bands and electron-phonon interactions in polyacene plastic transistors " This is a short note on the spatiotemporal complexity of the dynamical state(s) of the universe at subhorizon scales (up to 300 Mpc). There are reasons, based mainly on infrared radiative divergences, to believe that one can encounter a flicker noise in the time domain, while in the space domain, the scaling laws are reflected in the (multi)fractal distribution of galaxies and their clusters. 
There are recent suggestions on a unifying treatment of these two aspects within the concept of spatiotemporal complexity of dynamical systems driven out of equilibrium. Spatiotemporal complexity of the subhorizon dynamical state(s) of the universe is a conceptually nice idea and may lead to progress in our understanding of the material structures at large scales. ",Spatiotemporal complexity of the universe at subhorizon scales " We introduce and study a new complexity function in combinatorics on words, which takes into account the smallest second occurrence time of a factor of an infinite word. We characterize the eventually periodic words and the Sturmian words by means of this function. Then, we establish a new result on repetitions in Sturmian words and show that it is best possible. Let $b \ge 2$ be an integer. We deduce a lower bound for the irrationality exponent of real numbers whose sequence of $b$-ary digits is a Sturmian sequence over $\{0,1,\ldots, b-1\}$ and we prove that this lower bound is best possible. As an application, we derive some information on the $b$-ary expansion of $\log(1+\frac{1}{a})$, for any integer $a \ge 34$. ","A new complexity function, repetitions in Sturmian words, and irrationality exponents of Sturmian numbers" " We characterize the coherent dynamics of a two-level quantum emitter driven by a pair of symmetrically-detuned phase-locked pulses. The promise of dichromatic excitation is to spectrally isolate the excitation laser from the quantum emission, enabling background-free photon extraction from the emitter. Paradoxically, we find that excitation is not possible without spectral overlap between the exciting pulse and the quantum emitter transition for ideal two-level systems, due to cancellation of the accumulated pulse area. However, any additional interactions that interfere with cancellation of the accumulated pulse area may lead to a finite stationary population inversion. 
Our spectroscopic results of a solid-state two-level system show that while coupling to lattice vibrations helps to improve the inversion efficiency up to 50\% under symmetric driving, coherent population control and a larger amount of inversion are possible using asymmetric dichromatic excitation, which we achieve by adjusting the ratio of the intensities between the red and blue-detuned pulses. Our measured results, supported by simulations using a real-time path-integral method, offer a new perspective towards realising efficient, background-free photon generation and extraction. ",Coherent Dynamics in Quantum Emitters under Dichromatic Excitation " We discuss the prospects of observing double parton scattering (DPS) processes with purely leptonic final states at the LHC. We first study same-sign W pair production, which is particularly suited for studying momentum and valence number conservation effects, followed by discussions on double Drell-Yan and production of J/psi pairs. The effects of initial state and intrinsic transverse momentum smearing on pair-wise transverse momentum balance characteristic to DPS are studied quantitatively. We also present a new technique, based on rapidity differences, to extract the DPS component from a double J/psi sample recently studied at the LHCb. ",Probing double parton scattering with leptonic final states at the LHC " The last few years have seen the proliferation of low-power wide area networks like LoRa, Sigfox and 802.11ah, each of which use a different and sometimes proprietary coding and modulation scheme, work below the noise floor and operate on the same frequency band. We introduce DeepSense, which is the first carrier sense mechanism that enables random access and coexistence for low-power wide area networks even when signals are below the noise floor. Our key insight is that any communication protocol that operates below the noise floor has to use coding at the physical layer. 
We show that neural networks can be used as a general algorithmic framework that can learn the coding mechanisms being employed by such protocols to identify signals that are hidden within noise. Our evaluation shows that DeepSense performs carrier sense across 26 different LPWAN protocols and configurations below the noise floor and can operate in the presence of frequency shifts as well as concurrent transmissions. Beyond carrier sense, we also show that DeepSense can support multi-bit-rate LoRa networks by classifying between 21 different LoRa configurations and flexibly adapting bitrates based on signal strength. In a deployment of a multi-rate LoRa network, DeepSense improves bit rate by 4x for nearby devices and provides a 1.7x increase in the number of locations that can connect to the campus-wide network. ",DeepSense: Enabling Carrier Sense in Low-Power Wide Area Networks Using Deep Learning " We present a three-dimensional theory of stimulated Raman scattering (SRS) or superradiance. In particular we address how the spatial and temporal properties of the generated SRS beam, or Stokes beam, of radiation depend on the spatial properties of the gain medium. The Maxwell equations for the Stokes field operators and the atomic operators are solved analytically and a correlation function for the Stokes field is derived. In the analysis we identify a superradiating part of the Stokes radiation that exhibits beam characteristics. We show how the intensity in this beam builds up in time and at some point largely dominates the total Stokes radiation of the gain medium. We show how the SRS depends on geometric factors such as the Fresnel number and the optical depth, and that in fact these two factors are the only factors describing the coherent radiation. 
",Three-dimensional theory of stimulated Raman scattering " Accurate measurements of K, D and B meson mixing amplitudes provide stringent constraints in the Unitarity Triangle analysis, as well as useful bounds on New Physics scales. Lattice QCD provides a non-perturbative tool to compute the hadronic matrix elements entering the effective weak Hamiltonian, with errors at the few-percent level and systematic uncertainties under control. I review recent lattice results for these hadronic matrix elements, obtained with $N_f=2$, $N_f=2+1$ and $N_f=2+1+1$ dynamical sea quarks. ",Neutral meson oscillations on the lattice " We study time-inconsistent recursive stochastic control problems. Since, for this class of problems, classical optimal controls may fail to exist or to be relevant in practice, we focus on sub-game perfect equilibrium policies. The approach followed in our work relies on the stochastic maximum principle and, indeed, we adapt the classical spike variation technique to obtain a characterisation of equilibrium strategies in terms of a generalised second-order Hamiltonian function defined through a flow of pairs of BSDEs. We then deal with time-inconsistent recursive stochastic control problems under a state constraint, defined by means of an additional recursive utility, by adapting the Ekeland variational principle to this trickier situation. The theoretical results are applied, in finance, to finite-horizon investment-consumption policies with non-exponential discounting. The possibility that the constraint is a risk constraint is covered. ",Subgame-perfect equilibrium strategies in state-constrained recursive stochastic control problems 
However, issues like portability and interoperability should be addressed for mobile augmentation, which is a non-trivial task with component-based approaches. Service Oriented Architecture (SOA) is a promising design philosophy embraced by the mobile computing and cloud computing communities to build portable, complex applications using prefabricated building blocks called Services. Utilizing distant cloud resources to host and run Services is hampered by long WAN latency. Exploiting mobile devices in the vicinity alleviates long WAN latency, but creates a new set of issues like Service publishing and discovery as well as client-server security, reliability, and Service availability. In this paper, we propose a market-oriented architecture based on SOA to stimulate publishing, discovering, and hosting Services on nearby mobiles, which reduces long WAN latency and creates a business opportunity that encourages mobile owners to embrace Service hosting. A group of mobile phones simulates a nearby cloud computing platform. We create the new role of \textit{Service host} by enabling unskilled mobile owners/users to host Services developed by skilled developers. Evidently, Service availability, reliability, and Service-oriented mobile application portability will increase, moving towards green ubiquitous computing in our mobile cloud infrastructure. ",MOMCC: Market-Oriented Architecture for Mobile Cloud Computing Based on Service Oriented Architecture " After discussing the intrinsic ambiguity in determining the light quark mass ratio $m_u/m_d$, we reexamine the recent proposal that this ambiguity can be resolved by applying the QCD multipole expansion to heavy quarkonium decays. It is observed that, due to instanton effects, some matrix elements which have been ignored in previous works can give a significant contribution to the decay amplitudes, which results in a large uncertainty in the value of $m_u/m_d$ deduced from quarkonium phenomenology. 
This uncertainty can be resolved only by a QCD calculation of some second-order coefficients in the chiral expansion of the decay amplitudes. ",Light Quark Masses and Quarkonium Decays " In this paper, we consider a monic, centred, hyperbolic polynomial of degree $d \ge 2$, restricted to its Julia set, and compute its Lyapunov exponents with respect to certain weighted Lyubich's measures. In particular, we show a certain well-behavedness of some coefficients of the Lyapunov exponents, which quantifies the non-well-behavedness of the system. ",Lyapunov exponents of polynomials with respect to certain weighted Lyubich's measures " In this note, we study linear determinantal representations of smooth plane cubics over finite fields. We give an explicit formula for the linear determinantal representations corresponding to rational points. Using Schoof's formula, we count the number of projective equivalence classes of smooth plane cubics over a finite field admitting a prescribed number of equivalence classes of linear determinantal representations. As an application, we determine the isomorphism classes of smooth plane cubics over a finite field with 0, 1 or 2 equivalence classes of linear determinantal representations. ",Linear determinantal representations of smooth plane cubics over finite fields " In recent years, supervised learning using Convolutional Neural Networks (CNNs) has achieved great success in image classification tasks, and large-scale labeled datasets have contributed significantly to this achievement. However, the definition of a label is often application-dependent. For example, an image of a cat can be labeled as ""cat"" or perhaps more specifically ""Persian cat."" We refer to this as label granularity. 
In this paper, we conduct extensive experiments using various datasets to demonstrate and analyze how and why training based on fine-grain labeling, such as ""Persian cat,"" can improve CNN accuracy in classifying coarse-grain classes, in this case ""cat."" The experimental results show that training CNNs with fine-grain labels improves both the network's optimization and its generalization capabilities, as intuitively it encourages the network to learn more features, and hence increases classification accuracy on coarse-grain classes across all datasets considered. Moreover, fine-grain labels enhance data efficiency in CNN training. For example, a CNN trained with fine-grain labels and only 40% of the total training data can achieve higher accuracy than a CNN trained with the full training dataset and coarse-grain labels. These results point to two possible applications of this work: (i) with sufficient human resources, one can improve CNN performance by re-labeling the dataset with fine-grain labels, and (ii) with limited human resources, to improve CNN performance, rather than collecting more training data, one may instead use fine-grain labels for the dataset. We further propose a metric called Average Confusion Ratio to characterize the effectiveness of fine-grain labeling, and show its use through extensive experimentation. Code is available at https://github.com/cmu-enyac/Label-Granularity. ",Understanding the Impact of Label Granularity on CNN-based Image Classification " Many researchers have performed cosmological-model-independent tests for the distance duality (DD) relation. Theoretical work has been conducted based on the results of these tests. However, we find that almost all of these tests were perhaps not cosmological-model-independent after all, because the distance moduli taken from a given type Ia supernovae (SNe Ia) compilation are dependent on a given cosmological model and Hubble constant. 
In this Letter, we overcome these defects by constructing a new cosmological-model-independent test for the DD relation. We use the original data from the Union2 SNe Ia compilation and the angular diameter distances from two galaxy cluster samples compiled by De Filippis et al. and Bonamente et al. to test the DD relation. Our results suggest that the DD relation is compatible with observations, and that the spherical model is slightly better than the elliptical model at describing the intrinsic shape of galaxy clusters if the DD relation is valid. However, these results are different from those of previous work. ",An improved method to test the Distance--Duality relation " Recent remarkable progress in wave-front shaping has enabled control of light propagation inside linear media to focus and image through scattering objects. In particular, light propagation in multimode fibers comprises complex intermodal interactions and rich spatiotemporal dynamics. Control of physical phenomena in multimode fibers and its applications is in its infancy, opening opportunities to take advantage of complex mode interactions. In this work, we demonstrate a wave-front shaping approach for controlling nonlinear phenomena in multimode fibers. Using a spatial light modulator at the fiber input and genetic algorithm optimization, we control a highly nonlinear stimulated Raman scattering cascade and its interplay with four-wave mixing via flexible implicit control on the superposition of modes that are coupled into the fiber. We show for the first time versatile spectrum manipulations including shifts, suppression, and enhancement of Stokes and anti-Stokes peaks. These demonstrations illustrate the power of wave-front shaping to control and optimize nonlinear wave propagation. ",Wave-front shaping in nonlinear multimode fibers " The polarization characteristics of zebra patterns (ZPs) in type IV solar bursts were studied. 
We analyzed 21 ZP events observed by the Assembly of Metric-band Aperture Telescope and Real-time Analysis System between 2010 and 2015 and identified the following characteristics: a degree of circular polarization (DCP) in the range of 0%-70%, a temporal delay of 0-70 ms between the two circularly polarized components (i.e., the right- and left-handed components), and dominant ordinary-mode emission in about 81% of the events. For most events, the relation between the dominant and delayed components could be interpreted in the framework of fundamental plasma emission and depolarization during propagation, though the values of DCP and delay were distributed across wide ranges. Furthermore, it was found that the DCP and delay were positively correlated (rank correlation coefficient R = 0.62). As a possible interpretation of this relationship, we considered a model based on depolarization due to reflections at sharp density boundaries assuming fundamental plasma emission. The model calculations of depolarization including multiple reflections and group delay during propagation in the inhomogeneous corona showed that the DCP and delay decreased as the number of reflections increased, which is consistent with the observational results. The dispersive polarization characteristics could be explained by the different numbers of reflections causing depolarization. ",Polarization Characteristics of Zebra Patterns in Type IV Solar Radio Bursts " Redshift-space distortions are generally considered in the plane parallel limit, where the angular separation between the two sources can be neglected. Given that galaxy catalogues now cover large fractions of the sky, it becomes necessary to consider them in a formalism which takes into account the wide angle separations. 
In this article we derive an operational formula for the matter correlators in the Newtonian limit to be used on actual data sets, both in configuration and in Fourier space, without relying on a plane-parallel approximation. We then recover the plane-parallel limit not only in configuration space, where the geometry is simpler, but also in Fourier space, and we exhibit the first corrections that should be included in large surveys as a perturbative expansion over the plane-parallel results. We finally compare our results to the existing literature, and show explicitly how they are related. ",Redshift-space distortions with wide angular separations " In this paper, the authors study the convexity and concavity properties of real-valued functions with respect to the classical means, and prove a conjecture posed by Bruce Ebanks in \cite{e}. ",On the generalized convexity and concavity " The bacterial flagellar motor is an ion-powered transmembrane protein complex which drives swimming in many bacterial species. The motor consists of a cytoplasmic 'rotor' ring and a number of 'stator' units, which are bound to the cell wall of the bacterium. Recently, it has been shown that the number of functional torque-generating stator units in the motor depends on the external load, and it has been suggested that mechanosensing in the flagellar motor is driven via a 'catch bond' mechanism in the motor's stator units. We present a method that allows us to measure -- on a single motor -- stator unit dynamics across a large range of external loads, including near the zero-torque limit. By attaching superparamagnetic beads to the flagellar hook, we can control the motor's speed via a rotating magnetic field. We manipulate the motor to four different speed levels in two different ion-motive force (IMF) conditions. 
This framework allows for a deeper exploration into the mechanism behind load-dependent remodelling by separating out motor properties, such as rotation speed and energy availability in the form of IMF, that affect the motor torque. ",Load-dependent adaptation near zero load in the bacterial flagellar motor " Unstable shear layers in environmental and industrial flows roll up into a series of vortices, which often form complex nonlinear merging patterns like pairs and triplets. These patterns crucially determine the subsequent turbulence, mixing and scalar transport. We show that the late-time, highly nonlinear merging patterns are predictable from the linearized initial state. The initial asymmetry between consecutive wavelengths of the vertical velocity field provides an effective measure of the strength and pattern of vortex merging. The predictions of this measure are substantiated using direct numerical simulations. We also show that this measure has significant implications in determining the route to turbulence and the ensuing turbulence characteristics. ",Predicting vortex merging and ensuing turbulence characteristics in shear layers from initial conditions " We prove variationally that at weak coupling in one, two, and three dimensions there exist correlated electron-phonon states below the approximate ground states characteristically found by adiabatic polaron theory. Besides differing non-trivially in quantitative aspects such as the value of the ground state energy, these improved ground states are found to differ significantly in qualitative aspects such as correlation structure and scaling behavior. These differences are sufficiently severe as to require a reevaluation of the physical meaning attached to such widely used terms as ""large polaron"" and ""self-trapping transition"". 
","The breakdown of adiabatic polaron theory in one, two, and three dimensions and the reformation of the large polaron concept" " Using Arakelov geometry, we compute the partition function of the noncompact free boson at genus two. We begin by compiling a list of modular invariants which appear in the Arakelov theory of Riemann surfaces. Using these quantities, we express the genus two partition function as a product of modular forms, as in the well-known genus one case. We check that our result has the expected obstruction to holomorphic factorization and behavior under degeneration. ",The Genus Two Free Boson in Arakelov Geometry " We propose a continuous feedback control strategy that steers a point-mass vehicle safely to a destination, in a quasi-optimal manner, in sphere worlds. The main idea consists in avoiding each obstacle via the shortest path within the cone enclosing the obstacle and moving straight toward the target when the vehicle has a clear line of sight to the target location. In particular, almost global asymptotic stability of the target location is achieved in two-dimensional (2D) environments under a particular assumption on the obstacles configuration. We also propose a reactive (sensor-based) approach, suitable for real-time implementations in a priori unknown 2D environments with sufficiently curved convex obstacles, guaranteeing almost global asymptotic stability of the target location. Simulation results are presented to illustrate the effectiveness of the proposed approach. ",Safe and Quasi-Optimal Autonomous Navigation in Environments with Convex Obstacles " We study the local equivalence problem for real-analytic ($\mathcal{C}^\omega$) hypersurfaces $M^5 \subset \mathbb{C}^3$ which, in coordinates $(z_1, z_2, w) \in \mathbb{C}^3$ with $w = u+i\, v$, are rigid: \[ u \,=\, F\big(z_1,z_2,\overline{z}_1,\overline{z}_2\big), \] with $F$ independent of $v$. 
Specifically, we study the group ${\sf Hol}_{\sf rigid}(M)$ of rigid local biholomorphic transformations of the form: \[ \big(z_1,z_2,w\big) \longmapsto \Big( f_1(z_1,z_2), f_2(z_1,z_2), a\,w + g(z_1,z_2) \Big), \] where $a \in \mathbb{R} \backslash \{0\}$ and $\frac{D(f_1,f_2)}{D(z_1,z_2)} \neq 0$, which preserve the rigidity of hypersurfaces. After performing a Cartan-type reduction to an appropriate $\{e\}$-structure, we find exactly two primary invariants $I_0$ and $V_0$, which we express explicitly in terms of the $5$-jet of the graphing function $F$ of $M$. The identical vanishing $0 \equiv I_0 \big( J^5F \big) \equiv V_0 \big( J^5F \big)$ then provides a necessary and sufficient condition for $M$ to be locally rigidly-biholomorphic to the known model hypersurface: \[ M_{\sf LC} \colon \ \ \ \ \ u \,=\, \frac{z_1\,\overline{z}_1 +\frac{1}{2}\,z_1^2\overline{z}_2 +\frac{1}{2}\,\overline{z}_1^2z_2}{ 1-z_2\overline{z}_2}. \] We establish that $\dim\, {\sf Hol}_{\sf rigid} (M) \leq 7 = \dim\, {\sf Hol}_{\sf rigid} \big( M_{\sf LC} \big)$ always. If one of the two primary invariants, $I_0$ or $V_0$, does not vanish identically, we show that this equivalence problem between rigid hypersurfaces reduces to an equivalence problem for a certain $5$-dimensional $\{e\}$-structure on $M$. ",Rigid equivalences of $5$-dimensional $2$-nondegenerate rigid real hypersurfaces $M^5 \subset \mathbb{C}^3$ of constant Levi rank $1$ " A non-equilibrium phase (Sr1-xLax)Fe2As2 was formed by epitaxial film growth. The resulting films exhibited superconductivity along with suppression of the resistivity anomaly that is associated with magnetic and structural phase transitions. The maximum critical temperature was 20.8 K, which is almost the same as that of directly electron-doped Sr(Fe1-xCox)2As2. 
Its electronic phase diagram is very similar to that of Sr(Fe1-xCox)2As2, indicating that the difference in the electron doping sites does not influence the superconducting properties of 122-type SrFe2As2. ",Superconducting Properties and Phase Diagram of Indirectly Electron-Doped (Sr1-xLax)Fe2As2 Epitaxial Films Grown by Pulsed Laser Deposition " We study a low-energy effective field theory (EFT) describing the NN system in which all exchanged particles are integrated out. We show that fitting the residue of the 3S1 amplitude at the deuteron pole, rather than the 3S1 effective range, dramatically improves the convergence of deuteron observables in this theory. Reproducing the residue ensures that the tail of the deuteron wave function, which is directly related to NN scattering data via analytic continuation, is correctly reproduced in the EFT at next-to-leading order. The role of multi-nucleon-electroweak operators which produce deviations from effective-range theory can then be explicitly separated from the physics of the wave function tail. Such an operator contributes to the deuteron quadrupole moment, mu_Q, at low order, indicating a sensitivity to short-distance physics. This is consistent with the failure of impulse approximation calculations in NN potential models to reproduce mu_Q. The convergence of NN phase shifts in the EFT is unimpaired by the use of this new expansion. ",Improving the Convergence of NN Effective Field Theory " There has been great interest in applying the results of statistical mechanics to single-molecule experiments. Recent work has highlighted so-called non-equilibrium work-energy relations and Fluctuation Theorems which take on an equilibrium-like (time-independent) form. 
Here I give a very simple heuristic example where an equilibrium result (the barometric law for colloidal particles) arises from theory describing the {\em thermodynamically} non-equilibrium phenomenon of a single colloidal particle falling through solution due to gravity. This simple result arises from the fact that the particle, even while falling, is in {\em mechanical} equilibrium (the gravitational force equals the viscous drag force) at every instant. The results are generalized by appeal to the central limit theorem. The resulting time-independent equations that hold for thermodynamically non-equilibrium (and even non-stationary) processes offer great possibilities for rapid determination of thermodynamic parameters from single-molecule experiments. ",The unreasonable effectiveness of equilibrium-like theory for interpreting non-equilibrium experiments " In this article we prove that Fintushel-Stern's construction of the Horikawa surface, which is obtained from an elliptic surface via a rational blow-down surgery in the smooth category, can be performed in the complex category. The main technique involved is Q-Gorenstein smoothings. ",A construction of Horikawa surface via Q-Gorenstein smoothings " The incommensurate magnetic response observed in normal-state cuprate perovskites is interpreted based on the projection operator formalism and the t-J model of Cu-O planes. In agreement with experiment, the calculated dispersion of maxima in the susceptibility has the shape of two parabolas with upward and downward branches which converge at the antiferromagnetic wave vector. The maxima are located at the momenta $({1/2},{1/2}\pm\delta)$, $({1/2}\pm\delta,{1/2})$ and at $({1/2}\pm\delta,{1/2}\pm\delta)$, $({1/2}\pm\delta,{1/2}\mp\delta)$ in the lower and upper parabolas, respectively. 
The upper parabola reflects the dispersion of magnetic excitations of the localized Cu spins, while the lower parabola arises due to a dip in the spin-excitation damping at the antiferromagnetic wave vector. For moderate doping this dip stems from the weakness of the interaction between the spin excitations and holes near the hot spots. The frequency dependence of the susceptibility is shown to depend strongly on the hole bandwidth and damping, and varies from the shape observed in YBa$_2$Cu$_3$O$_{7-y}$ to that inherent in La$_{2-x}$Sr$_x$CuO$_4$. ",Incommensurate spin dynamics in underdoped cuprate perovskites " This paper introduces SigMaNet, a generalized Graph Convolutional Network (GCN) capable of handling both undirected and directed graphs with weights restricted in neither sign nor magnitude. The cornerstone of SigMaNet is the Sign-Magnetic Laplacian ($L^{\sigma}$), a new Laplacian matrix that we introduce ex novo in this work. $L^{\sigma}$ allows us to bridge a gap in the current literature by extending the theory of spectral GCNs to (directed) graphs with both positive and negative weights. $L^{\sigma}$ exhibits several desirable properties not enjoyed by other Laplacian matrices on which several state-of-the-art architectures are based, among which encoding the edge direction and weight in a clear and natural way that is not negatively affected by the weight magnitude. $L^{\sigma}$ is also completely parameter-free, which is not the case for other Laplacian operators such as, e.g., the Magnetic Laplacian. The versatility and the performance of our proposed approach are amply demonstrated via computational experiments. 
Indeed, our results show that, for at least one metric, SigMaNet achieves the best performance in 15 out of 21 cases and either the first- or second-best performance in 21 cases out of 21, even when compared to architectures that are either more complex or that, due to being designed for a narrower class of graphs, should -- but do not -- achieve a better performance. ",SigMaNet: One Laplacian to Rule Them All " A novel mesoscopic electron spectrometer allows for the probing of relaxation processes in quantum Hall edge channels. The device is composed of an emitter quantum dot that injects energy-resolved electrons into the channel closest to the sample edge, to be subsequently probed downstream by a detector quantum dot of the same type. In addition to inelastic processes in the sample that stem from interactions inside the region between the quantum dot energy filters (inner region), anomalous signals are measured when the detector energy exceeds the emitter energy. Considering finite range Coulomb interactions in the sample, we find that energy exchange between electrons in the current inducing source channel and the inner region, similar to Auger recombination processes, is responsible for such anomalous currents. In addition, our perturbative treatment of interactions shows that electrons emitted from the source, which dissipate energy to the inner region before entering the detector, contribute to the current most strongly when emitter-detector energies are comparable. Charge transfer in which the emitted electron is exchanged for a charge carrier from the Fermi sea, on the other hand, preferentially occurs close to the Fermi level. ",Interaction-induced charge transfer in a mesoscopic electron spectrometer " Static and dynamical properties of the magnetic moment system of the pyrochlore compound Tb2Ti2O7, with strong magnetic frustration, have been investigated down to the temperature T=0.4 K by neutron scattering on a single crystal sample. 
The scattering vector (Q)-dependence of the magnetic scattering intensity becomes appreciable with decreasing T at around 30 K, indicating the development of the magnetic correlation. From the observed energy profiles, the elastic, quasi-elastic and inelastic components have been separately obtained. The quasi-elastic component corresponds to the diffusive motion of the magnetic moments within the lowest states, which are formed of the lowest energy levels of Tb3+ ions. A magnetic correlation pattern that can roughly reproduce the Q-dependence of the scattering intensities of the elastic and quasi-elastic components is discussed, based on trial calculations for clusters of 7 moments belonging to two corner-sharing tetrahedra. A possible origin of the glassy state, which develops at around 1.5 K with decreasing T, is discussed. ",Static Correlation and Dynamical Properties of Tb3+-moments in Tb2Ti2O7 -Neutron Scattering Study- " We study the homotopy theory of the wheeled prop controlling Poisson structures on arbitrary formal graded finite-dimensional manifolds and prove, in particular, that the Grothendieck-Teichmueller group acts on that wheeled prop faithfully and homotopy non-trivially. Next we apply this homotopy theory to the study of the deformation complex of an arbitrary Maxim Kontsevich formality map and compute the full cohomology group of that deformation complex in terms of the cohomology of a certain graph complex introduced earlier by Maxim Kontsevich in [K1] and studied by Thomas Willwacher in [W1]. ",From deformation theory of wheeled props to classification of Kontsevich formality maps " In this paper we develop the deformation theory of operads and algebras over operads. Free resolutions (constructed via the Boardman-Vogt approach) are used in order to describe formal moduli spaces of deformations. We apply the general theory to the proof of Deligne's conjecture. 
The latter says that the Hochschild complex of an associative algebra carries a canonical structure of a dg-algebra over the chain operad of the little discs operad. In the course of the proof we construct an operad of geometric nature which acts on the Hochschild complex. It seems to be different from the brace operad (the latter was used in the previous approaches to Deligne's conjecture). It follows from our results that the Grothendieck-Teichm\""uller group acts (homotopically) on the moduli space of structures of 2-algebras on the Hochschild complex. In the Appendix we develop a theory of piecewise algebraic chains and forms. It is suitable for real semialgebraic manifolds with corners (like Fulton-Macpherson compactifications of the configuration spaces of points). ",Deformations of algebras over operads and Deligne's conjecture " Cross sections for mid-rapidity production of direct photons in p+p collisions at the Relativistic Heavy Ion Collider (RHIC) are reported for 3 < p_T < 16 GeV/c. Next-to-leading order (NLO) perturbative QCD (pQCD) describes the data well for p_T > 5 GeV/c, where the uncertainties of the measurement and theory are comparable. We also report on the effect of requiring the photons to be isolated from parton jet energy. The observed fraction of isolated photons is well described by pQCD for p_T > 7 GeV/c. ",Measurement of direct photon production in p + p collisions at sqrt(s) = 200 GeV " We perform a theoretical investigation on the time evolution of spin pulses in an $n$-type GaAs (001) quantum well with and without an external electric field at high temperatures by constructing and numerically solving the kinetic spin Bloch equations and the Poisson equation, with the electron-phonon, electron-impurity and electron-electron Coulomb scattering explicitly included. 
The effect of the Coulomb scattering, especially the effect of the Coulomb drag, on the spin diffusion/transport is investigated. It is shown that the spin oscillations and the spin-polarization reversal along the direction of spin diffusion in the absence of an applied magnetic field, which were originally predicted in the absence of the Coulomb scattering by Weng and Wu [J. Appl. Phys. {\bf 93}, 410 (2003)], can survive the Coulomb scattering at high temperatures ($\sim 200$ K). The results obtained are consistent with a recent experiment in bulk GaAs, but at a very low temperature (4 K), by Crooker and Smith [Phys. Rev. Lett. {\bf 94}, 236601 (2005)]. ",Diffusion and transport of spin pulses in an $n$-type semiconductor quantum well " As the title indicates, this chapter presents a brief, self-contained introduction to five fundamental problems in Quantum Information Science (QIS) that are especially well-suited to be formulated as Semi-definite Programs (SDP). We have in mind two audiences. The primary audience comprises Operations Research (and Computer Science) graduate students who have familiarity with SDPs, but have found it daunting to become even minimally conversant with the pre-requisites of QIS. The second audience consists of Physicists (and Electrical Engineers) already knowledgeable about the modeling of QIS via SDP but interested in computational tools that are applicable more generally. For both audiences, we strive for rapid access to the unfamiliar material. For the first, we provide just enough required background material (from Quantum Mechanics, treated via matrices, and mapped into Dirac notation), and simultaneously for the second audience we recreate, computationally in Jupyter notebooks, known closed-form solutions. We hope you will enjoy this introduction and gain understanding of the marvelous connection between SDP and QIS by self-study, or as a short seminar course. 
Ultimately, we hope this disciplinary outreach will fuel advances in QIS through their fruitful study via SDPs. ",Five Starter Pieces: Quantum Information Science via Semi-definite Programs " The electronic structure of Ba$_2$Ti$_2$Fe$_2$As$_4$O, a newly discovered superconductor, is investigated using first-principles calculations based on local density approximations. Multiple Fermi surface sheets originating from Ti-3$d$ and Fe-3$d$ states are present, corresponding to the conducting Ti$_2$As$_2$O and Fe$_2$As$_2$ layers respectively. Compared with BaFe$_2$As$_2$, sizeable changes in the related Fermi surface sheets indicate significant electron transfer (about 0.12$e$) from Ti to Fe, which suppresses the stripe-like antiferromagnetism at the Fe sites and simultaneously induces superconductivity. Our calculations also suggest that an additional N\'{e}el-type antiferromagnetic instability at the Ti sites is relatively robust against the electron transfer, which accounts for the anomaly at 125 K in the superconducting Ba$_2$Ti$_2$Fe$_2$As$_4$O. ",Self-doping effect and possible antiferromagnetism at titanium-layers in the iron-based superconductor Ba$_2$Ti$_2$Fe$_2$As$_4$O " Linear spaces with a Euclidean metric are ubiquitous in mathematics, arising both from quadratic forms and inner products. Operators on such spaces also occur naturally. In recent years, the study of multivariate operator theory has made substantial progress. Although the study of self-adjoint operators goes back a few decades, the non-self-adjoint theory has developed at a slower pace. While several approaches to this topic have been developed, the one that has been most fruitful is clearly the study of Hilbert spaces that are modules over natural function algebras like $\mathcal A({\Omega})$, where $\Omega \subseteq \mathbb C^m$ is a bounded domain, consisting of complex valued functions which are holomorphic on some open set $U$ containing $\overline{\Omega}$, the closure of $\Omega$. 
In the book ''Hilbert Modules over Function Algebras'', R. G. Douglas and V. I. Paulsen showed how to recast many of the familiar theorems of operator theory in the language of Hilbert modules. The books ''Spectral Decompositions and Analytic Sheaves'' by J. Eschmeier and M. Putinar and ''Analytic Hilbert Modules'' by X. Chen and K. Guo provide an account of the achievements from the recent past. The impetus for much of what is described below comes from the interplay of operator theory with other areas of mathematics like complex geometry and the representation theory of locally compact groups. ",Operators in the Cowen-Douglas class and related topics " We describe the growth of the naturally defined argument of a bounded analytic function in the unit disk in terms of the complete measure introduced by A. Grishin. As a consequence, we characterize the local behavior of a logarithm of an analytic function. We also find necessary and sufficient conditions for closeness of $\log f(z)$, $f\in H^\infty$, and the local concentration of the zeros of $f$. ",Argument of bounded analytic functions and Frostman's type conditions " Atomically thin van der Waals materials stacked with an interlayer twist have proven to be an excellent platform towards achieving gate-tunable correlated phenomena linked to the formation of flat electronic bands. In this work we demonstrate the formation of emergent correlated phases in multilayer rhombohedral graphene - a simple material that also exhibits a flat electronic band but without the need of having a moir\'e superlattice induced by twisted van der Waals layers. We show that two layers of bilayer graphene that are twisted by an arbitrary tiny angle host large (micron-scale) regions of uniform rhombohedral four-layer (ABCA) graphene that can be independently studied. Scanning tunneling spectroscopy reveals that ABCA graphene hosts an unprecedentedly sharp flat band of 3-5 meV half-width. 
We demonstrate that when this flat band straddles the Fermi level, a correlated many-body gap emerges with a peak-to-peak value of 9.5 meV at charge neutrality. Mean-field theoretical calculations indicate that the two primary candidates for the appearance of this broken-symmetry state are a charge-transfer excitonic insulator and a ferrimagnet. Finally, we show that ABCA graphene hosts surface topological helical edge states at natural interfaces with ABAB graphene which can be turned on and off with gate voltage, implying that small-angle twisted double bilayer graphene is an ideal programmable topological quantum material. ",Moir\'e-less Correlations in ABCA Graphene " Recently, for spin $S=5/2$ impurities a quite different size dependence of the Kondo contribution to the resistivity was found experimentally than for $S=2$. Therefore, a previous calculation of the effect of the spin-orbit-induced magnetic anisotropy on the Kondo amplitude of the resistivity is extended to the case of $S=5/2$ impurity spin, which differs from the integer-spin case in that the ground state is degenerate. In this case the Kondo contribution remains finite when the sample size goes to zero, and the thickness dependence of the Kondo resistivity is much weaker for Cu(Mn). The behavior of the Kondo coefficient as a function of the thickness depends on the Kondo temperature, being somewhat stronger for larger $T_K$. Comparing our results with a recent experiment in thin Cu(Mn) films, we find good agreement. ",Spin-Orbit-Induced Kondo Size Effect in Thin Films with 5/2-spin Impurities " Context: Sodium laser guide stars (LGS) are about to enter a new range of laser powers. Previous theoretical and numerical methods are inadequate for accurate computations of the return flux and hence for the design of the next-generation LGS systems. Aims: We numerically optimize the cw (continuous wave) laser format, in particular the light polarization and spectrum. 
Methods: Using Bloch equations, we simulate the mesospheric sodium atoms, including Doppler broadening, saturation, collisional relaxation, Larmor precession, and recoil, taking into account all 24 sodium hyperfine states and on the order of 100 velocity groups. Results: LGS return flux is limited by ""three evils"": Larmor precession due to the geomagnetic field, atomic recoil due to radiation pressure, and transition saturation. We study their impacts and show that the return flux can be boosted by repumping (simultaneous excitation of the sodium D2a and D2b lines with 10-20% of the laser power in the latter). Conclusions: We strongly recommend the use of circularly polarized lasers and repumping. As a rule of thumb, the bandwidth of laser radiation in MHz (at each line) should approximately equal the launched laser power in Watts divided by six, assuming a diffraction-limited spot size. ",Optimization of cw sodium laser guide star efficiency " The Hubbard model is studied in the external magnetic field. The analysis is carried out phenomenologically within the framework of the Ginzburg-Landau theory with the order parameter describing the opposite spin electrons. The study is performed for the nearly half-filled lower Hubbard band in the metallic state. The final equations are the Pauli-like ones for the opposite spins and nonlinear as a result of interaction between electrons with the opposite spins. The equations can analytically be solved for the spatially homogeneous distributions in a number of most interesting cases. In particular, the problem on the metal-insulator transition is analyzed for the nearly half-filled Hubbard sub-bands. The critical magnetic field at which the transition from the metallic state to the insulator one takes place is found under the paramagnetic spin effect. 
",Ginzburg-Landau approximation for the Hubbard model in the external magnetic field " The microscopic theory of current carrying states in the ballistic superconducting microchannel is presented. The effects of the contact length L on the Josephson current are investigated. For temperatures T close to the critical temperature T_c the problem is treated self-consistently, taking into account the distribution of the order parameter $\Delta (r)$ inside the contact. The closed integral equation for $\Delta $ in the strongly inhomogeneous microcontact geometry ($L\lesssim \xi_{0}$, $\xi_{0}$ is the coherence length at T=0) replaces the differential Ginzburg-Landau equation. The critical current $I_{c}(L)$ is expressed in terms of the solution of this integral equation. The limiting cases of $L\ll \xi_{0}$ and $L\gg \xi_{0}$ are considered. With increasing length L the critical current decreases, although the ballistic Sharvin resistance of the contact remains the same as at L=0. For ultra short channels with $L\lesssim a_{D}$ ($a_{D}\sim v_{F}/\omega_{D}$, $\omega_{D}$ is the Debye frequency) the corrections to the value of the critical current I_c(L=0) are sensitive to the strong coupling effects. ",On the Selfconsistent Theory of Josephson Effect in Ballistic Superconducting Microconstrictions " This paper is concerned with the problem of finding a zero of a tangent vector field on a Riemannian manifold. We first reformulate the problem as an equivalent Riemannian optimization problem. Then we propose a Riemannian derivative-free Polak-Ribi\'ere-Polyak method for solving the Riemannian optimization problem, where a non-monotone line search is employed. The global convergence of the proposed method is established under some mild assumptions. To further improve the efficiency, we also provide a hybrid method, which combines the proposed geometric method with the Riemannian Newton method. 
Finally, some numerical experiments are reported to illustrate the efficiency of the proposed method. ",A Riemannian Derivative-Free Polak-Ribiere-Polyak Method for Tangent Vector Field " We study Gaussian random fields on certain Banach spaces and investigate conditions for their existence. Our results apply inter alia to spaces of Radon measures and H\""older functions. In the former case, we are able to define Gaussian white noise on the space of measures directly, avoiding, e.g., an embedding into a negative-order Sobolev space. In the latter case, we demonstrate how H\""older regularity of the samples is controlled by that of the covariance kernel and, thus, show a connection to the Theorem of Kolmogorov-Chentsov. ",Gaussian random fields on non-separable Banach spaces " A spectroscopic method is applied to measure the inelastic quasi-particle relaxation rate in a disordered Fermi liquid. The quasi-particle relaxation rate, $\gamma$, is deduced from the magnitude of fluctuations in the local density of states, which are probed using resonant tunneling through a localized impurity state. We study its dependence on the excitation energy $E$ measured from the Fermi level. In a disordered metal (heavily doped GaAs) we find $\gamma \propto E^{3/2}$ within the experimentally accessible energy interval, in agreement with the Altshuler-Aronov theory for electron-electron interactions in diffusive conductors. ",Energy Dependence of Quasi-Particle Relaxation in a Disordered Fermi Liquid " The predictions of $U_{e3}$ are discussed. Typical models which lead to the large, the sizeable and the tiny $U_{e3}$ are also studied. ",Implications of the Measurements of Ue3 to Theory " The recent observations of neutron star mergers have changed our perspective on scalar-tensor theories of gravity, favouring models where gravitational waves travel at the speed of light. 
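The derivative-free Polak-Ribiere-Polyak iteration described in the tangent-vector-field abstract above can be sketched in a flat-space toy version (the actual method works on a Riemannian manifold with retractions and vector transport, which this Euclidean simplification omits; the function and parameter names are illustrative, not the paper's):

```python
import numpy as np

def prp_zero_finder(F, x0, tol=1e-8, max_iter=500):
    """Derivative-free PRP iteration for a zero of a vector field F: R^n -> R^n.

    Euclidean sketch only: on a manifold, the direction d would be
    transported between tangent spaces instead of reused directly.
    """
    x = np.asarray(x0, dtype=float)
    Fx = F(x)
    d = -Fx                                  # initial direction: residual descent
    for _ in range(max_iter):
        if np.linalg.norm(Fx) < tol:
            break
        # simple backtracking: accept a step that reduces the residual norm
        t = 1.0
        while np.linalg.norm(F(x + t * d)) >= np.linalg.norm(Fx) and t > 1e-12:
            t *= 0.5
        x_new = x + t * d
        F_new = F(x_new)
        # PRP coefficient built from residuals in place of gradients
        beta = F_new @ (F_new - Fx) / max(Fx @ Fx, 1e-30)
        d = -F_new + beta * d
        x, Fx = x_new, F_new
    return x
```

For an affine field such as F(x) = x - c the first step already lands on the zero; for nonlinear fields the backtracking keeps the residual monotonically decreasing in this simplified (monotone, rather than the paper's non-monotone) variant.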
In this work we consider a scalar-tensor set-up with such a property, belonging to a beyond Horndeski system, and we numerically investigate the physics of locally asymptotically flat black holes and relativistic stars. We first determine regular black hole solutions equipped with horizons: they are characterized by a deficit angle at infinity, and by large contributions of the scalar to the geometry in the near horizon region. We then study configurations of incompressible relativistic stars. We show that their compactness can be much higher than stars with the same energy density in General Relativity, and the scalar field profile imposes stringent constraints on the star properties. These results can suggest new ways to probe the efficiency of screening mechanisms in strong gravity regimes, and can help to build specific observational tests for scalar-tensor gravity models with unit speed for gravitational waves. ",Compact objects in scalar-tensor theories after GW170817 The generalized uncertainty principle of string theory is derived in the framework of Quantum Geometry by taking into account the existence of an upper limit on the acceleration of massive particles. ,Generalized Uncertainty Principle from Quantum Geometry " In this paper we study the regularity properties of solutions to the Davey-Stewartson system. It is shown that for initial data in a Sobolev space, the nonlinear part of the solution flow resides in a smoother space than the initial data for all times. We also obtain that the Sobolev norm of the nonlinear part of the evolution grows at most polynomially. As an application of the smoothing estimate, we study the long term dynamics of the forced and weakly damped Davey-Stewartson system. In this regard we give a new proof for the existence and smoothness of the global attractors in the energy space. 
",Global Smoothing for the Davey-Stewartson System on $\mathbb{R}^2$ " We present high signal-to-noise ratio Keck ESI spectra of the two quasars known to have Gunn-Peterson absorption troughs, SDSS J1030+0524 (z=6.28) and SDSS J1148+5251 (z=6.37). The Ly alpha and Ly beta troughs for SDSS J1030+0524 are very black and show no evidence for any emission over a redshift interval of ~0.2 starting at z=6. On the other hand, SDSS J1148+5251 shows a number of emission peaks in the Ly beta Gunn-Peterson trough along with a single weak peak in the Ly alpha trough. The Ly alpha emission has corresponding Ly beta emission, suggesting that it is indeed a region of lower optical depth in the intergalactic medium at z=6.08. The stronger Ly beta peaks in the spectrum of SDSS J1148+5251 could conceivably also be the result of ""leaks"" in the IGM, but we suggest that they are instead Ly alpha emission from an intervening galaxy at z=4.9. This hypothesis gains credence from a strong complex of C IV absorption at the same redshift and from the detection of continuum emission in the Ly alpha trough at the expected brightness. If this proposal is correct, the quasar light has probably been magnified through gravitational lensing by the intervening galaxy. The Stromgren sphere observed in the absorption spectrum of SDSS J1148+5251 is significantly smaller than expected based on its brightness, which is consistent with the hypothesis that the quasar is lensed. If our argument for lensing is correct, the optical depths derived from the troughs of SDSS J1148+5251 are only lower limits (albeit still quite strong, with tau(LyA)>16 inferred from the Ly beta trough.) The Ly beta absorption trough of SDSS J1030+0524 gives the single best measurement of the IGM transmission at z>6, with an inferred optical depth tau(LyA)>22. ",Probing the Ionization State of the Universe at z>6 " This paper considers affine analogues of the isoperimetric inequality in the sense of piecewise linear topology. 
Given a closed polygon P embedded in R^d having n edges, we give upper and lower bounds for the minimal number of triangles needed to form a triangulated embedded orientable surface in R^d having P as its geometric boundary. The most interesting case is dimension 3, where we give an upper bound of 7 n^2 triangles, and a lower bound for some polygons P that require at least 1/2 n^2 triangles. In dimension 2 and in dimensions 5 and above one needs only O(n) triangles. The case of dimension 4 is not completely resolved. ",Affine isoperimetric inequalities for piecewise linear surfaces " Identification of the input data points relevant for the classifier (i.e., those that serve as support vectors) has recently spurred the interest of researchers in both interpretability and dataset debugging. This paper presents an in-depth analysis of the methods which attempt to identify the influence of these data points on the resulting classifier. To quantify the quality of the influence, we curated a set of experiments where we debugged and pruned the dataset based on the influence information obtained from different methods. To do so, we provided the classifier with mislabeled examples that hampered the overall performance. Since the classifier is a combination of both the data and the model, it is essential to also analyze these influences for the interpretability of deep learning models. Analysis of the results shows that some interpretability methods can detect mislabels better than a random approach; however, contrary to the claims of these methods, sample selection based on the training loss showed superior performance. ",Interpreting Deep Models through the Lens of Data " We study an inhomogeneous three-orbital Hubbard model for the Cu-substituted iron pnictides using an extended real-space Green's function method combined with density functional calculations. 
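The training-loss baseline that the dataset-debugging abstract above reports as the strongest mislabel detector can be sketched in a few lines (a generic illustration, not the paper's code; the function name and interface are made up):

```python
import numpy as np

def flag_suspected_mislabels(probs, labels, top_k):
    """Rank training samples by per-sample cross-entropy loss and flag
    the top_k highest-loss ones as suspected mislabels.

    probs  : (n, c) array of predicted class probabilities per sample
    labels : (n,) integer class labels from the (possibly noisy) dataset
    """
    eps = 1e-12                                     # numerical floor for log
    picked = probs[np.arange(len(labels)), labels]  # probability of the given label
    per_sample_loss = -np.log(picked + eps)
    # highest loss first: the samples the trained model disagrees with most
    return np.argsort(per_sample_loss)[::-1][:top_k]
```

A sample whose given label receives low predicted probability incurs a high loss and is ranked first, which is exactly the selection rule the abstract found to outperform the influence-based interpretability methods on this task.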
We find that the onsite interactions of the Cu ions are the principal determinant of whether the Cu substitution results in electron or hole doping. It is found that the Cu substitution could lead to hole doping when its onsite interactions are smaller than a critical value, as opposed to electron doping when the interactions of the Cu ions are larger than the critical value, which may explain why the effects of Cu substitution on the carrier density are entirely different in NaFe$_{1-x}$Cu$_x$As and Ba(Fe$_{1-x}$Cu$_x$)$_2$As$_2$. We also find that the effect of doping-induced disorder is considerable in the Cu-substituted iron pnictides, and its cooperative effect with electron correlations contributes to the orbital-selective insulating phases in NaFe$_{1-x}$Cu$_x$As and Ba(Fe$_{1-x}$Cu$_x$)$_2$As$_2$. ",Localization and Orbital Selectivity in Iron-Based Superconductors with Cu Substitution " Generative Adversarial Networks (GANs) have gained considerable popularity since their introduction in 2014. Research on GANs is rapidly growing and there are many variants of the original GAN focusing on various aspects of deep learning. GANs are perceived as the most impactful direction of machine learning in the last decade. This paper focuses on the application of GANs in autonomous driving, including topics such as advanced data augmentation, loss function learning, semi-supervised learning, etc. We formalize and review key applications of adversarial techniques and discuss challenges and open problems to be addressed. ","Yes, we GAN: Applying Adversarial Techniques for Autonomous Driving" We relate the endomorphism rings of certain $D$-elliptic sheaves of finite characteristic to hereditary orders in central division algebras over function fields. ,Endomorphisms of exceptional $D$-elliptic sheaves " In this paper, we introduce a novel parallel contact algorithm designed to run efficiently in High-Performance Computing based supercomputers. 
Particular emphasis is put on its computational implementation in a multiphysics finite element code. The algorithm is based on the method of partial Dirichlet-Neumann boundary conditions and is capable of solving numerically a nonlinear contact problem between rigid and deformable bodies in a fully parallel framework. Its distinctive characteristic is that the contact is tackled as a coupled problem, in which the contacting bodies are treated separately, in a staggered way. The coupling is then performed through the exchange of boundary conditions at the contact interface following a Gauss-Seidel strategy. To validate this algorithm we conducted several benchmark tests, comparing the proposed solution against theoretical and other numerical solutions. Finally, we evaluated the parallel performance of the proposed algorithm in a real impact test to show its capabilities for large-scale applications. ",A parallel algorithm for unilateral contact problems " The quantum thermal bath (QTB) has been presented as an alternative to path-integral-based methods to introduce nuclear quantum effects in molecular dynamics simulations. The method has proved to be efficient, yielding accurate results for various systems. However, the QTB method is prone to zero-point energy leakage (ZPEL) in highly anharmonic systems. This is a well-known problem in methods based on classical trajectories, where part of the energy of the high-frequency modes is transferred to the low-frequency modes, leading to a wrong energy distribution. In some cases, the ZPEL can have dramatic consequences on the properties of the system. Thus, we investigate the ZPEL by testing the QTB method on selected systems with increasing complexity in order to study the conditions and the parameters that influence the leakage. We also analyze the consequences of the ZPEL on the structural and vibrational properties of the system. 
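The staggered Dirichlet-Neumann exchange with a Gauss-Seidel strategy described in the contact-algorithm abstract above can be illustrated on a toy 1D problem (this is a generic sketch of the coupling pattern, not the paper's multiphysics finite element solver; all names and the relaxation value are illustrative):

```python
def dirichlet_neumann_interface(theta=0.6, tol=1e-12, max_iter=200):
    """Gauss-Seidel staggered coupling for the toy problem u'' = 0 with
    u(0) = 0, u(1) = 1, the domain split at the interface x = 0.5.

    Subdomain 1 receives a Dirichlet value at the interface and returns
    its flux; subdomain 2 receives that flux as a Neumann condition and
    returns an updated interface value, with under-relaxation theta.
    """
    lam = 0.0                          # current interface value u(0.5)
    for _ in range(max_iter):
        flux = 2.0 * lam               # subdomain 1: u1(x) = 2*lam*x, slope at x=0.5
        lam_new = 1.0 - 0.5 * flux     # subdomain 2: u2(x) = 1 + flux*(x-1) at x=0.5
        if abs(lam_new - lam) < tol:
            break
        lam = theta * lam_new + (1.0 - theta) * lam   # relaxed Gauss-Seidel update
    return lam                         # exact interface value is 0.5
```

With theta = 0.6 the fixed-point map contracts with factor 0.2 per sweep and converges to the exact interface value 0.5; without relaxation (theta = 1) the same exchange oscillates, which is why staggered schemes of this kind typically under-relax the interface update.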
We find that the leakage is particularly dependent on the damping coefficient and that increasing its value can reduce and, in some cases, completely remove the ZPEL. When using sufficiently high values for the damping coefficient, the expected energy distribution among the vibrational modes is ensured. In this case, the QTB method gives very encouraging results. In particular, the structural properties are well reproduced. The dynamical properties should be regarded with caution, although valuable information can still be extracted from the vibrational spectrum, even for large values of the damping term. ",Zero-Point Energy Leakage in Quantum Thermal Bath Molecular Dynamics Simulations " A previous work establishing a connection between a quark model, with relativistic kinematics and a $Y$-confinement plus one gluon exchange, and the $1/N_c$ expansion mass formula is extended to strange baryons. Both methods predict values for the SU(3)-breaking mass terms which are in good agreement with each other. Strange and nonstrange baryons are shown to exhibit Regge trajectories with an equal slope, but with an intercept depending on the strangeness. Both approaches agree on the value of the slope and of the intercept and on the existence of a single good quantum number labeling the baryons within a given Regge trajectory. ",Mass formula for strange baryons in large $N_c$ QCD versus quark model " This paper introduces SigMaNet, a generalized Graph Convolutional Network (GCN) capable of handling both undirected and directed graphs with weights not restricted in sign nor magnitude.
As a byproduct we show that the smallest divisionally free rank $3$ arrangement which is not inductively free has $14$ hyperplanes and exists in all characteristics distinct from $2$ and $5$. Another database query proves that Terao's freeness conjecture is true for rank $3$ arrangements with $14$ hyperplanes in any characteristic. ",On the generation of rank 3 simple matroids with an application to Terao's freeness conjecture " In this article we study the topology of a family of real analytic germs $F \colon (\mathbb{C}^3,0) \to (\mathbb{C},0)$ with an isolated critical point at 0, given by $F(x,y,z)=f(x,y)\bar{g(x,y)}+z^r$, where $f$ and $g$ are holomorphic, $r \in \mathbb{Z}^+$ and $r \geq 2$. We describe the link $L_F$ as a graph manifold using its natural open book decomposition, related to the Milnor fibration of the map-germ $f \bar{g}$ and the description of its monodromy as a quasi-periodic diffeomorphism through its Nielsen invariants. Furthermore, such a germ $F$ gives rise to a Milnor fibration $\frac{F}{|F|} \colon \mathbb{S}^5 \setminus L_F \to \mathbb{S}^1$. We present a join theorem, which allows us to describe the homotopy type of the Milnor fibre of $F$, and we show some cases where the open book decomposition of $\mathbb{S}^5$ given by the Milnor fibration of $F$ cannot come from the Milnor fibration of a complex singularity in $\mathbb{C}^3$. ",The topology of real suspension singularities of type $f \bar{g}+z^n$ " Sense embedding learning methods learn different embeddings for the different senses of an ambiguous word. One sense of an ambiguous word might be socially biased while its other senses remain unbiased. In comparison to the numerous prior works evaluating the social biases in pretrained word embeddings, the biases in sense embeddings have been relatively understudied. We create a benchmark dataset for evaluating the social biases in sense embeddings and propose novel sense-specific bias evaluation measures. 
We conduct an extensive evaluation of multiple static and contextualised sense embeddings for various types of social biases using the proposed measures. Our experimental results show that even in cases where no biases are found at word-level, there still exist worrying levels of social biases at sense-level, which are often ignored by the word-level bias evaluation measures. ",Sense Embeddings are also Biased--Evaluating Social Biases in Static and Contextualised Sense Embeddings " The ability to detect weak distributed activation patterns in networks is critical to several applications, such as identifying the onset of anomalous activity or incipient congestion in the Internet, or faint traces of a biochemical spread by a sensor network. This is a challenging problem since weak distributed patterns can be invisible in per node statistics as well as a global network-wide aggregate. Most prior work considers situations in which the activation/non-activation of each node is statistically independent, but this is unrealistic in many problems. In this paper, we consider structured patterns arising from statistical dependencies in the activation process. Our contributions are three-fold. First, we propose a sparsifying transform that succinctly represents structured activation patterns that conform to a hierarchical dependency graph. Second, we establish that the proposed transform facilitates detection of very weak activation patterns that cannot be detected with existing methods. Third, we show that the structure of the hierarchical dependency graph governing the activation process, and hence the network transform, can be learnt from very few (logarithmic in network size) independent snapshots of network activity. ",Detecting Weak but Hierarchically-Structured Patterns in Networks " We investigate the evolution of extreme ultraviolet (XUV) spectral lineshapes in an optically-thick helium gas under near-infrared (IR) perturbation. 
In our experimental and theoretical work, we systematically vary the IR intensity, time-delay, gas density and IR polarization parameters to study lineshape modifications induced by collective interactions, in a regime beyond the single atom response of a thin, dilute gas. In both experiment and theory, we find that specific features in the frequency-domain absorption profile, and their evolution with propagation distance, can be attributed to the interplay between resonant attosecond pulse propagation and IR-induced phase shifts. Our calculations show that this interplay also manifests itself in the time domain, with the IR pulse influencing the reshaping of the XUV pulse propagating in the resonant medium. ",Attosecond Transient Absorption in Dense Gases: Exploring the Interplay between Resonant Pulse Propagation and Laser-Induced Line Shape Control " The swelling equilibrium of Olympic gels, which are composed of entangled cyclic polymers, is studied by Monte Carlo simulations. In contrast to chemically crosslinked polymer networks, we observe that Olympic gels made of chains with a \emph{larger} degree of polymerization, $N$, exhibit a \emph{smaller} equilibrium swelling degree, $Q\propto N^{-0.28}\phi_{0}^{-0.72}$, at the same polymer volume fraction $\phi_{0}$ at network preparation. This observation is explained by a desinterspersion process of overlapping non-concatenated rings upon swelling. ",The Swelling of Olympic Gels " Object detection when provided image-level labels instead of instance-level labels (i.e., bounding boxes) during training is an important problem in computer vision, since large-scale image datasets with instance-level labels are extremely costly to obtain. In this paper, we address this challenging problem by developing an Expectation-Maximization (EM) based object detection method using deep convolutional neural networks (CNNs). Our method is applicable to both the weakly-supervised and semi-supervised settings. 
Extensive experiments on the PASCAL VOC 2007 benchmark show that (1) in the weakly supervised setting, our method provides significant detection performance improvement over current state-of-the-art methods, and (2) having access to a small number of strongly (instance-level) annotated images, our method can almost match the performance of the fully supervised Fast R-CNN. We share our source code at https://github.com/ZiangYan/EM-WSD. ",Weakly- and Semi-Supervised Object Detection with Expectation-Maximization Algorithm " We define the notion of basic set data for finite groups (building on the notion of basic set, but including an order on the irreducible characters as part of the structure), and we prove that the Springer correspondence provides basic set data for Weyl groups. Then we use this to determine explicitly the modular Springer correspondence for classical types (defined over a base field of odd characteristic $p$, and with coefficients in a field of odd characteristic $\ell\neq p$): the modular case is obtained as a restriction of the ordinary case to a basic set. In order to do so, we compare the order on bipartitions introduced by Dipper and James with the order induced by the Springer correspondence. We also provide a quicker proof, by sorting characters according to the dimension of the corresponding Springer fiber, an invariant which is directly computable from symbols. ",Springer basic sets and modular Springer correspondence for classical types " The SciBooNE experiment (Fermilab) recently published results of a search for charged current coherent pion production in neutrino mode: muon neutrinos scattering on carbon. The results of this study are that no evidence for coherent pion production is observed, and SciBooNE set 90% confidence level upper limits on the cross section ratio of charged current coherent pion production to the total charged current cross section. 
Recently proposed new coherent pion models predict the production of charged current coherent pion events just below SciBooNE's upper limit. Motivated by this, we performed a search for charged current coherent pion production using SciBooNE's collected antineutrino data, since antineutrino data are expected to be more sensitive to coherent pion production than neutrino data. This paper describes preliminary results of a search for antineutrino charged current coherent pion production at the SciBooNE experiment. ",Search for Antineutrino Charged Current Coherent Pion Production at SciBooNE " Using numerical hydrodynamics simulations we studied the gravitational collapse of pre-stellar cores of sub-solar mass embedded into a low-density external environment. Four models with different magnitude and direction of rotation of the external environment with respect to the central core were studied and compared with an isolated model. We found that the infall of matter from the external environment can significantly alter the disk properties as compared to those seen in the isolated model. Depending on the magnitude and direction of rotation of the external environment, a variety of disks can form, including compact (<= 200 AU) ones shrinking in size due to infall of external matter with low angular momentum, as well as extended disks forming due to infall of external matter with high angular momentum. The former are usually stable against gravitational fragmentation, while the latter are prone to fragmentation and formation of stellar systems with sub-stellar/very-low-mass companions. In the case of a counterrotating external environment, very compact (< 5 AU) and short-lived (<= a few * 10^5 yr) disks can form when the infalling material has low angular momentum. 
The most interesting case is found for the infall of counterrotating external material with high angular momentum, leading to the formation of counterrotating inner and outer disks separated by a deep gap at a few tens of AU. The gap migrates inward due to accretion of the inner disk onto the protostar, turns into a central hole, and finally disappears, giving way to the outer, strongly gravitationally unstable disk. This model may lead to the emergence of a transient stellar system with sub-stellar/very-low-mass components counterrotating with respect to the star. ",The effect of external environment on the evolution of protostellar disks " We consider the phase-integral method applied to an arbitrary linear ordinary second-order differential equation with non-analytical coefficients. We propose a universal technique based on the Frobenius method which allows one to obtain a new exact relation between connection matrices associated with its general solution. The technique allows the reader to write an exact algebraic equation for the Stokes constants provided the differential equation has at most one regular singular point in a finite area of the complex plane. We also propose a way to write approximate relations between Stokes constants in the case of multiple regular singular points located far away from each other. The well-known Budden problem is solved with the help of this technique as an illustration of its usage. To access the HTML version of the paper & discuss it with the author, visit https://enabla.com/pub/607. ",On a new exact relation for the connection matrices in case of a linear second-order ODE with non-analytic coefficients " We report optical, infrared, and X-ray light curves for the outburst, in 2000, of the black hole candidate XTE J1550-564. We find that the start of the outburst in the H and V bands precedes that seen in the RXTE All Sky Monitor by 11.5 +/- 0.9 and 8.8 +/- 0.6 days, respectively; a similar delay has been observed in two other systems. 
About 50 days after the primary maxima in the VIH light curves, we find secondary maxima, most prominently in H. This secondary peak is absent in the X-ray light curve, but coincides with a transition to the low/hard state. We suggest that this secondary peak may be due to non-thermal emission associated with the formation of a jet. ",Multiwavelength Observations of the Black Hole Candidate XTE J1550-564 during the 2000 Outburst " Let the bielliptic locus be the closure in the moduli space of stable curves of the locus of smooth curves that are double covers of genus 1 curves. In this paper we compute the class of the bielliptic locus in \bar{M}_3 in terms of a standard basis of the rational Chow group of codimension-2 classes in the moduli space. Our method is to test the class on the hyperelliptic locus: this gives the desired result up to two free parameters, which are then determined by intersecting the locus with two surfaces in \bar{M}_3. ",The class of the bielliptic locus in genus 3 " The dc Josephson effect in a one-dimensional Tomonaga-Luttinger (TL) liquid is studied on the basis of two bosonized models. We first consider a TL liquid sandwiched between two superconductors with a strong barrier at each interface. Both interfaces are assumed to be perfect if the barrier potential is absent. We next consider a TL liquid with open boundaries, weakly coupled with two superconductors. Without putting strong barriers, we instead assume that the coupling at each interface is described by a tunnel junction. We calculate the Josephson current in each model, and find that the two models yield the same results. The Josephson current is suppressed by repulsive electron-electron interactions. It is shown that the suppression is characterized by only the correlation exponent for the charge degrees of freedom. This result is inconsistent with a previously reported result, where the spin degrees of freedom also affect the suppression. 
The reason for this inconsistency is discussed. ",DC Josephson Effect in a Tomonaga-Luttinger Liquid " Model quantization helps to reduce model size and latency of deep neural networks. Mixed precision quantization is favorable with customized hardware supporting arithmetic operations at multiple bit-widths to achieve maximum efficiency. We propose a novel learning-based algorithm to derive mixed precision models end-to-end under target computation constraints and model sizes. During the optimization, the bit-width of each layer/kernel in the model is at a fractional status of two consecutive bit-widths, which can be adjusted gradually. With a differentiable regularization term, the resource constraints can be met during the quantization-aware training, which results in an optimized mixed precision model. Further, our method can be naturally combined with channel pruning for better computation cost allocation. Our final models achieve comparable or better performance than previous quantization methods with mixed precision on MobileNetV1/V2 and ResNet18 under different resource constraints on the ImageNet dataset. ",FracBits: Mixed Precision Quantization via Fractional Bit-Widths " In this paper, we consider the global regularity for Monge-Amp\`ere type equations with the Neumann boundary conditions on Riemannian manifolds. It is known that the classical solvability of the Neumann boundary value problem is obtained under some necessary assumptions. Our main result extends the main theorem from the case of Euclidean space $R^n$ in [11] to Riemannian manifolds. ",Monge-Amp\`ere type equations with Neumann boundary conditions on Riemannian manifolds " This article addresses two topics of significant mathematical and practical interest in the theory of kernel approximation: the existence of local and stable bases and the L_p--boundedness of the least squares operator. 
The latter is an analogue of the classical problem in univariate spline theory, known there as the ""de Boor conjecture"". A corollary of this work is that for appropriate kernels the least squares projector provides universal near-best approximations for functions f\in L_p, 1\le p\le \infty. ",Kernel Approximation on Manifolds II: The $L_{\infty}$-norm of the $L_2$-projector " A model for planar phenomena introduced by Jackiw and Pi and described by a Lagrangian including a Chern-Simons term is considered. The associated equations of motion, among which is a 2+1 gauged nonlinear Schr\""odinger equation, are rewritten into a gauge-independent form involving the modulus of the matter field. Application of a Painlev\'e analysis, as adapted to partial differential equations by Weiss, Tabor and Carnevale, reveals resonance values that are all integers. However, compatibility conditions need to be considered, which cannot be satisfied consistently in general. Such a result suggests that the examined equations are not integrable, but provides tools for the investigation of the integrability of different reductions. This in particular puts forward the familiar integrable Liouville and 1+1 nonlinear Schr\""odinger equations. ",Painlev\'e analysis and integrability properties of a $2+1$ nonrelativistic field theory. " All the possible schemes of neutrino mixing with four massive neutrinos inspired by the existing experimental indications in favor of neutrino mixing are considered in a model-independent way. Assuming that in short-baseline experiments only one mass-squared difference is relevant, it is shown that the scheme with a neutrino mass hierarchy is not compatible with the experimental results. Only two schemes with two pairs of neutrinos with close masses separated by a mass difference of the order of 1 eV are in agreement with the results of all experiments. 
One of these schemes leads to possibly observable effects in tritium and neutrinoless double-beta decay experiments. ",Neutrino mass spectrum from the results of neutrino oscillation experiments " We present a robust approach for detecting intrinsic sentence importance in news, by training on two corpora of document-summary pairs. When used for single-document summarization, our approach, combined with the ""beginning of document"" heuristic, outperforms a state-of-the-art summarizer and the beginning-of-article baseline in both automatic and manual evaluations. These results represent an important advance because in the absence of cross-document repetition, single document summarizers for news have not been able to consistently outperform the strong beginning-of-article baseline. ",Detecting (Un)Important Content for Single-Document News Summarization " We introduce a class of quantum adiabatic evolutions that we claim may be interpreted as the equivalents of the unitary gates of the quantum gate model. We argue that these gates form a universal set and may therefore be used as building blocks in the construction of arbitrary `adiabatic circuits', analogously to the manner in which gates are used in the circuit model. One implication of the above construction is that arbitrary classical boolean circuits as well as gate model circuits may be directly translated to adiabatic algorithms with no additional resources or complexities. We show that while these adiabatic algorithms fail to exhibit certain aspects of the inherent fault tolerance of traditional quantum adiabatic algorithms, they may have certain other experimental advantages acting as quantum gates. ",Quantum Gates with Controlled Adiabatic Evolutions " Observation is reported for a structure near the $J/\psi\phi$ threshold in $B^+\rightarrow J/\psi\phi K^+$ decays produced in $\bar{p} p $ collisions at $\sqrt{s}=1.96 \TeV$ with a statistical significance exceeding 5 standard deviations. 
,Observation of a Narrow Near-Threshold Structure in the $J/\psi\phi$ Mass Spectrum in $B^+\to J/\psi\phi K^+$ Decays " The origin of non-Poissonian or bursty temporal patterns observed in various datasets for human social dynamics has been extensively studied, yet its understanding still remains incomplete. Considering the fact that humans are social beings, a fundamental question arises: Is the bursty human dynamics dominated by individual characteristics or by interaction between individuals? In this paper we address this question by analyzing the Wikipedia edit history to see how spontaneous individual editors are in initiating bursty periods of editing, i.e., individual-driven burstiness, and to what extent such editors' behaviors are driven by interaction with other editors in those periods, i.e., interaction-driven burstiness. We quantify the degree of initiative (DoI) of an editor of interest in each Wikipedia article by using the statistics of bursty periods containing the editor's edits. The integrated value of the DoI over all relevant timescales reveals which is dominant between individual-driven and interaction-driven burstiness. We empirically find that this value tends to be larger for weaker temporal correlations in the editor's editing behavior and/or stronger editorial correlations. These empirical findings are successfully confirmed by deriving an analytic form of the DoI from a model capturing the essential features of the edit sequence. Thus our approach provides a deeper insight into the origin and underlying mechanisms of bursts in human social dynamics. ",Individual-driven versus interaction-driven burstiness in human dynamics: The case of Wikipedia edit history " We derive general structure and rigidity theorems for submetries $f: M \to X$, where $M$ is a Riemannian manifold with sectional curvature $\sec M \ge 1$. When applied to a non-trivial Riemannian submersion, it follows that $diam X \leq \pi/2 $. 
In the case of equality, there is a Riemannian submersion $\mathbb{S} \to M$ from a unit sphere, and as a consequence, $f$ is known up to metric congruence. A similar rigidity theorem also holds in the general context of Riemannian foliations. ",Rigidity theorems for submetries in positive curvature " We derive a computationally efficient expression for the photon counting distribution of a uniformly illuminated array of single photon detectors. The expression takes the number of single detectors, their quantum efficiency, and their dark-count rate into account. Using this distribution we compute the error of the array detector by comparing the output to that of an ideal detector. We conclude from the error analysis that the quantum efficiency must be very high in order for the detector to resolve a handful of photons with high probability. Furthermore, we conclude that in the worst-case scenario the required array size scales quadratically with the number of photons that should be resolved. We also simulate a temporal array and investigate how large the error is for different parameters, and we compute the optimal size of the array that yields the smallest error. ",Photon Counting Distribution for Arrays of Single-Photon Detectors " We prove polynomial upper bounds of geometric Ramsey numbers of pathwidth-2 outerplanar triangulations in both convex and general cases. We also prove that the geometric Ramsey numbers of the ladder graph on $2n$ vertices are bounded by $O(n^{3})$ and $O(n^{10})$, in the convex and general case, respectively. We then apply similar methods to prove an $n^{O(\log(n))}$ upper bound on the Ramsey number of a path with $n$ ordered vertices. ",On the Geometric Ramsey Number of Outerplanar Graphs " As Internet streaming of live content has gained on traditional cable TV viewership, we have also seen significant growth of free live streaming services which illegally provide free access to copyrighted content over the Internet. 
Some of these services draw millions of viewers each month. Moreover, this viewership has continued to increase, despite the consistent coupling of this free content with deceptive advertisements and user-hostile tracking. In this paper, we explore the ecosystem of free illegal live streaming services by collecting and examining the behavior of a large corpus of illegal sports streaming websites. We explore and quantify evidence of user tracking via third-party HTTP requests, cookies, and fingerprinting techniques on more than $27,303$ unique video streams provided by $467$ unique illegal live streaming domains. We compare the behavior of illegal live streaming services with legitimate services and find that the illegal services go to much greater lengths to track users than most legitimate services, and use more obscure tracking services. Similarly, we find that moderated sites that aggregate links to illegal live streaming content fail to moderate out sites that go to significant lengths to track users. In addition, we perform several case studies which highlight deceptive behavior and modern techniques used by some domains to avoid detection, monetize traffic, or otherwise exploit their viewers. Overall, we find that despite recent improvements in mechanisms for detecting malicious browser extensions, ad-blocking, and browser warnings, users of free illegal live streaming services are still exposed to deceptive ads, malicious browser extensions, scams, and extensive tracking. We conclude with insights into the ecosystem and recommendations for addressing the challenges highlighted by this study. ",The Price of Free Illegal Live Streaming Services " We consider the behavior of gradient flow and of discrete and noisy gradient descent. It is commonly noted that the addition of noise to the process of discrete gradient descent can affect the trajectory of gradient descent. In previous work, we observed such effects. 
There, we considered the case where the minima had codimension 1. In this note, we do some computer experiments and observe the behavior of noisy gradient descent in the more complex setting of minima of higher codimension. ",Gradient descent in higher codimension " We give an alternative proof of the characterization of Besov spaces with negative exponents by means of integrability of harmonic functions with a weight depending on the distance to the boundary. ",On a new characterisation of Besov spaces with negative exponents " Most machine learning approaches have stemmed from applying the principle of minimizing the mean squared distance, based on computationally efficient quadratic optimization methods. However, when faced with high-dimensional and noisy data, quadratic error functionals have demonstrated many weaknesses, including high sensitivity to contaminating factors and the curse of dimensionality. Therefore, many recent applications in machine learning have exploited properties of non-quadratic error functionals based on the $L_1$ norm or even sub-linear potentials corresponding to quasinorms $L_p$ ($095% during the last 12 years. We will compare the GOLF data with some of the above-mentioned solar activity indexes. ",GOLF: a new proxy for solar magnetism " A complete well-defined sample of ultracool dwarfs is one of the key science programs of the Pan-STARRS 1 optical survey telescope (PS1). Here we combine PS1 commissioning data with 2MASS to conduct a proper motion search (0.1--2.0\arcsec/yr) for nearby T dwarfs, using optical+near-IR colors to select objects for spectroscopic followup. The addition of sensitive far-red optical imaging from PS1 enables discovery of nearby ultracool dwarfs that cannot be identified from 2MASS data alone. We have searched 3700 sq. deg. of PS1 y-band (0.95--1.03 um) data to y$\approx$19.5 mag (AB) and J$\approx$16.5 mag (Vega) and discovered four previously unknown bright T dwarfs. 
Three of the objects (with spectral types T1.5, T2 and T3.5) have photometric distances within 25 pc and were missed by previous 2MASS searches due to more restrictive color selection criteria. The fourth object (spectral type T4.5) is more distant than 25 pc and is only a single-band detection in 2MASS. We also examine the potential for completing the census of nearby ultracool objects with the PS1 3$\pi$ survey. ",Four new T dwarfs identified in PanSTARRS 1 commissioning data " We extend the Polyakov-loop improved Nambu--Jona-Lasinio (PNJL) model to the 2+1 flavor case to study the chiral and deconfinement transitions of strongly interacting matter at finite temperature and nonzero chemical potential. The Polyakov-loop, the chiral susceptibility of light quarks (u and d) and the strange quark number susceptibility as functions of temperature at zero chemical potential are determined and compared with the recent results of Lattice QCD simulations. We find that there is always an inflection point in the curve of the strange quark number susceptibility accompanying the appearance of the deconfinement phase, which is consistent with the result of Lattice QCD simulations. Predictions for the case at nonzero chemical potential and finite temperature are made as well. We give the phase diagram in terms of the chemical potential and temperature and find that the critical endpoint (CEP) moves down to low temperature and finally disappears with the decrease of the strength of the 't Hooft flavor-mixing interaction. ",2+1 Flavor Polyakov--Nambu--Jona-Lasinio Model at Finite Temperature and Nonzero Chemical Potential " Mortality prediction in intensive care units is considered one of the critical steps for efficiently treating patients in serious condition. As a result, various prediction models have been developed to address this problem based on modern electronic healthcare records. 
However, it becomes increasingly challenging to model such tasks as time series variables because some laboratory test results such as heart rate and blood pressure are sampled with inconsistent time frequencies. In this paper, we propose several deep learning models using the same features as the SAPS II score. To derive insight into the proposed models' performance, several experiments were conducted based on the well-known clinical dataset Medical Information Mart for Intensive Care III. The prediction results demonstrate the proposed models' capability in terms of precision, recall, F1 score, and area under the receiver operating characteristic curve. ",Building Deep Learning Models to Predict Mortality in ICU Patients " In this paper, we introduce a discontinuous Finite Element formulation on simplicial unstructured meshes for the study of free surface flows based on the fully nonlinear and weakly dispersive Green-Naghdi equations. Working with a new class of asymptotically equivalent equations, which have a simplified analytical structure, we consider a decoupling strategy: we approximate the solutions of the classical shallow water equations supplemented with a source term globally accounting for the non-hydrostatic effects, and we show that this source term can be computed through the resolution of scalar elliptic second-order sub-problems. The assets of the proposed discrete formulation are: (i) the handling of arbitrary unstructured simplicial meshes, (ii) an arbitrary order of approximation in space, (iii) the exact preservation of the motionless steady states, (iv) the preservation of the water height positivity, (v) a simple way to enhance any numerical code based on the nonlinear shallow water equations. The resulting numerical model is validated through several benchmarks involving nonlinear wave transformations and run-up over complex topographies. 
",A discontinuous Galerkin method for a new class of Green-Naghdi equations on simplicial unstructured meshes " Non-technical end-users are silent and invisible users of state-of-the-art explainable artificial intelligence (XAI) technologies. Their demands and requirements for AI explainability are not incorporated into the design and evaluation of XAI techniques, which are developed to explain the rationales of AI decisions to end-users and assist their critical decisions. This makes XAI techniques ineffective or even harmful in high-stakes applications, such as healthcare, criminal justice, finance, and autonomous driving systems. To systematically understand end-users' requirements to support the technical development of XAI, we conducted the EUCA user study with 32 layperson participants in four AI-assisted critical tasks. The study identified comprehensive user requirements for feature-, example-, and rule-based XAI techniques (manifested by the end-user-friendly explanation forms) and XAI evaluation objectives (manifested by the explanation goals), which were shown to be helpful in directly inspiring the proposal of new XAI algorithms and evaluation metrics. The EUCA study findings, the identified explanation forms and goals for technical specification, and the EUCA study dataset support the design and evaluation of end-user-centered XAI techniques for accessible, safe, and accountable AI. ",Invisible Users: Uncovering End-Users' Requirements for Explainable AI via Explanation Forms and Goals " Ultraluminous infrared galaxies (ULIRGs) are outstanding due to their huge luminosity output in the infrared, which is predominantly powered by super starbursts and/or hidden active galactic nuclei (AGN). NGC 6240 is one of the nearest ULIRGs and is considered a key representative of its class. Here, we report the first high-resolution imaging spectroscopy of NGC 6240 in X-rays. 
The observation, performed with the ACIS-S detector aboard the Chandra X-ray Observatory, led to the discovery of two hard nuclei, coincident with the optical-IR nuclei of NGC 6240. The AGN character of both nuclei is revealed by the detection of absorbed hard, luminous X-ray emission and two strong neutral Fe K_alpha lines. In addition, extended X-ray emission components are present, whose rich structure changes with energy. The close correlation of the extended emission with the optical Halpha emission of NGC 6240, in combination with the softness of its spectrum, clearly indicates its relation to starburst-driven superwind activity. ",Discovery of a binary AGN in the ultraluminous infrared galaxy NGC 6240 using Chandra " The notion of generalized rank invariant in the context of multiparameter persistence has become an important ingredient for defining interesting homological structures such as generalized persistence diagrams. Naturally, computing these rank invariants efficiently is a prelude to computing any of these derived structures efficiently. We show that the generalized rank over a finite interval $I$ of a $\mathbb{Z}^2$-indexed persistence module $M$ is equal to the generalized rank of the zigzag module that is induced on a certain path in $I$ tracing mostly its boundary. Hence, we can compute the generalized rank over $I$ by computing the barcode of the zigzag module obtained by restricting the bifiltration inducing $M$ to that path. If the bifiltration and $I$ have at most $t$ simplices and points respectively, this computation takes $O(t^\omega)$ time where $\omega\in[2,2.373)$ is the exponent of matrix multiplication. Among others, we apply this result to obtain an improved algorithm for the following problem. Given a bifiltration inducing a module $M$, determine whether $M$ is interval decomposable and, if so, compute all intervals supporting its summands. 
",Computing Generalized Rank Invariant for 2-Parameter Persistence Modules via Zigzag Persistence and its Applications " We analyze the second order QCD corrections to the fragmentation functions F_k^H(x,Q^2) (k=T,L,A) which are measured in e^+~e^- annihilation. From these fragmentation functions one can derive the integrated transverse (sigma_T), longitudinal (sigma_L) and asymmetric (sigma_A) cross sections. The sum sigma_{tot}=sigma_T+sigma_L corrected up to order alpha_s^2 agrees with the well known result in the literature. It turns out that the order alpha_s^2 corrections to the transverse and asymmetric quantities are small in contrast to our findings for F_L^H(x,Q^2) and sigma_L where they turn out to be large. Therefore in the latter case one gets a better agreement between the theoretical predictions and the data obtained from the LEP experiments. ",Higher order QCD Corrections to Fragmentation Functions in e^+~e^- Annihilation " This work considers secure and reliable information transmission in two-hop relay wireless networks without knowledge of either the eavesdroppers' channels or locations. While previous work on this problem mainly studied infinite networks and their asymptotic behavior and scaling law results, this paper focuses on a more practical network with a finite number of system nodes and explores the corresponding exact results on the number of eavesdroppers the network can tolerate to ensure a desired secrecy and reliability. For achieving secure and reliable information transmission in a finite network, two transmission protocols are considered in this paper: one adopts an optimal but complex relay selection process with less load balance capacity, while the other adopts a random but simple relay selection process with good load balance capacity. 
Theoretical analysis is further provided to determine the exact maximum number of independent, uniformly distributed eavesdroppers the network can tolerate while satisfying a specified requirement on the maximum secrecy outage probability and the maximum transmission outage probability allowed. ",Secure and Reliable Transmission with Cooperative Relays in Two-Hop Wireless Networks " We reevaluate the hadronic part of the electromagnetic vacuum polarization using the standard dispersion integral approach that utilizes the hadronic cross section measured in $e^+e^-$ experiments as input. Previous analyses are based upon point-by-point trapezoidal integration, which does not treat experimental errors in an optimal way. We use a technique that weights the experimental inputs by their stated uncertainties, includes correlations, and incorporates some refinements. We find the five-flavor hadronic contribution to the fractional change in the electromagnetic coupling constant at $q^2=M_Z^2$, $\Delta\alpha(M_Z)$, to be $0.02752\pm0.00046$, which leads to a value of the electromagnetic coupling constant, $\alpha^{-1}(M_Z^2) = 128.96\pm0.06$. [This is an updated version of SLAC-PUB-6710 (hep-ph/9411353) which fixes a small bias in the fitting procedure (1/3 of the change) and incorporates a new and precise cross section measurement near charm threshold (2/3 of the change).] ",Reevaluation of the Hadronic Contribution to $\alpha(M_Z^2)$ " In this paper, we prove that the net of transition chains is $\delta$-dense for nearly integrable positive definite Hamiltonian systems with 3 degrees of freedom, in the cusp-residual generic sense in the $C^r$-topology, $r\ge 6$. The main ingredients of the proof appeared in \cite{CZ,C17a,C17b}. As an immediate consequence, Arnold diffusion exists among this class of Hamiltonian systems. The question of \cite{C17c} is answered in Section 9 of the paper. 
",The genericity of Arnold diffusion in nearly integrable Hamiltonian systems " Radio jets can play multiple roles in the feedback loop by regulating the accretion of the gas, by enhancing gas turbulence, and by driving gas outflows. Numerical simulations are beginning to make detailed predictions about these processes. Using high resolution VLBI observations we test these predictions by studying how radio jets of different power and in different phases of evolution affect the properties and kinematics of the surrounding HI gas. Consistent with predictions, we find that young (or recently restarted) radio jets have a stronger impact, as shown by the presence of HI outflows. The outflowing medium is clumpy (with clouds with sizes up to a few tens of pc and masses of ~10^4 m_sun) already in the region close to the nucleus ($< 100$ pc), making the jet interact strongly and shock the surrounding gas. We present a case of a low-power jet where, as suggested by the simulations, the injection of energy may produce an increase in the turbulence of the medium instead of an outflow. ",The parsec-scale structure of jet-driven HI outflows in radio galaxies " Superradiance has been extensively studied in the 1970s and 1980s in the regime of superfluorescence, where a large number of atoms are initially excited. Cooperative scattering in the linear-optics regime, or ""single-photon superradiance"", has been investigated much more recently, and superradiant decay has also been predicted, even for a spherical sample of large extent and low density, where the distance between atoms is much larger than the wavelength. Here, we demonstrate this effect experimentally by directly measuring the decay rate of the off-axis fluorescence of a large and dilute cloud of cold rubidium atoms after the sudden switch-off of a low-intensity laser driving the atomic transition. We show that, at large detuning, the decay rate increases with the on-resonance optical depth. 
In contrast to forward scattering, the superradiant decay of off-axis fluorescence is suppressed near resonance due to attenuation and multiple-scattering effects. ",Superradiance in a Large and Dilute Cloud of Cold Atoms in the Linear-Optics Regime " Autonomous race cars require perception, estimation, planning, and control modules which work together asynchronously while driving at the limit of a vehicle's handling capability. A fundamental challenge encountered in designing these software components lies in predicting the vehicle's future state (e.g. position, orientation, and speed) with high accuracy. The root cause is the difficulty in identifying vehicle model parameters that capture the effects of lateral tire slip. We present a model-based planning and control framework for autonomous racing that significantly reduces the effort required in system identification and control design. Our approach alleviates the gap induced by simulation-based controller design by learning from on-board sensor measurements. A major focus of this work is empirical, thus, we demonstrate our contributions by experiments on validated 1:43 and 1:10 scale autonomous racing simulations. ",BayesRace: Learning to race autonomously using prior experience " Transmission of a hydrogen detonation wave (DW) in an inert particle curtain is simulated using the Eulerian-Lagrangian approach with gas-particle two-way coupling. A detailed chemical mechanism is used for hydrogen detonative combustion and parametric studies are conducted based on a two-dimensional computational domain. A detonation map of propagation and extinction corresponding to various particle sizes, concentrations, and curtain thicknesses is plotted. It is shown that the critical curtain thickness decreases considerably when the particle concentration is less than the critical value. The effects of curtain thickness on the trajectories of peak pressure, shock front speed, and heat release rate are examined. 
Three propagation modes of the DW in the particle curtain are found: detonation transmission, partial extinction and detonation re-initiation, and detonation extinction. The chemical explosive mode analysis confirms that a detonation re-initiation event is caused by a re-initiation point with high pressure and explosive propensity, resulting from transverse shock focusing. The influence of particle diameter and concentration, and curtain thickness on the DW is also examined with peak pressure trajectories, shock speed, and interphase exchange rates of energy and momentum. Furthermore, the evolutions of curtain morphologies are analyzed by the particle velocity, volume fraction, Stokes drag and Archimedes force. This analysis confirms the importance of the drag force in influencing the change of curtain morphologies. Different curtain evolution regimes are found: quasi-stationary regime, shrinkage regime, constant-thickness regime, and expansion regime. Finally, the influences of the curtain thickness on the characteristic time of curtain evolutions are studied. ",Transmission of hydrogen detonation across a curtain of dilute inert particles " Single image reflection separation (SIRS), as a representative blind source separation task, aims to recover two layers, $\textit{i.e.}$, transmission and reflection, from one mixed observation, which is challenging due to the highly ill-posed nature. Existing deep learning based solutions typically restore the target layers individually, or with some concerns at the end of the output, barely taking into account the interaction across the two streams/branches. In order to utilize information more efficiently, this work presents a general yet simple interactive strategy, namely $\textit{your trash is my treasure}$ (YTMT), for constructing dual-stream decomposition networks. To be specific, we explicitly enforce the two streams to communicate with each other block-wisely. 
Inspired by the additive property between the two components, the interactive path can be easily built by transferring, instead of discarding, the information deactivated by the ReLU rectifier from one stream to the other. Ablation studies and experiments on widely-used SIRS datasets demonstrate the efficacy of YTMT and reveal its superiority over other state-of-the-art alternatives. The implementation is quite simple and our code is publicly available at $\href{https://github.com/mingcv/YTMT-Strategy}{\textit{https://github.com/mingcv/YTMT-Strategy}}$. ",Trash or Treasure? An Interactive Dual-Stream Strategy for Single Image Reflection Separation " We study the boundedness of commutators of bi-parameter singular integrals between mixed spaces $$ [b,T]: L^{p_1}L^{p_2} \to L^{q_1}L^{q_2} $$ in the off-diagonal situation $q_i,p_i\in(1,\infty)$, where we also allow $q_i\not= p_i.$ Boundedness is fully characterized for several arrangements of the integrability exponents, with some open problems presented. ",Off-diagonal estimates for commutators of bi-parameter singular integrals " Point clouds obtained from photogrammetry are noisy and incomplete models of reality. We propose an evolutionary optimization methodology that is able to approximate the underlying object geometry on such point clouds. This approach assumes a priori knowledge on the 3D structure modeled and enables the identification of a collection of primitive shapes approximating the scene. Built-in mechanisms that enforce high shape diversity and adaptive population size make this method suitable to modeling both simple and complex scenes. We focus here on the case of cylinder approximations and we describe, test, and compare a set of mutation operators designed for optimal exploration of their search space. 
We assess the robustness and limitations of this algorithm through a series of synthetic examples, and we finally demonstrate its general applicability on two real-life cases in vegetation and industrial settings. ",Fitting 3D Shapes from Partial and Noisy Point Clouds with Evolutionary Computing " Let $M$ be a topological $G_2$-manifold. We prove that the space of infinitesimal associative deformations of a compact associative submanifold $Y$ with boundary in a coassociative submanifold $X$ is the solution space of an elliptic problem. For a connected boundary $\partial Y$ of genus $g$, the index is given by $\int_{\partial Y}c_1(\nu_X)+1-g$, where $\nu_X$ denotes the orthogonal complement of $T\partial Y$ in $TX_{|\partial Y}$ and $c_1(\nu_X)$ the first Chern class of $\nu_X$ with respect to its natural complex structure. Further, we exhibit explicit examples of non-trivial index. ",Deformations of associative submanifolds with boundary " If $G$ is a group, a virtual retract of $G$ is a subgroup which is a retract of a finite index subgroup. Most of the paper focuses on two group properties: property (LR), that all finitely generated subgroups are virtual retracts, and property (VRC), that all cyclic subgroups are virtual retracts. We study the permanence of these properties under commensurability, amalgams over retracts, graph products and wreath products. In particular, we show that (VRC) is stable under passing to finite index overgroups, while (LR) is not. The question whether all finitely generated virtually free groups satisfy (LR) motivates the remaining part of the paper, studying virtual free factors of such groups. We give a simple criterion characterizing when a finitely generated subgroup of a virtually free group is a free factor of a finite index subgroup. We apply this criterion to settle a conjecture of Brunner and Burns. 
",Virtual retraction properties in groups " We determine the 2-loop effective gauge coupling of QCD at high temperatures, defined as a matching coefficient appearing in the dimensionally reduced effective field theory. The result allows us to improve on one of the classic non-perturbative probes for the convergence of the weak-coupling expansion at high temperatures, the comparison of full and effective theory determinations of an observable called the spatial string tension. We find surprisingly good agreement almost down to the critical temperature of the deconfinement phase transition. We also determine one new contribution of order O(g^6T^4) to the pressure of hot QCD. ",Two-loop QCD gauge coupling at high temperatures " Anisotropy data analysis leaves a significant degeneracy between the primeval spectral index (n_s) and the cosmic opacity to CMB photons (\tau). Low--l polarization measurements, in principle, can remove it. We perform a likelihood analysis to see how cosmic variance possibly affects such a problem. We find that, for a sufficiently low noise level (\sigma_{pix}) and if \tau is not negligibly low, the degeneracy is greatly reduced, while the residual impact of cosmic variance on n_s and \tau determinations is under control. On the contrary, if \sigma_{pix} is too high, cosmic variance effects appear to be magnified. We apply the general results to specific experiments and find that, if favorable conditions occur, it is possible that a 2--\sigma detection of a lower limit on \tau is provided by the SPOrt experiment. Furthermore, if the PLANCK experiment measures polarization with the expected precision, the error on low--l harmonics is adequate to determine \tau, without significant magnification of the cosmic variance. This however indicates that high sensitivity might be more important than high resolution in \tau determinations. 
We also stress that a determination of \tau is critical for performing detailed analyses of the nature of dark energy and/or the presence of primeval gravitational waves. ",Cosmic opacity to CMB photons and polarization measurements " The Nielsen-Olesen flux tube in the SU(2) Yang-Mills-Higgs theory dressed with the color electric $E^a_{\rho,\phi}$ and magnetic $H^a_{\rho,\phi}$ fields is derived. As a next step, it is argued that this flux tube can be considered as a result of nonperturbative analytical calculations in the SU(3) quantum theory. ","Flux tube dressed with color electric $E^a_{\rho,\phi}$ and magnetic $H^a_{\rho,\phi}$ fields" " Identifying communities (or clusters), namely groups of nodes with comparatively strong internal connectivity, is a fundamental task for deeply understanding the structure and function of a network. Yet, there is a lack of formal criteria for defining communities and for testing their significance. We propose a sharp definition which is based on a significance threshold. By means of a lumped Markov chain model of a random walker, a quality measure called ""persistence probability"" is associated to a cluster. Then the cluster is defined as an ""$\alpha$-community"" if such a probability is not smaller than $\alpha$. Consistently, a partition composed of $\alpha$-communities is an ""$\alpha$-partition"". These definitions turn out to be very effective for finding and testing communities. If a set of candidate partitions is available, setting the desired $\alpha$-level allows one to immediately select the $\alpha$-partition with the finest decomposition. Simultaneously, the persistence probabilities quantify the significance of each single community. Given its ability in individually assessing the quality of each cluster, this approach can also disclose single well-defined communities even in networks which overall do not possess a definite clusterized structure. 
",Finding and testing network communities by lumped Markov chains " Under the assumptions that the violation of the distance-duality (DD) relation entirely arises from non-conservation of the photon number and that the absorption is frequency independent in the observed frequency range, we perform cosmological-model-independent tests for the cosmic opacity. The observational data include the largest Union2.1 SN Ia sample, which is taken for the observed $D_\mathrm{L}$, and galaxy cluster samples compiled by De Filippis {\it et al.} and Bonamente {\it et al.}, which are responsible for providing the observed $D_\mathrm{A}$. Two parameterizations, $\tau(z)=2\epsilon z$ and $\tau(z)=(1+z)^{2\epsilon}-1$, are adopted for the optical depth associated with the cosmic absorption. We find that an almost transparent universe is favored by the De Filippis {\it et al.} sample, but it is only marginally accommodated by the Bonamente {\it et al.} samples at the 95.4% confidence level (C. L.) (even at the 99.7% C. L. when the $r<100 \mathrm{kpc}$-cut spherical $\beta$ model is considered). Taking the possible cosmic absorption (in the 68.3% C. L. range) constrained from the model-independent tests into consideration, we correct the distance modulus of SNe Ia and then use them to study their cosmological implications. The constraints on the $\Lambda$CDM show that a decelerating expanding universe with $\Omega_\Lambda=0$ is only allowed at the 99.7% C. L. by observations when the Bonamente {\it et al.} sample is considered. Therefore, our analysis suggests that an accelerated cosmic expansion is still needed to account for the dimming of SNe and the standard cosmological scenario remains to be supported by current observations. 
",Cosmic opacity: cosmological-model-independent tests and their impacts on cosmic acceleration " Approximate Bayesian deep learning methods hold significant promise for addressing several issues that occur when deploying deep learning components in intelligent systems, including mitigating the occurrence of over-confident errors and providing enhanced robustness to out-of-distribution examples. However, the computational requirements of existing approximate Bayesian inference methods can make them ill-suited for deployment in intelligent IoT systems that include lower-powered edge devices. In this paper, we present a range of approximate Bayesian inference methods for supervised deep learning and highlight the challenges and opportunities when applying these methods on current edge hardware. We highlight several potential solutions to decreasing model storage requirements and improving computational scalability, including model pruning and distillation methods. ",Challenges and Opportunities in Approximate Bayesian Deep Learning for Intelligent IoT Systems " In this paper, we propose a transfer learning (TL)-enabled edge-CNN framework for 5G industrial edge networks with privacy-preserving characteristics. In particular, the edge server can use the existing image dataset to train the CNN in advance, which is further fine-tuned based on the limited datasets uploaded from the devices. With the aid of TL, the devices that are not participating in the training only need to fine-tune the trained edge-CNN model without training from scratch. Due to the energy budget of the devices and the limited communication bandwidth, a joint energy and latency problem is formulated, which is solved by decomposing the original problem into an uploading decision subproblem and a wireless bandwidth allocation subproblem. 
Experiments using ImageNet demonstrate that the proposed TL-enabled edge-CNN framework can achieve almost 85% of the baseline's prediction accuracy by uploading only about 1% of the model parameters, at an autoencoder compression ratio of 32. ",A Joint Energy and Latency Framework for Transfer Learning over 5G Industrial Edge Networks " We study the non-equilibrium quantum dynamics of the single-component scalar field theory in 1+1 space-time dimensions on the basis of the Kadanoff-Baym equation including the next-to-leading-order (NLO) skeleton diagrams. As an extension of the non-relativistic case, we derive a relativistic kinetic entropy at first order in the gradient expansion of the Kadanoff-Baym equations. The derived entropy satisfies the H theorem. Next we perform numerical simulations in spatially homogeneous configurations to investigate the thermalization properties of the system by evaluating the system entropy. We find that at later times the kinetic entropy increases, approaching the equilibrium value, although in the early stage the limited time interval invalidates its use. ",Entropy production in 2D $\lambda \phi^4$ theory in the Kadanoff-Baym approach This paper concerns the topology of isospectral real manifolds of certain Jacobi elements associated with real split semisimple Lie algebras. The manifolds are related to the compactified level sets of the generalized (nonperiodic) Toda lattice equations defined on the semisimple Lie algebras. We then give a cellular decomposition and the associated chain complex of the manifold by introducing colored Dynkin diagrams which parametrize the cells in the decomposition. We also discuss the Morse chain complex of the manifold. ,Topology of the iso-spectral real manifolds associated with the generalized Toda lattices on semisimple Lie algebras " We report detections of X-rays from HH 80 and HH 81 with the ACIS instrument on the Chandra X-ray Observatory. 
These are among the most luminous HH sources in the optical and they are now the most luminous known in X-rays. These X-rays arise from the strong shocks that occur when the southern extension of this bipolar outflow slams into the ambient material. There is a one-to-one correspondence between regions of high X-ray emission and high Halpha emission. The X-ray luminosities of HH 80 and HH 81 are 4.5 and 4.3 x 10^31 erg s^-1, respectively, assuming the measured low-energy absorption is not in the sources. The measured temperature of the HH plasma is not as large as that expected from the maximum velocities seen in the extended tails of the optical emission lines. Rather it is consistent with the ~10^6 K temperature of the 'narrow' core of the optical lines. There is no observed emission from HH 80 North, the northern extension of the bipolar flow, based upon a measurement of lower sensitivity. We imaged the central region of the bipolar flow revealing a complex of X-ray sources including one near, but not coincident with, the putative power source in the radio and infrared. This source, CXOPTM J181912.4-204733, has no counterparts at other wavelengths and is consistent in luminosity and spectrum with a massive star with A_V ~ 90 mag. It may contribute significantly to the power input to the complex. Alternatively, this emission might be extended X-rays from outflows close to the power source. We detect 94 X-ray sources overall in this area of star formation. 
Our main aim is to investigate their physical properties and their dependence on the emission line characteristics, to shed light on the relation between galaxies with Lyalpha emission and the general LBG population. The objects were selected from their continuum colors and then spectroscopically confirmed by the GOODS collaboration and other campaigns. From the spectra we derived the line flux and EW. We then used U-band to mid-IR photometry from GOODS-MUSIC to derive the physical properties of the galaxies, such as total stellar mass, age and so on, through standard SED fitting techniques. Although most galaxies are fit by young stellar populations, a small but non negligible fraction has SEDs that require a considerably older stellar component, up to 1 Gyr. There is no apparent relation between age and EW: some of the oldest galaxies have large EW, and should be also selected in narrow band surveys. Therefore not all Lyalpha emitters are primeval galaxies in the very early stages of formation, as is commonly assumed. We also find a large range of stellar populations, with masses from 5x10^8 Msol to 5x10^10 Msol and SFR from a few to 60 Msol/yr. Although there is no correlation between mass and EW, we find a significant lack of massive galaxies with high EW, which could be explained if the most massive galaxies were more dusty and/or contained more neutral gas than less massive objects. Finally we find that more than half of the galaxies contain small but non negligible amounts of dust: the mean E(B-V) and the EW are well correlated, although with a large scatter, as already found at lower redshift ",The physical properties of Lyalpha emitting galaxies: not just primeval galaxies? " This article is intended to provide a pedagogical account of issues related to, and recent work on, gravitational waves from coalescing compact binaries (composed of neutron stars and/or black holes). 
These waves are the most promising for kilometer-size interferometric detectors such as LIGO and VIRGO. Topics discussed include: interferometric detectors and their noise; coalescing compact binaries and their gravitational waveforms; the technique of matched filtering for signal detection and measurement; waveform calculations in post-Newtonian theory and in the black-hole perturbation approach; and the accuracy of the post-Newtonian expansion. ",Gravitational waves from coalescing compact binaries " The aim of this paper is to discuss the local convergence of a two-step Newton type method of convergence rate three for solving nonlinear equations in Banach spaces. It is assumed that the first-order derivative of the nonlinear operator satisfies the generalized Lipschitz (i.e., $L$-average) condition. Also, some results on the convergence of the same method in Banach spaces are established under the assumption that the derivative of the operators satisfies the radius or center Lipschitz condition with a weak $L$-average; in particular, it is assumed that $L$ is a positive integrable function but not necessarily non-decreasing. ",On the Local convergence of two-step Newton type Method in Banach Spaces under generalized Lipschitz Conditions " We investigate how the Fritzsch ansatz for the quark mass matrices can be modified in the least possible way to accommodate the observed large top quark mass and the measured values of the CKM elements. As one of the solutions, we find that the \{23\} and the \{32\} elements of the up quark mass matrix are unequal. The rest of the assumptions are the same as in the Fritzsch ansatz. In this formalism we have an extra parameter, i.e. the ratio of the \{23\} and the \{32\} elements, which gets fixed by the large top quark mass. The predicted values for $\frac{V_{ub}}{V_{cb}}$ and $\frac{V_{td}}{V_{ts}}$ from this new ansatz are in the correct experimental range even for smaller values of $\tan \beta$. 
Finally, we write down the $SO(10)$-motivated superpotential for these new mass matrices. ",A new ansatz: Fritzsch Mass Matrices with least modification " This paper describes an architecture for controlling non-player characters (NPCs) in the First Person Shooter (FPS) game Unreal Tournament 2004. Specifically, the DRE-Bot architecture is made up of three reinforcement learners, Danger, Replenish and Explore, which use the tabular Sarsa({\lambda}) algorithm. This algorithm enables the NPC to learn through trial and error, building up experience over time in an approach inspired by human learning. Experimentation is carried out to measure the performance of DRE-Bot when competing against fixed-strategy bots that ship with the game. The discount parameter, {\gamma}, and the trace parameter, {\lambda}, are also varied to see if their values have an effect on the performance. ",DRE-Bot: A Hierarchical First Person Shooter Bot Using Multiple Sarsa({\lambda}) Reinforcement Learners " Let $\overline{p}(n)$ denote the overpartition function. Liu and Zhang showed that $\overline{p}(a) \overline{p}(b)>\overline{p}(a+b)$ for all integers $a,b>1$ by using an analytic result of Engle. We offer in this paper a combinatorial proof of the Liu-Zhang inequality. More precisely, motivated by the polynomials $P_{n}(x)$, which generalize the $k$-colored partition function $p_{-k}(n)$, we introduce the polynomials $\overline{P}_{n}(x)$, which take the number of $k$-colored overpartitions of $n$ as their special values. By combining combinatorial and analytic approaches, we obtain that $\overline{P}_{a}(x) \overline{P}_{b}(x)>\overline{P}_{a+b}(x)$ for all positive integers $a,b$ and real numbers $x \ge 1$, except for $(a,b,x)=(1,1,1),(2,1,1),(1,2,1)$. ",Polynomization of the Liu-Zhang inequality for overpartition function " We investigate the zero temperature quantum phases of a Bose-Bose mixture on a triangular lattice using Bosonic Dynamical Mean Field Theory (BDMFT). 
We consider the case of total filling one, where geometric frustration arises for asymmetric hopping. We map out a rich ground state phase diagram including xy-ferromagnetic, spin-density wave, superfluid, and supersolid phases. In particular, we identify a stripe spin-density wave phase for highly asymmetric hopping. On top of the spin-density wave, we find that the system generically shows weak charge (particle) density wave order. ",Quantum phases of Bose-Bose mixtures on a triangular lattice " Adjoint functors between the categories of crossed modules of dialgebras and Leibniz algebras are constructed. The well-known relations between the categories of Lie, Leibniz, associative algebras and dialgebras are extended to the respective categories of crossed modules. ","More on crossed modules of Lie, Leibniz, associative and diassociative algebras" " Superconductors are of type I or II depending on whether they form an Abrikosov vortex lattice. Although bulk lead (Pb) is classified as a prototypical type-I superconductor, we observe single-flux-quantum and multiple-flux-quanta vortices in the intermediate state using mK scanning tunneling microscopy. We show that the winding number of individual vortices can be determined from the real-space wave function of their Caroli-de Gennes-Matricon bound states. This generalizes the index theorem put forward by Volovik for isotropic electronic states to realistic electronic structures. In addition, the bound states due to the two superconducting bands of Pb can be separately detected. This yields strong evidence for low inter-band coupling and an independent closure of the gaps inside vortices. ",Identification of multiple-flux-quanta vortices by core states in the two-band superconductor Pb " As COVID-19 has been spreading across the world since early 2020, a growing number of malicious campaigns are capitalizing on the topic of COVID-19. COVID-19-themed cryptocurrency scams are increasingly popular during the pandemic.
However, these newly emerging scams are poorly understood by our community. In this paper, we present the first measurement study of COVID-19-themed cryptocurrency scams. We first create a comprehensive taxonomy of COVID-19 scams by manually analyzing the existing scams reported by users from online resources. Then, we propose a hybrid approach to perform the investigation by: 1) collecting reported scams in the wild; and 2) detecting undisclosed ones based on information collected from suspicious entities (e.g., domains, tweets, etc.). We have collected 195 confirmed COVID-19 cryptocurrency scams in total, including 91 token scams, 19 giveaway scams, 9 blackmail scams, 14 crypto malware scams, 9 Ponzi scheme scams, and 53 donation scams. We then identified over 200 blockchain addresses associated with these scams, which led to at least 330K US dollars in losses from 6,329 victims. For each type of scam, we further investigated the tricks and social engineering techniques they used. To facilitate future research, we have released all the well-labelled scams to the research community. ",Don't Fish in Troubled Waters! Characterizing Coronavirus-themed Cryptocurrency Scams " Despite being similar in structure, functioning, and size, viral pathogens enjoy very different, mostly well-defined ways of life. They occupy their hosts for a few days (influenza), for a few weeks (measles), or even lifelong (HCV), which manifests in acute or chronic infections. The various transmission routes (airborne, via direct contact, etc.), degrees of infectiousness (referring to the load required for transmission), antigenic variation/immune escape and virulence define further pathogenic lifestyles. To survive, pathogens must infect new hosts; the success determines their fitness. Infection happens with a certain likelihood during contact of hosts, where contact can also be mediated by vectors.
Besides structural aspects of the host-contact network, three parameters/concepts appear to be key: the contact rate and the infectiousness during contact, which encode the mode of transmission, and, third, the immunity of susceptible hosts. From here, what can be concluded about the evolutionary strategies of viral pathogens? This is the biological question addressed in this paper. The answer extends earlier results (Lange & Ferguson 2009, PLoS Comput Biol 5 (10): e1000536) and makes explicit connection to another basic work on the evolution of pathogens (Grenfell et al. 2004, Science 303: 327-332). A mathematical framework is presented that models intra- and inter-host dynamics in a minimalistic but unified fashion covering a broad spectrum of viral pathogens, including those that cause flu-like infections, childhood diseases, and sexually transmitted infections. These pathogens turn out to be local maxima of the fitness landscape. The models involve differential- and integral equations, agent-based simulation, networks, and probability. ",A mathematical framework for predicting lifestyles of viral pathogens " We establish, in spacetime dimensions $n\geq 3$, the nonlinear stability in the contracting direction of Friedmann-Lema\^itre-Robertson-Walker (FLRW) solutions to the Einstein-Euler-scalar field equations with linear equations of state $P=c_s^2 \rho$ for sound speeds $c_s$ satisfying $1/(n-1)d$ associated with a Hermite subdivision operator of order $d$ does not imply that the spectral condition of order $\ell$ is satisfied, while it is known that these two concepts are equivalent in the case $\ell=d$. ",A note on spectral properties of Hermite subdivision operators " We consider discrete Poisson interface problems resulting from linear unfitted finite elements, also called cut finite elements (CutFEM). Three of these unfitted finite element methods known from the literature are studied.
All three methods rely on Nitsche's method to incorporate the interface conditions. The main topic of the paper is the development of a multigrid method, based on a novel prolongation operator for the unfitted finite element space and an interface smoother that is designed to yield robustness for large jumps in the diffusion coefficients. Numerical results are presented which illustrate the efficiency of this multigrid method and demonstrate its robustness properties with respect to variation of the mesh size, location of the interface and contrast in the diffusion coefficients. ",A Multigrid Method for Unfitted Finite Element Discretizations of Elliptic Interface Problems " Given two minimal surfaces embedded in $\S3$ of genus $g$, we prove the existence of a sequence of non-congruent compact minimal surfaces embedded in $\S3$ of genus $g$ that converges in $C^{2,\alpha}$ to a compact embedded minimal surface provided some conditions are satisfied. These conditions also imply that, if either of the two surfaces is embedded by the first eigenvalue, so is the other. ",Convergent sequences of closed minimal surfaces embedded in $\S3$ " Considerable progress has recently been made in the development of techniques to exactly determine two-point resistances in networks of various topologies. In particular, two types of method have emerged. One is based on potentials and the evaluation of eigenvalues and eigenvectors of the Laplacian matrix associated with the network or its minors. The second method is based on a recurrence relation associated with the distribution of currents in the network. Here, these methods are compared and used to determine the resistance distances between any two nodes of a network with the topology of a hammock.
",Comparison of methods to determine point-to-point resistance in nearly rectangular networks with application to a hammock network " We present a new theoretical approach to the kinetics of micelle formation in surfactant solutions, in which the various stages of aggregation are treated as constrained paths on a single free-energy landscape. Three stages of well-separated time scales are distinguished. The first and longest stage involves homogeneous nucleation of micelles, for which we derive the size of the critical nuclei, their concentration, and the nucleation rate. Subsequently, a much faster growth stage takes place, which is found to be diffusion-limited for surfactant concentrations slightly above the critical micellar concentration ({\it cmc}), and either diffusion-limited or kinetically limited for higher concentrations. The time evolution of the growth is derived for both cases. At the end of the growth stage the micelle size may be either larger or smaller than its equilibrium value, depending on concentration. A final stage of equilibration follows, during which the micelles relax to their equilibrium size through fission or fusion. Both cases of fixed surfactant concentration (closed system) and contact with a reservoir of surfactant monomers (open system) are addressed and found to exhibit very different kinetics. In particular, we find that micelle formation in an open system should be kinetically suppressed over macroscopic times and involve two stages of micelle nucleation rather than one. ",Kinetics of Surfactant Micellization: a Free Energy Approach " We present a numerical scheme for the solution of nonlinear mixed-dimensional PDEs describing coupled processes in embedded tubular network system in exchange with a bulk domain. Such problems arise in various biological and technical applications such as in the modeling of root-water uptake, heat exchangers, or geothermal wells. 
The nonlinearity appears in the form of solution-dependent parameters such as pressure-dependent permeability or temperature-dependent thermal conductivity. We derive and analyse a numerical scheme based on distributing the bulk-network coupling source term by a smoothing kernel with local support. By the use of local analytical solutions, interface unknowns and fluxes at the bulk-network interface can be accurately reconstructed from coarsely resolved numerical solutions in the bulk domain. Numerical examples give confidence in the robustness of the method and show the results in comparison to previously published methods. The new method outperforms these existing methods in accuracy and efficiency. In a root water uptake scenario, we accurately estimate the transpiration rate using only a few thousand 3D mesh cells and a structured cube grid, whereas other state-of-the-art numerical schemes require millions of cells and local grid refinement to reach comparable accuracy. ",Nonlinear mixed-dimension model for embedded tubular networks with application to root water uptake " In this article we present the cut Fock space approach to the D=d+1=2 Supersymmetric Yang-Mills Quantum Mechanics (SYMQM). We start by briefly introducing the main features of the framework. We concentrate on those properties of the method which make it a convenient setup not only for numerical calculations but also for analytic computations. In the main part of the article a sample of results is discussed, namely, the analytic and numerical analysis of the D=2 SYMQM systems with SU(2) and SU(3) gauge symmetry. ",Exact solutions to D=2 Supersymmetric Yang-Mills Quantum Mechanics with SU(3) gauge group " We present an open-source accessory for the NAO robot, which makes it possible to test computationally demanding algorithms on an external platform while preserving the robot's autonomy and mobility.
The platform has the form of a backpack, which can be 3D printed and replicated, and holds an ODROID XU4 board to process algorithms externally with ROS compatibility. We also provide a software bridge between the B-Human framework and ROS to provide access to the robot's sensors in close to real time. We tested the platform in several robotics applications such as data logging, visual SLAM, and robot vision with deep learning techniques. The CAD model, hardware specifications and software are available online for the benefit of the community: https://github.com/uchile-robotics/nao-backpack ",The NAO Backpack: An Open-hardware Add-on for Fast Software Development with the NAO Robot " We give the definition of a kind of building I for a symmetrizable Kac-Moody group over a field K endowed with a discrete valuation and with a residue field containing C. Due to some bad properties, we call this I a hovel. Nevertheless I has some good properties, for example the existence of retractions with center a sector-germ. This enables us to generalize many results proved in the semi-simple case by S. Gaussent and P. Littelmann [Duke Math. J. 127 (2005), 35-88]. In particular, if K= C((t)), the geodesic segments in I, with a given special vertex as end point and a good image under some retraction, are parametrized by a Zariski open subset P of C^N. This dimension N is maximum when this image is an LS path and then P is closely related to some Mirkovic-Vilonen cycle. ","Kac-Moody groups, hovels and Littelmann's paths" " A locally truncated geometry with diagram of type affine E7 is studied. One considers a parapolar space, locally of type A_{7,4}, which is subject to an extra axiom. A covering of this space is constructed; it is proved that this covering space is a rank 6, residually connected, locally truncated diagram geometry which is a homomorphic image of a truncated building of affine type E7.
Consequently, the initial parapolar space is also a homomorphic image of a truncated building. ",On a space related to the affine building of type E7 " Consider the problem of covertly controlling a linear system. In this problem, Alice desires to control (stabilize or change the behavior of) a linear system, while keeping an observer, Willie, unable to decide if the system is indeed being controlled or not. We formally define the problem, under a model where Willie can only observe the system's output. Focusing on AR(1) systems, we show that when Willie observes the system's output through a clean channel, an inherently unstable linear system cannot be covertly stabilized. However, an inherently stable linear system can be covertly controlled, in the sense of covertly changing its parameter or resetting its memory. Moreover, we give positive and negative results for two important controllers: a minimal-information controller, where Alice is allowed to use only $1$ bit per sample, and a maximal-information controller, where Alice is allowed to view the real-valued output. Unlike covert communication, where the trade-off is between rate and covertness, the results reveal an interesting \emph{three-fold} trade-off in covert control: the amount of information used by the controller, control performance and covertness. ",Covertly Controlling a Linear System " Leibniz algebras are a non-anticommutative version of Lie algebras. They play an important role in different areas of mathematics and physics and have attracted much attention over the last thirty years. In this paper we investigate whether conditions such as being a Lie algebra, cyclic, simple, semisimple, solvable, supersolvable or nilpotent in such an algebra are preserved by lattice isomorphisms. ",Lattice isomorphisms of Leibniz algebras " We study two polynomial counting questions in arithmetic statistics via a combination of Fourier analytic and arithmetic methods.
First, we obtain new quantitative forms of Hilbert's Irreducibility Theorem for degree $n$ polynomials $f$ with $\mathrm{Gal}(f) \subseteq A_n$. We study this both for monic polynomials and non-monic polynomials. Second, we study lower bounds on the number of degree $n$ monic polynomials with almost prime discriminants, as well as the closely related problem of lower bounds on the number of degree $n$ number fields with almost prime discriminants. ",Quantitative Hilbert irreducibility and almost prime values of polynomial discriminants " The photoproduction of $\omega$ mesons off the proton has been studied in the reaction $\gamma p\to p\,\omega$ using the CEBAF Large Acceptance Spectrometer (CLAS) and the frozen-spin target (FROST) in Hall B at the Thomas Jefferson National Accelerator Facility. For the first time, the target asymmetry, $T$, has been measured in photoproduction from the decay $\omega\to\pi^+\pi^-\pi^0$, using a transversely-polarized target with energies ranging from just above the reaction threshold up to 2.8 GeV. Significant non-zero values are observed for these asymmetries, reaching about 30-40% in the third-resonance region. New measurements for the photon-beam asymmetry, $\Sigma$, are also presented, which agree well with previous CLAS results and extend the world database up to 2.1 GeV. These data and additional $\omega$-photoproduction observables from CLAS were included in a partial-wave analysis within the Bonn-Gatchina framework. Significant contributions from $s$-channel resonance production were found in addition to $t$-channel exchange processes. ",Measurement of the beam asymmetry $\Sigma$ and the target asymmetry $T$ in the photoproduction of $\omega$ mesons off the proton using CLAS at Jefferson Laboratory " Amorphous solids yield at a critical value of the strain (in strain controlled experiments); for larger strains the average stress can no longer increase - the system displays an elasto-plastic steady state. 
A long-standing riddle in the materials community is what distinguishes the microscopic states of the material before and after yield. Explanations in the literature are material-specific, but the universality of the phenomenon begs a universal answer. We argue here that there is no fundamental difference in the states of matter before and after yield, but the yield is a bona fide first-order phase transition from a highly restricted set of possible configurations residing in a small region of phase space to a vastly rich set of configurations that includes many marginally stable ones. To show this we employ an order parameter of universal applicability, independent of the microscopic interactions, that is successful in quantifying the transition in an unambiguous manner. ",Mechanical Yield in Amorphous Solids: a First-Order Phase Transition " Recently a new class of quantum magnets, the so-called breathing pyrochlore spin systems, has attracted much attention due to their potential to host exotic emergent phenomena. Here, we present magnetometry, heat capacity, thermal conductivity, muon-spin relaxation, and polarized inelastic neutron scattering measurements performed on high-quality single-crystal samples of the breathing pyrochlore compound Ba3Yb2Zn5O11. We interpret these results using a simplified toy model and provide new insight into the low-energy physics of this system beyond the single-tetrahedron physics proposed previously. ",Beyond Single Tetrahedron Physics of Breathing Pyrochlore Compound Ba3Yb2Zn5O11 " In this paper we prove Homological Projective Duality for crepant categorical resolutions of several classes of linear determinantal varieties. By this we mean varieties that are cut out by the minors of a given rank of an n x m matrix of linear forms on a given projective space. As applications, we obtain pairs of derived-equivalent Calabi-Yau manifolds, and address a question by A.
Bondal asking whether the derived category of any smooth projective variety can be fully faithfully embedded in the derived category of a smooth Fano variety. Moreover, we discuss the relation between rationality and categorical representability in codimension two for determinantal varieties. ",Homological Projective Duality for Determinantal Varieties " In this study, we tested the robustness of three communication networks extracted from the online forums included in the intranet platforms of three large companies. For each company we analyzed the communication among employees both in terms of network structure and content (language used). Over a period of eight months, we analyzed more than 52,000 messages posted by approximately 12,000 employees. Specifically, we tested the network robustness and the stability of a set of structural and semantic metrics, while applying several different node removal strategies. We removed the forum moderators, the spammers, the overly connected nodes and the nodes lying at the network periphery, also testing different combinations of these selections. Results indicate that removing spammers and very peripheral nodes can be a relatively low-impact strategy in this context; accordingly, it could be used to clean the noise generated by these types of social actors and to reduce the computational complexity of the analysis. On the other hand, the removal of moderators seems to have a significant impact on the network connectivity and the shared content. The most affected variables are closeness centrality and contribution index. We also found that the removal of overly connected nodes can significantly change the network structure. Lastly, we compared the behavior of moderators with that of other users, finding distinctive characteristics by which moderators can be identified when their list is unknown.
Our findings can help online community managers to understand the role of moderators within intranet forums and can be useful for social network analysts who are interested in evaluating the effects of graph simplification techniques. ",Robustness and stability of enterprise intranet social networks: The impact of moderators " Distributed machine learning (DML) techniques, such as federated learning, partitioned learning, and distributed reinforcement learning, have been increasingly applied to wireless communications. This is due to improved capabilities of terminal devices, explosively growing data volume, congestion in the radio interfaces, and increasing concerns about data privacy. The unique features of wireless systems, such as large scale, geographically dispersed deployment, user mobility, and massive amounts of data, give rise to new challenges in the design of DML techniques. There is a clear gap in the existing literature in that the DML techniques are yet to be systematically reviewed for their applicability to wireless systems. This survey bridges the gap by providing a contemporary and comprehensive survey of DML techniques with a focus on wireless networks. Specifically, we review the latest applications of DML in power control, spectrum management, user association, and edge cloud computing. The optimality, scalability, convergence rate, computation cost, and communication overhead of DML are analyzed. We also discuss the potential adversarial attacks faced by DML applications, and describe state-of-the-art countermeasures to preserve privacy and security. Last but not least, we point out a number of key issues yet to be addressed, and collate potentially interesting and challenging topics for future research. ","Distributed Machine Learning for Wireless Communication Networks: Techniques, Architectures, and Applications" We demonstrate that commuting quasilinear systems of Jordan block type are parametrised by solutions of the modified KP hierarchy.
Systems of this form naturally occur as hydrodynamic reductions of multi-dimensional linearly degenerate dispersionless integrable PDEs. ,Quasilinear systems of Jordan block type and the mKP hierarchy " The recent experimental data obtained by the OBELIX group on the total $\bar{p}p$ annihilation cross section are analysed; the low-energy spin-averaged parameters of the $\bar{p}p$ scattering amplitude (the imaginary parts of the S-wave scattering length and P-wave scattering volume) are extracted from the data. Their values are found to be equal to $\mathrm{Im}\, a_{sc} = -0.69 \pm 0.01\,\mathrm{(stat)} \pm 0.03\,\mathrm{(sys)}~\mathrm{fm}$, $\mathrm{Im}\, A_{sc} = -0.76 \pm 0.05\,\mathrm{(stat)} \pm 0.04\,\mathrm{(sys)}~\mathrm{fm}^3$. The results are in very good agreement with existing atomic data. ",\bar{p}p low energy parameters from annihilation cross section data " High-Performance Big Data Analytics (HPDA) applications are characterized by huge volumes of distributed and heterogeneous data that require efficient computation for knowledge extraction and decision making. Designers are moving towards a tight integration of computing systems combining HPC, Cloud, and IoT solutions with artificial intelligence (AI). Matching the application and data requirements with the characteristics of the underlying hardware is a key element to improve the predictions thanks to high performance and better use of resources. We present EVEREST, a novel H2020 project started on October 1st, 2020, that aims at developing a holistic environment for the co-design of HPDA applications on heterogeneous, distributed, and secure platforms. EVEREST focuses on programmability issues through a data-driven design approach, the use of hardware-accelerated AI, and efficient runtime monitoring with virtualization support. In the different stages, EVEREST combines state-of-the-art programming models, emerging communication standards, and novel domain-specific extensions. We describe the EVEREST approach and the use cases that drive our research.
",EVEREST: A design environment for extreme-scale big data analytics on heterogeneous platforms " A central question in algorithmic game theory is to measure the inefficiency (ratio of costs) of Nash equilibria (NE) with respect to socially optimal solutions. The two established metrics used for this purpose are price of anarchy (POA) and price of stability (POS), which respectively provide upper and lower bounds on this ratio. A deficiency of these metrics, however, is that they are purely existential and shed no light on which of the equilibrium states are reachable in an actual game, i.e., via natural game dynamics. This is particularly striking if these metrics differ significantly, such as in network design games where the exponential gap between the best and worst NE states originally prompted the notion of POS in game theory (Anshelevich et al., FOCS 2002). In this paper, we make progress toward bridging this gap by studying network design games under natural game dynamics. First we show that in a completely decentralized setting, where agents arrive, depart, and make improving moves in an arbitrary order, the inefficiency of NE attained can be polynomially large. This implies that the game designer must have some control over the interleaving of these events in order to force the game to attain efficient NE. We complement our negative result by showing that if the game designer is allowed to execute a sequence of improving moves to create an equilibrium state after every batch of agent arrivals or departures, then the resulting equilibrium states attained by the game are exponentially more efficient, i.e., the ratio of costs compared to the optimum is only logarithmic. 
Overall, our two results establish that in network games, the efficiency of equilibrium states is dictated by whether agents are allowed to join or leave the game in arbitrary states, an observation that might be useful in analyzing the dynamics of other classes of games with divergent POS and POA bounds. ",Timing Matters: Online Dynamics in Broadcast Games " The random sequential adsorption (RSA) model is modified to describe damage and crack accumulation. The exclusion for object deposition (for damaged region formation) is not for the whole object, as in the standard RSA, but only for the initial point (or higher-dimensional defect) from which the damaged region or crack initiates. The one-dimensional variant of the model is solved exactly. ",Random sequential adsorption model of damage and crack accumulation: Exact one-dimensional results " The results of a survey of 63 galactic star-forming regions in the 6_K-5_K and 5_K-4_K methyl acetylene lines at 102 and 85 GHz, respectively, are presented. Forty-three sources were detected at 102 GHz, and twenty-five at 85 GHz. Emission was detected towards molecular clouds with kinetic temperatures 20-60 K (so-called ``warm clouds''). The CH3CCH abundances in these clouds were found to be of order several X 10^(-9). Five mapped sources were analyzed using the maximum entropy method. The sizes of the mapped clouds range from 0.1 to 1.7 pc, virial masses from 90 to 6200 Msun, and densities from 6 X 10^4 to 6 X 10^5 cm^(-3). The CH3CCH sources spatially coincide with the CO and CS sources. Chemical evolution simulations showed that the typical methyl acetylene abundance in the observed clouds corresponds to an age of ~ 6 X 10^4 years.
",Parameters of Warm Molecular Clouds from Methyl Acetylene Observations " We study experimentally and theoretically the effects of disorder, nonlinear screening, and magnetism in semiconductor heterostructures containing a $\delta$-layer of Mn, where the charge carriers are confined within a quantum well and hence both ferromagnetism and transport are two-dimensional (2D) and differ qualitatively from their bulk counterparts. Anomalies in the electrical resistance observed in both metallic and insulating structures can be interpreted as a signature of significant ferromagnetic correlations. The insulating samples turn out to be the most interesting as they can give us valuable insights into the mechanisms of ferromagnetism in these heterostructures. At low charge carrier densities, we show how the interplay of disorder and nonlinear screening can result in the organization of the carriers in the 2D transport channel into charge droplets separated by insulating barriers. Based on such a droplet picture and including the effect of magnetic correlations, we analyze the transport properties of this set of droplets, compare it with experimental data, and find a good agreement between the model calculations and experiment. Our analysis shows that the peak or shoulder-like features observed in the temperature dependence of the resistance of 2D heterostructures $\delta$-doped by Mn lie significantly below the Curie temperature $T_{C}$, unlike in the three-dimensional case, where they lie above and close to $T_{C}$. We also discuss the consequences of our description for understanding the mechanisms of ferromagnetism in the heterostructures under study. ",Charge inhomogeneities and transport in semiconductor heterostructures with a manganese $\delta$-layer " Let G be a random subgraph of the n-cube where each edge appears randomly and independently with probability p.
We prove that the largest eigenvalue of the adjacency matrix of G is almost surely \lambda_1(G)= (1+o(1)) max(\Delta^{1/2}(G),np), where \Delta(G) is the maximum degree of G and the o(1) term tends to zero as max (\Delta^{1/2}(G), np) tends to infinity. ",On the Largest Eigenvalue of a Random Subgraph of the Hypercube This is a brief account of the approach to superbranes based upon the concept of Partial Breaking of Global Supersymmetry (PBGS). ,Diverse PBGS Patterns and Superbranes " The stability number alpha(G) of a graph G is the cardinality of a maximum stable set in G; xi(G) denotes the size of core(G), where core(G) is the intersection of all maximum stable sets of G. In this paper we prove that for a graph G without isolated vertices, the following assertions are true: (i) if xi(G) < 2, then G is quasi-regularizable; (ii) if G is of order n and alpha(G) > (n+k-1)/2, for some k > 0, then xi(G) > k, and xi(G) > k+1, whenever n+k-1 is even. The last finding is a strengthening of a result of Hammer, Hansen, and Simeone, which states that alpha(G) > n/2 implies xi(G) > 0. G is a Koenig-Egervary graph if n equals the sum of its stability number and the cardinality of a maximum matching. For Koenig-Egervary graphs, we prove that alpha(G) > n/2 holds if and only if xi(G) is greater than the size of the neighborhood of core(G). Moreover, for bipartite graphs without isolated vertices, alpha(G) > n/2 is equivalent to xi(G) > 1. We also show that Hall's marriage theorem is valid for Koenig-Egervary graphs, and it is sufficient to check Hall's condition only for one specific stable set, namely, for core(G). ",Combinatorial Properties of the Family of Maximum Stable Sets of a Graph We give a procedure to determine equations for the bielliptic modular curves $X_0(N)$, and present equations for the 30 values of $N$ such that $X_0(N)$ is bielliptic and nonhyperelliptic.
,Equations of Bielliptic Modular Curves " A computable structure $\mathcal{A}$ has degree of categoricity $\mathbf{d}$ if $\mathbf{d}$ is exactly the degree of difficulty of computing isomorphisms between isomorphic computable copies of $\mathcal{A}$. Fokina, Kalimullin, and Miller showed that every degree d.c.e. in and above $\mathbf{0}^{(n)}$, for any $n < \omega$, and also the degree $\mathbf{0}^{(\omega)}$, are degrees of categoricity. Later, Csima, Franklin, and Shore showed that every degree $\mathbf{0}^{(\alpha)}$ for any computable ordinal $\alpha$, and every degree d.c.e. in and above $\mathbf{0}^{(\alpha)}$ for any successor ordinal $\alpha$, is a degree of categoricity. We show that every degree c.e. in and above $\mathbf{0}^{(\alpha)}$, for $\alpha$ a limit ordinal, is a degree of categoricity. We also show that every degree c.e. in and above $\mathbf{0}^{(\omega)}$ is the degree of categoricity of a prime model, making progress towards a question of Bazhenov and Marchuk. ",Degrees of Categoricity Above Limit Ordinals " The objective of changepoint detection is to discover the abrupt property changes that lie behind time-series data. In this paper, we first summarize the definition and deeper implications of changepoint detection. We then describe traditional and some alternative model-based changepoint detection algorithms. Finally, we go a bit further into the theory and look into future research directions. ",A Review of Changepoint Detection Models " Solutions to scalar theories with derivative self-couplings often have regions where non-linearities are important. Given a classical source, there is usually a region, demarcated by the Vainshtein radius, inside of which the classical non-linearities are dominant, while quantum effects are still negligible. If perturbation theory is used to find such solutions, the expansion generally breaks down as the Vainshtein radius is approached from the outside.
Here we show that it is possible, by integrating in certain auxiliary fields, to reformulate these theories in such a way that non-linearities become small inside the Vainshtein radius, and large outside it. This provides a complementary, or classically dual, description of the same theory -- one in which non-perturbative regions become accessible perturbatively. We consider a few examples of classical solutions with various symmetries, and find that in all the cases the dual formulation makes it rather simple to study regimes in which the original perturbation theory fails to work. As an illustration, we reproduce by perturbative calculations some of the already known non-perturbative results, for a point-like source, cosmic string, and domain wall, and derive a new one. The dual formulation may be useful for developing the PPN formalism in the theories of modified gravity that give rise to such scalar theories. ",Classical Duals of Derivatively Self-Coupled Theories " In this paper we consider the numerical approximation of nonlocal integro-differential parabolic equations via neural networks. These equations appear in many recent applications, including finance, biology and others, and have been recently studied in great generality starting from the work of Caffarelli and Silvestre. Based on the work of Hure, Pham and Warin, we generalize their Euler scheme and consistency result for Backward Forward Stochastic Differential Equations to the nonlocal case. We rely on L\`evy processes and a new neural network approximation of the nonlocal part to overcome the lack of a suitably good approximation of the nonlocal part of the solution. ",Deep Learning Schemes For Parabolic Nonlocal Integro-Differential Equations " We present a simple model of an overdamped particle moving on a two dimensional symmetric periodic substrate with a dc drive in the longitudinal direction and additional ac drives in both the longitudinal and transverse directions.
For certain regimes we find that a finite longitudinal dc force produces a net dc response only in the transverse direction, which we term absolute transverse mobility. Additionally we find regimes exhibiting a ratchet effect in the absence of an applied dc drive. ",Absolute Transverse Mobility and Ratchet Effect on Periodic 2D Symmetric Substrates " We consider the Ising model with invisible states on scale-free networks. Our goal is to investigate the interplay between the entropic and topological influences on a phase transition. The former is manifest through the number of invisible states $r$, while the latter is controlled by the network node-degree distribution decay exponent $\lambda$. We show that the phase diagram, in this case, is characterised by two marginal values $r_{c1}(\lambda)$ and $r_{c2}(\lambda)$, which separate regions with different critical behaviours. Below the $r_{c1}(\lambda)$ line the system undergoes only a second-order phase transition; above the $r_{c2}(\lambda)$ line, only a first-order phase transition occurs; and between the lines, both of these phase transitions occur at different temperatures. This behaviour differs from that observed on the lattice, where the Ising model with invisible states is characterised by only one marginal value $r_{c}\simeq 3.62$ separating the first- and second-order regimes.
Additional simulations in evolving magnetic fields or static field configurations provide evidence that the scattering anisotropy in imbalanced turbulence is not primarily due to coherence with propagating Alfven waves, but is instead an effect of the spatial structure of electric fields in cross-helical MHD turbulence. ",Cosmic-ray pitch-angle scattering in imbalanced MHD turbulence simulations " In this work we theoretically study properties of electric current driven by a temperature gradient through a quantum dot/molecule coupled to the source and drain charge reservoirs. We analyze the effect of Coulomb interactions between electrons on the dot/molecule and of the thermal environment on the thermocurrent. The environment is simulated by two thermal baths associated with the reservoirs and kept at different temperatures. The scattering matrix formalism is employed to compute electron transmission through the system. This approach is further developed and combined with the nonequilibrium Green's function formalism, so that scattering probabilities are expressed in terms of relevant energies including the thermal energy, strengths of coupling between the dot/molecule and charge reservoirs and characteristic energies of electron-phonon interactions. It is shown that one may bring the considered system into a regime favorable for heat-to-electric energy conversion by varying the applied bias and gate voltages. ",Scattering theory of thermocurrent in quantum dots and molecules " We explore the possibility of generalized electromagnetism on flat spacetime. For a single copy of $U(1)$ gauge theory, we show that the Galileon-type generalization of electromagnetism is forbidden. Given that the equations of motion for the vector field are gauge invariant and Lorentz invariant, follow from an action and contain no more than second derivatives of $A_\mu$, the equations of motion are at most linear with respect to the second derivatives of $A_\mu$.
",A no-go theorem for generalized vector Galileons on flat spacetime " One of the great challenges of quantum foundations and quantum information theory is the characterisation of the relationship between entanglement and the violation of Bell inequalities. It is well known that in specific scenarios these two can behave differently, from local hidden-variable models for entangled quantum states in restricted Bell scenarios, to maximal violations of Bell inequalities not concurring with maximal entanglement. In this paper we put forward a simple proof that there exist quantum states, whose entanglement content, as measured by the Schmidt number, cannot be device-independently certified for all possible sequential measurements on any number of copies. While the bigger question: \textit{can the presence of entanglement always be device-independently certified?} remains open, we provide proof that quantifying entanglement device-independently is not always possible, even beyond the standard Bell scenario. ",The Schmidt number of a quantum state cannot always be device-independently certified " In this note, we will give proofs of two congruences involving broken 3-diamond partitions and broken 5-diamond partitions which were conjectured by Peter Paule and Silviu Radu. ",Two congruences involving Andrews-Paule's broken 3-diamond partitions and 5-diamond partitions " We examine the predictions of the core accretion - gas capture model concerning the efficiency of planet formation around stars with various masses. First, we follow the evolution of gas and solids from the moment when all solids are in the form of small grains to the stage when most of them are in the form of planetesimals. We show that the surface density of the planetesimal swarm tends to be higher around less massive stars. Then, we derive the minimum surface density of the planetesimal swarm required for the formation of a giant planet both in a numerical and in an approximate analytical approach. 
We combine these results by calculating a set of representative disk models characterized by different masses, sizes, and metallicities, and by estimating their capability of forming giant planets. Our results show that the set of protoplanetary disks capable of giant planet formation is larger for less massive stars. Provided that the distribution of initial disk parameters does not depend too strongly on the mass of the central star, we predict that the percentage of stars with giant planets should increase with decreasing stellar mass. Furthermore, we identify the radial redistribution of solids during the formation of planetesimal swarms as the key element in explaining these effects. ",Formation of giant planets around stars with various masses " The dynamics of infinite, asymptotically uniform, distributions of self-gravitating particles in one spatial dimension provides a simple toy model for the analogous three dimensional problem. We focus here on a limitation of such models as treated so far in the literature: the force, as it has been specified, is well defined in infinite point distributions only if there is a centre of symmetry (i.e. the definition requires explicitly the breaking of statistical translational invariance). The problem arises because naive background subtraction (due to expansion, or by ""Jeans' swindle"" for the static case), applied as in three dimensions, leaves an unregulated contribution to the force due to surface mass fluctuations. Following a discussion by Kiessling, we show that the problem may be resolved by defining the force in infinite point distributions as the limit of an exponentially screened pair interaction. We show that this prescription gives a well defined (finite) force acting on particles in a class of perturbed infinite lattices, which are the point processes relevant to cosmological N-body simulations. 
For identical particles the dynamics of the simplest toy model is equivalent to that of an infinite set of points with inverted harmonic oscillator potentials which bounce elastically when they collide. We discuss previous results in the literature, and present new results for the specific case of this simplest (static) model starting from ""shuffled lattice"" initial conditions. These show qualitative properties (notably its ""self-similarity"") of the evolution very similar to those in the analogous simulations in three dimensions, which in turn resemble those in the expanding universe. ",1-d gravity in infinite point distributions " A modified non-linear time series analysis technique, which computes the correlation dimension $D_2$, is used to analyze the X-ray light curves of the black hole system GRS 1915+105 in all twelve temporal classes. For four of these temporal classes $D_2 $ saturates to $\approx 4-5$ which indicates that the underlying dynamical mechanism is a low dimensional chaotic system. Of the other eight classes, three show stochastic behavior while five show deviation from randomness. The light curves for four classes which depict chaotic behavior have the smallest ratio of the expected Poisson noise to the variability ($ < 0.05$) while those for the three classes which depict stochastic behavior is the highest ($ > 0.2$). This suggests that the temporal behavior of the black hole system is governed by a low dimensional chaotic system, whose nature is detectable only when the Poisson fluctuations are much smaller than the variability. ",The chaotic behavior of the black hole system GRS 1915+105 " This research aims to develop a new approach toward a consistent coupling of electromagnetic and gravitational fields by using an electron that couples with a weak gravitational potential by means of its electromagnetic field. 
We find the value of the tiny coupling constant of gravity with electromagnetic fields, which depends on the speed of light and a universal minimum speed that represents the lowest limit of speed for any particle. Such a minimum speed, unattainable by particles, represents a preferred reference frame associated with a background field that breaks the Lorentz symmetry. The metric of the flat spacetime shall include the presence of a uniform vacuum energy density, which leads to a negative pressure at cosmological scales, i.e., the cosmological anti-gravity. The tiny values of the cosmological constant and the vacuum energy density will be successfully obtained in agreement with the observational data. ",On the electrodynamics of moving particles in a quasi flat spacetime with Lorentz violation and its cosmological implications " In this paper we present an abstraction-refinement approach to Satisfiability Modulo the theory of transcendental functions, such as exponentiation and trigonometric functions. The transcendental functions are represented as uninterpreted in the abstract space, which is described in terms of the combined theory of linear arithmetic on the rationals with uninterpreted functions, and are incrementally axiomatized by means of upper- and lower-bounding piecewise-linear functions. Suitable numerical techniques are used to ensure that the abstractions of the transcendental functions are sound even in presence of irrationals. Our experimental evaluation on benchmarks from verification and mathematics demonstrates the potential of our approach, showing that it compares favorably with delta-satisfiability /interval propagation and methods based on theorem proving. ",Satisfiability Modulo Transcendental Functions via Incremental Linearization " Representation learning approaches require a massive amount of discriminative training data, which is unavailable in many scenarios, such as healthcare, smart city, education, etc. 
In practice, people resort to crowdsourcing to obtain annotated labels. However, due to issues such as data privacy, budget limitations, and a shortage of domain-specific annotators, the number of crowdsourced labels is still very limited. Moreover, because of annotators' diverse expertise, crowdsourced labels are often inconsistent. Thus, directly applying existing supervised representation learning (SRL) algorithms may easily overfit and yield suboptimal solutions. In this paper, we propose \emph{NeuCrowd}, a unified framework for SRL from crowdsourced labels. The proposed framework (1) creates a sufficient number of high-quality \emph{n}-tuplet training samples by utilizing safety-aware sampling and robust anchor generation; and (2) automatically learns a neural sampling network that adaptively learns to select effective samples for SRL networks. The proposed framework is evaluated on one synthetic and three real-world data sets. The results show that our approach outperforms a wide range of state-of-the-art baselines in terms of prediction accuracy and AUC. To encourage reproducible results, we make our code publicly available at \url{https://github.com/tal-ai/NeuCrowd_KAIS2021}. ",NeuCrowd: Neural Sampling Network for Representation Learning with Crowdsourced Labels " We present robust, model-marginalized limits on both the total neutrino mass ($\sum m_\nu$) and abundance ($N_{\rm eff}$) to minimize the role of parameterizations, priors and models when extracting neutrino properties from cosmology. The cosmological observations we consider are CMB temperature fluctuation and polarization measurements, Supernovae Ia luminosity distances, BAO observations and determinations of the growth rate parameter from the Data Release 16 of the Sloan Digital Sky Survey IV.
The degenerate neutrino mass spectrum (which implies $\sum m_\nu>0$) is weakly (moderately) preferred over the normal and inverted hierarchy possibilities, which imply the priors $\sum m_\nu>0.06$ eV and $\sum m_\nu>0.1$ eV, respectively. Concerning the underlying cosmological model, the $\Lambda$CDM minimal scenario is almost always strongly preferred over the possible extensions explored here. The most constraining $95\%$ CL bound on the total neutrino mass in the $\Lambda$CDM+$\sum m_\nu$ picture is $\sum m_\nu< 0.087$ eV. The parameter $N_{\rm eff}$ is restricted to $3.08\pm 0.17$ ($68\%$ CL) in the $\Lambda$CDM+$N_{\rm eff}$ model. These limits barely change when considering the $\Lambda$CDM+$\sum m_\nu$+$N_{\rm eff}$ scenario. Given the robustness and the strong constraining power of the cosmological measurements employed here, the model-marginalized posteriors obtained considering a large spectrum of non-minimal cosmologies are very close to the previous bounds, obtained within the $\Lambda$CDM framework in the degenerate neutrino mass spectrum. Future cosmological measurements may improve the current Bayesian evidence favouring the degenerate neutrino mass spectra, therefore challenging the consistency between cosmological neutrino mass bounds and oscillation neutrino measurements, and potentially suggesting a more complicated cosmological model and/or neutrino sector. ",Model marginalized constraints on neutrino properties from cosmology " Tau leptons play an important role in the physics program at the LHC. They are used in searches for new phenomena like the Higgs boson or Supersymmetry and in electroweak measurements. Identifying hadronically decaying tau leptons with good performance is an essential part of these analyses. We present the current status of the tau reconstruction and identification at the LHC with the ATLAS detector.
The tau identification efficiencies and their systematic uncertainties are measured using W to tau nu and Z to tau tau events, and compared with the predictions from Monte Carlo simulations. ",Tau Lepton Reconstruction and Identification at ATLAS " This paper describes the background of smart information infrastructure and the need for smart grid information security. It introduces a conceptual analysis methodology based on the hermeneutic circle and the identification of information security functional requirements. Information security for the grid market covers the automation and communications industry that affects the operation of electric power systems and the functioning of the utilities that manage them, and awareness of this information infrastructure has become critical to the reliability of the power system. The community benefits from the cost savings, flexibility and ease of deployment that come with wireless communications. However, concern revolves around the security protections for easily accessible devices such as the smart meter and the related communications hardware. Moreover, the shift from traditional to smart grid networking, together with the growing importance of information security in the communication field, underlines the criticality of identifying grid information security functional requirements. The goal of this paper is to identify these functional requirements and relate their significance to the consumer requirements for information security in a smart grid. Vulnerabilities may make it possible for an attacker to penetrate a network, gain access to control software, and alter load conditions in ways that destabilize the grid unpredictably. Focusing on the grid information security functional requirements is a step toward developing consumer trust in, and satisfaction with, the smart grid.
",Grid Information Security Functional Requirement - Fulfilling Information Security of a Smart Grid System " Nonlinear evolution causes the galaxy power spectrum to become broadly correlated over different wavenumbers. It is shown that prewhitening the power spectrum - transforming the power spectrum in such a way that the noise covariance becomes proportional to the unit matrix - greatly narrows the covariance of power. The eigenfunctions of the covariance of the prewhitened nonlinear power spectrum provide a set of almost uncorrelated nonlinear modes somewhat analogous to the Fourier modes of the power spectrum itself in the linear, Gaussian regime. These almost uncorrelated modes make it possible to construct a near minimum variance estimator and Fisher matrix of the prewhitened nonlinear power spectrum analogous to the Feldman-Kaiser-Peacock estimator of the linear power spectrum. The paper concludes with summary recipes, in gourmet, fine, and fastfood versions, of how to measure the prewhitened nonlinear power spectrum from a galaxy survey in the FKP approximation. An Appendix presents FFTLog, a code for taking the fast Fourier or Hankel transform of a periodic sequence of logarithmically spaced points, which proves useful in some of the manipulations. ",Uncorrelated Modes of the Nonlinear Power Spectrum " The size-dependent structures and optical properties of CdSeS nanoclusters in water medium are investigated. The stability of different size-dependent Cd$_n$Se$_m$S$_p$ nanoclusters (up to n=6) is studied using density functional theory/time-dependent density functional theory (DFT/TDDFT). The computed results for ground (S$_0$) and excited (S$_1$, S$_2$, S$_3$) states are experimentally verified through UV-Vis spectroscopy. Computed ab initio results suggest that CdSeS clusters are significantly more hyperpolarizable compared to CdX (X= S, Se, Te) clusters. Structure dependent response properties are also observed, especially for n>3. 
Larger hyperpolarizabilities ($\beta$ and $\gamma$), charge variation and orbital analysis establish Cd$_4$Se$_m$S$_p$ clusters as nonlinear optically active quantum dots (QDs). ",Studies on Size Dependent Structures and Optical Properties of CdSeS Clusters " Extracting multiple relations from text sentences is still a challenge for current Open Relation Extraction (Open RE) tasks. In this paper, we develop several Open RE models based on the bidirectional LSTM-CRF (BiLSTM-CRF) neural network and different contextualized word embedding methods. We also propose a new tagging scheme to solve overlapping problems and enhance models' performance. From the evaluation results and comparisons between models, we select the best combination of tagging scheme, word embedder, and BiLSTM-CRF network to achieve an Open RE model with a remarkable extracting ability on multiple-relation sentences. ",Explore BiLSTM-CRF-Based Models for Open Relation Extraction " Stochastic inflation describes the global structure of the inflationary universe by modeling the super-Hubble dynamics as a system of matter fields coupled to gravity where the sub-Hubble field fluctuations induce a stochastic force into the equations of motion. The super-Hubble dynamics are ultralocal, allowing us to neglect spatial derivatives and treat each Hubble patch as a separate universe. This provides a natural framework in which to discuss probabilities on the space of solutions and initial conditions. In this article we derive an evolution equation for this probability for an arbitrary class of matter systems, including DBI and k-inflationary models, and discover equilibrium solutions that satisfy detailed balance. Our results are more general than those derived assuming slow roll or a quasi-de Sitter geometry, and so are directly applicable to models that do not satisfy the usual slow roll conditions.
We discuss in general terms the conditions for eternal inflation to set in, and we give explicit numerical solutions of highly stochastic, quasi-stationary trajectories in the relativistic DBI regime. Finally, we show that the probability for stochastic/thermal tunneling can be significantly enhanced relative to the Hawking-Moss instanton result due to relativistic DBI effects. ",Stochastic Inflation Revisited: Non-Slow Roll Statistics and DBI Inflation " After a brief discussion of the computational complexity of Clifford algebras, we present a new basis for even Clifford algebra Cl(2m) that simplifies greatly the actual calculations and, without resorting to the conventional matrix isomorphism formulation, obtains the same complexity. In the last part we apply these results to the Clifford algebra formulation of the NP-complete problem of the maximum clique of a graph introduced in a previous paper. ",On Computational Complexity of Clifford Algebra " Multicomponent lipid mixtures exhibit complex phase behavior, including coexistence of nanoscopic fluid phases in ternary mixtures mimicking the composition of the outer leaflet of mammalian plasma membrane. The physical mechanisms responsible for the small size of phase domains are unknown, due in part to the difficulty of determining the size and lifetime distributions of small, fleeting domains. Steady-state FRET provides information about the spatial distribution of lipid fluorophores in a membrane, and with an appropriate model can be used to determine the size of phase domains. Starting from a radial distribution function for a binary hard disk fluid, we develop a domain size-dependent model for stimulated acceptor emission. We compare the results of the model to two similar, recently published models. 
",Finite Phase-separation FRET I: A quantitative model valid for bilayer nanodomains " Traditional 2D animation is labor-intensive, often requiring animators to manually draw twelve illustrations per second of movement. While automatic frame interpolation may ease this burden, 2D animation poses additional difficulties compared to photorealistic video. In this work, we address challenges unexplored in previous animation interpolation systems, with a focus on improving perceptual quality. Firstly, we propose SoftsplatLite (SSL), a forward-warping interpolation architecture with fewer trainable parameters and better perceptual performance. Secondly, we design a Distance Transform Module (DTM) that leverages line proximity cues to correct aberrations in difficult solid-color regions. Thirdly, we define a Restricted Relative Linear Discrepancy metric (RRLD) to automate the previously manual training data collection process. Lastly, we explore evaluation of 2D animation generation through a user study, and establish that the LPIPS perceptual metric and chamfer line distance (CD) are more appropriate measures of quality than PSNR and SSIM used in prior art. ",Improving the Perceptual Quality of 2D Animation Interpolation " In this note, we show that the operator theoretic concept of Kolmogorov numbers and the number of degrees of freedom at level $\epsilon$ of a communication channel are closely related. Linear communication channels may be modeled using linear compact operators on Banach or Hilbert spaces and the number of degrees of freedom of such channels is defined to be the number of linearly independent signals that may be communicated over this channel, where the channel is restricted by a threshold noise level. Kolmogorov numbers are a particular example of $s$-numbers, which are defined over the class of bounded operators between Banach spaces. 
We demonstrate that these two concepts are closely related, namely that the Kolmogorov numbers correspond to the ""jump points"" in the function relating numbers of degrees of freedom with the noise level $\epsilon$. We also establish a useful numerical computation result for evaluating Kolmogorov numbers of compact operators. ",Degrees of Freedom of a Communication Channel and Kolmogorov numbers " The fusion of independently obtained stochastic maps by collaborating mobile agents is considered. The proposed approach includes two parts: matching of stochastic maps and maximum likelihood alignment. In particular, an affine invariant hypergraph is constructed for each stochastic map, and a bipartite matching via a linear program is used to establish landmark correspondence between stochastic maps. A maximum likelihood alignment procedure is proposed to determine rotation and translation between common landmarks in order to construct a global map within a common frame of reference. A main feature of the proposed approach is its scalability with respect to the number of landmarks: the matching step has polynomial complexity and the maximum likelihood alignment is obtained in closed form. Experimental validation of the proposed fusion approach is performed using the Victoria Park benchmark dataset. ",Maximum Likelihood Fusion of Stochastic Maps " Social, supervised, learning from others might amplify individual, possibly unsupervised, learning by individuals, and might underlie the development and evolution of culture. We studied a minimal model of the interaction of individual unsupervised and social supervised learning by interacting agents. Agents attempted to learn to track a hidden fluctuation ""source"", which, linearly mixed with other masking fluctuations, generated observable input vectors. Learning was driven either solely by direct observation of inputs (unsupervised, Hebbian) or, in addition, by observation of another agent's output (supervised, Delta rule). 
To enhance biological realism, the learning rules were made slightly connection-inspecific, so that incorrect learning sometimes occurs. We found that social interaction can foster both correct and incorrect learning. Useful social learning therefore presumably involves additional factors, some of which we outline. ",A Minimal Model of the Interaction of Social and Individual learning " Long observational series for bipolar active regions (ARs) provide significant information about the mutual transformation of the poloidal and toroidal components of the global solar magnetic field. The direction of the toroidal field determines the polarity of leading sunspots in ARs in accordance with Hale's polarity law. The vast majority of bipolar ARs obey this regularity, whereas a few percent of ARs have the opposite sense of polarity (anti-Hale ARs). However, the study of these ARs is hampered by their poor statistics. The data for five 11-year cycles (16-18 and 23,24) were combined here to compile a synthetic cycle of unique time length and latitudinal width. The synthetic cycle comprises data for 14838 ARs, 367 of which are anti-Hale ARs. A specific routine to compile the synthetic cycle is demonstrated. We found that, in general, anti-Hale ARs follow the solar cycle and are spread throughout the time-latitude diagram evenly, which implies their fundamental connection with the global dynamo mechanism and the toroidal flux system. The increase in their number and percentage occurs in the second part of the cycle, which is in favour of their contribution to the polar field reversal. The excess in the anti-Hale ARs percentage at the edges of the butterfly diagram and near an oncoming solar minimum (where the toroidal field weakens) might be associated with strengthening of the influence of turbulent convection and magnetic field fluctuations on the arising flux tubes. Evidence of a misalignment between the magnetic and heliographic equators is also found.
",Synthetic solar cycle for active regions violating the Hale's polarity law " We use 1+1/2 dimensional particle-in-cell plasma simulations to study the interaction of a relativistic, strongly magnetized wind with an ambient medium. Such an interaction is a plausible mechanism which leads to the generation of cosmological gamma-ray bursts. We confirm the idea of Meszaros and Rees (1992) that an essential part (about 20%) of the energy that is lost by the wind in the process of its deceleration may be transferred to high-energy electrons and then to high-frequency (X-ray and gamma-ray) emission. We show that in the wind frame the spectrum of electrons which are accelerated at the wind front and move ahead of the front is nearly a two-dimensional relativistic Maxwellian with a relativistic temperature $T=6\times10^9\Gamma_T$ K, where $\Gamma_T=200\Gamma_0$ with an accuracy of ~20%, and $\Gamma_0$ is the Lorentz factor of the wind, $\Gamma_0>100$ for winds outflowing from cosmological gamma-ray bursters. Our simulations point to the existence of a high-energy tail of accelerated electrons with a Lorentz factor of more than $700\Gamma_0$. Large-amplitude electromagnetic waves are generated by the oscillating currents at the wind front. The mean field of these waves ahead of the wind front is an order of magnitude less than the magnetic field of the wind. High-energy electrons which are accelerated at the wind front and injected into the region ahead of the front generate synchro-Compton radiation in the fields of large-amplitude electromagnetic waves. This radiation closely resembles synchrotron radiation and can reproduce the non-thermal radiation of gamma-ray bursts observed in the Ginga and BATSE ranges (from a few keV to a few MeV). 
",Non-thermal Radiation of Cosmological gamma-ray Bursters " The vulnerability of deep neural networks (DNNs) to adversarial attack, which is an attack that can mislead state-of-the-art classifiers into making an incorrect classification with high confidence by deliberately perturbing the original inputs, raises concerns about the robustness of DNNs to such attacks. Adversarial training, which is the main heuristic method for improving adversarial robustness and the first line of defense against adversarial attacks, requires many sample-by-sample calculations to increase training size and is usually insufficiently strong for an entire network. This paper provides a new perspective on the issue of adversarial robustness, one that shifts the focus from the network as a whole to the critical part of the region close to the decision boundary corresponding to a given class. From this perspective, we propose a method to generate a single but image-agnostic adversarial perturbation that carries the semantic information implying the directions to the fragile parts on the decision boundary and causes inputs to be misclassified as a specified target. We call the adversarial training based on such perturbations ""region adversarial training"" (RAT), which resembles classical adversarial training but is distinguished in that it reinforces the semantic information missing in the relevant regions. Experimental results on the MNIST and CIFAR-10 datasets show that this approach greatly improves adversarial robustness even using a very small dataset from the training data; moreover, it can defend against FGSM adversarial attacks that have a completely different pattern from those seen by the model during retraining. ",Improving adversarial robustness of deep neural networks by using semantic information " Human mobility drives major societal phenomena including epidemics, economies, and innovation. 
Historically, mobility was constrained by geographic distance; however, in the globalizing world, language, culture, and history are increasingly important. We propose using the neural embedding model word2vec for studying mobility and capturing its complexity. Word2vec is shown to be mathematically equivalent to the gravity model of mobility, and using three human trajectory datasets, we demonstrate that it encodes nuanced relationships between locations into a vector space, providing a measure of effective distance that outperforms baselines. Focusing on the case of scientific mobility, we show that embeddings uncover cultural, linguistic, and hierarchical relationships at multiple levels of granularity. Connecting neural embeddings to the gravity model opens up new avenues for the study of mobility. ",Unsupervised embedding of trajectories captures the latent structure of mobility " The $\alpha$ inelastic scattering on $^{16}$O is investigated with the coupled-channel calculation using the $\alpha$-nucleus coupled-channel potentials, which are microscopically derived by folding the Melbourne $g$-matrix $NN$ interaction with the $^{16}$O and $\alpha$ densities. The matter and transition densities of $^{16}$O are calculated by a microscopic structure model of the variation after the spin-parity projections combined with the generator coordinate method of $^{12}$C+$\alpha$ in the framework of the antisymmetrized molecular dynamics. The calculation reproduces the observed elastic and inelastic cross sections at incident energies of $E_\alpha$=104 MeV, 130 MeV, 146 MeV, and 386 MeV. The coupled-channel effect on the cross sections is also discussed. ",First microscopic coupled-channel calculation of $\alpha$ inelastic cross sections on $^{16}$O " Localization of light is the photon analog of electron localization in disordered lattices for whose discovery Anderson received the Nobel prize in 1977. 
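The gravity model that word2vec is shown to be equivalent to in the mobility abstract above has a compact closed form; the following sketch states it directly (parameter names `k` and `gamma` are illustrative fitted constants, not values from the paper):

```python
def gravity_flux(m_i, m_j, d_ij, k=1.0, gamma=2.0):
    """Classic gravity model of mobility: flux ~ k * m_i * m_j / d_ij**gamma.

    m_i, m_j: 'masses' of the two locations (e.g. populations or numbers
    of researchers); d_ij: distance between them; k, gamma: constants
    normally fitted to data (values here are placeholders).
    """
    return k * m_i * m_j / d_ij ** gamma
```

In the embedding view described in the abstract, the role of d_ij is played by an effective distance in the learned vector space rather than geographic distance.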
The question of its existence in open three-dimensional materials has eluded an experimental and full theoretical verification for decades. Here we study numerically electromagnetic vector wave transmittance through realistic digital representations of hyperuniform dielectric networks, a new class of highly correlated but disordered photonic band gap materials. We identify the evanescent decay of the transmitted power in the gap and diffusive transport far from the gap. Near the gap, we find that transport starts off diffusive but, with increasing slab thickness, crosses over gradually to a faster decay, signaling localization. We show that we can describe the transition to localization at the mobility edge using the self-consistent theory of localization based on the concept of a position-dependent diffusion coefficient. ",Transition from Light Diffusion to Localization in Three-Dimensional Amorphous Dielectric Networks near the Band Edge " Whether the near-infrared (NIR) extinction law is universal has been a long-debated topic. Based on the APOGEE H-band spectroscopic survey as a key project of SDSS-III, the intrinsic colors of a large number of giant stars are accurately determined from the stellar effective temperature. Taking this advantage and using a sample of 5942 K-type giants, the NIR extinction law is carefully re-visited. The color excess ratio E(J-H)/E(J-Ks), representative of the NIR extinction law, shows no dependence on the color excess when E(J-Ks) changes from ~0.3 to ~4.0, which implies a universal NIR extinction law from diffuse to dense regions. The constant value of E(J-H)/E(J-Ks), 0.64, corresponds to a power law index of 1.95. The other two ratios, E(H-Ks)/E(J-Ks) and E(J-H)/E(H-Ks), are 0.36 and 1.78, respectively. The results are consistent with the MRN dust size distribution. 
",Universality of the Near-Infrared Extinction Law Based on the APOGEE Survey " The Australian Government uses the means-test as a way of managing the pension budget. Changes in Age Pension policy impose difficulties in retirement modelling due to policy risk, but any major changes tend to be `grandfathered', meaning that current retirees are exempt from the new changes. In 2015, two important changes were made with regard to allocated pension accounts -- the income means-test is now based on deemed income rather than account withdrawals, and the income-test deduction no longer applies. We examine the implications of the new changes with regard to optimal decisions for consumption, investment, and housing. We account for the minimum withdrawal rules that regulations impose on allocated pension accounts, as well as the 2017 asset-test rebalancing. The new policy changes are modelled in a utility maximizing lifecycle model and solved as an optimal stochastic control problem. We find that the new rules decrease the benefits from planning the consumption in relation to the means-test, while the housing allocation increases slightly in order to receive additional Age Pension. The difference in optimal drawdown between the old and new policy is only noticeable early in retirement until regulatory minimum withdrawal rates are enforced. However, the amount of extra Age Pension received for many households is now significantly different due to the new deeming income rules, which benefit slightly wealthier households who previously would receive no Age Pension due to the income-test and minimum withdrawals. 
",The 2015-2017 policy changes to the means-tests of Australian Age Pension: implication to decisions in retirement " As is well known, the permanent of a square (0,1)-matrix $A$ of order $n$ enumerates the permutations $\beta$ of $1,2,...,n$ with the incidence matrices $B\leq A.$ To obtain enumerative information on even and odd permutations with condition $B\leq A,$ we should calculate the two-fold vector $(a_1,a_2)$ with $a_1+a_2 = per A.$ More generally, we calculate the introduced $\omega$-permanent, where $\omega=e^{2\pi i/m},$ as an $m$-fold vector. For these and other matrix functions we generalize the Laplace theorem of their expansion over elements of the first row, using the defined so-called ""combinatorial minors"". In particular, in this way, we calculate the cycle index of permutations with condition $B\leq A.$ ",Combinatorial minors for matrix functions and their applications " Deep learning (DL) has become an integral part of solutions to various important problems, which is why ensuring the quality of DL systems is essential. One of the challenges of achieving reliability and robustness of DL software is to ensure that algorithm implementations are numerically stable. DL algorithms require a large amount and a wide variety of numerical computations. A naive implementation of numerical computation can lead to errors that may result in incorrect or inaccurate learning and results. A numerical algorithm or a mathematical formula can have several implementations that are mathematically equivalent, but have different numerical stability properties. Designing numerically stable algorithm implementations is challenging, because it requires an interdisciplinary knowledge of software engineering, DL, and numerical analysis. In this paper, we study two mature DL libraries, PyTorch and Tensorflow, with the goal of identifying unstable numerical methods and their solutions. 
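The two-fold vector $(a_1,a_2)$ from the combinatorial-minors abstract above can be checked by brute force for small matrices; this sketch is ours (the paper's method uses combinatorial minors rather than enumeration), counting even and odd permutations whose incidence matrix satisfies $B\leq A$:

```python
from itertools import permutations

def per_even_odd(A):
    """Brute-force the two-fold vector (a1, a2): counts of even and odd
    permutations whose incidence matrix B satisfies B <= A entrywise,
    so that a1 + a2 = per A for a square (0,1)-matrix A."""
    n = len(A)
    even = odd = 0
    for p in permutations(range(n)):
        if all(A[i][p[i]] for i in range(n)):
            # Parity of the permutation via its inversion count.
            inv = sum(1 for i in range(n) for j in range(i + 1, n) if p[i] > p[j])
            if inv % 2 == 0:
                even += 1
            else:
                odd += 1
    return even, odd
```

For the all-ones matrix of order 3 this returns (3, 3), matching per A = 3! = 6 split evenly between the two parities.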
Specifically, we investigate which DL algorithms are numerically unstable and conduct an in-depth analysis of the root cause, manifestation, and patches to numerical instabilities. Based on these findings, we launch {\it DeepStability}, the first database of numerical stability issues and solutions in DL. Our findings and {\it DeepStability} provide future references to developers and tool builders to prevent, detect, localize and fix numerically unstable algorithm implementations. To demonstrate this, using {\it DeepStability} we have located numerical stability issues in Tensorflow, and submitted a fix which has been accepted and merged. ",DeepStability: A Study of Unstable Numerical Methods and Their Solutions in Deep Learning " BBP-type formulas are usually discovered experimentally, one at a time and in specific bases, through computer searches. In this paper, however, we derive directly, without doing any searches, explicit digit extraction BBP-type formulas in general binary bases $b=2^{12p}$, for $p$ positive odd integers. As particular examples, new binary formulas are presented for $\pi\sqrt 3$, $\pi\sqrt 3\log 2$, $\sqrt 3\;{\rm Cl}_2(\pi/3)$ and a couple of other polylogarithm constants. A variant of the formula for $\pi\sqrt 3\log 2$ derived in this paper has been known for over ten years but was hitherto unproved. Binary BBP-type formulas for the logarithms of an infinite set of primes and binary BBP-type representations for the arctangents of an infinite set of rational numbers are also presented. Finally, new binary BBP-type zero relations are established. ",A class of digit extraction BBP-type formulas in general binary bases " We study spin chains with boundaries that are dual to open strings suspended between systems of giant gravitons and dual giant gravitons. The anomalous dimensions computed in the gauge theory are in complete quantitative agreement with energies computed in the dual string theory. 
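As background to the BBP-type formulas discussed above, the classic base-16 digit-extraction formula for $\pi$ (Bailey-Borwein-Plouffe) illustrates the general shape of such series; the paper's binary-base formulas for $\pi\sqrt 3$ and related constants share this structure but have different terms, so the code below is only the textbook example, not one of the paper's results:

```python
def bbp_pi(terms=12):
    """Classic base-16 BBP series for pi:
    pi = sum_k 16^-k * (4/(8k+1) - 2/(8k+4) - 1/(8k+5) - 1/(8k+6)).
    Each term shrinks by a factor of 16, so a dozen terms already give
    near machine precision."""
    s = 0.0
    for k in range(terms):
        s += (4 / (8 * k + 1) - 2 / (8 * k + 4)
              - 1 / (8 * k + 5) - 1 / (8 * k + 6)) / 16 ** k
    return s
```

The digit-extraction property comes from the 16^-k factor: the series can be rearranged to compute an arbitrary hexadecimal (hence binary) digit of pi without computing the preceding ones.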
The comparison makes use of a description in terms of magnons, generalizing results for a single maximal giant graviton. The symmetries of the problem determine the structure of the magnon boundary reflection/scattering matrix up to a phase. We compute a reflection/scattering matrix element at weak coupling and verify that it is consistent with the answer determined by symmetry. We find the reflection/scattering matrix does not satisfy the boundary Yang-Baxter equation so that the boundary condition on the open spin chain spoils integrability. We also explain the interpretation of the double coset ansatz in the magnon language. ",Anomalous Dimensions of Heavy Operators from Magnon Energies " The SCHOK bound states that the number of marginal deformations of certain two-dimensional conformal field theories is bounded linearly from above by the number of relevant operators. In conformal field theories defined via sigma models into Calabi-Yau manifolds, relevant operators can be estimated, in the point-particle approximation, by the low-lying spectrum of the scalar Laplacian on the manifold. In the strict large volume limit, the standard asymptotic expansion of Weyl and Minakshisundaram-Pleijel diverges with the higher-order curvature invariants. We propose that it would be sufficient to find an a priori uniform bound on the trace of the heat kernel for large but finite volume. As a first step in this direction, we then study the heat trace asymptotics, as well as the actual spectrum of the scalar Laplacian, in the vicinity of a conifold singularity. The eigenfunctions can be written in terms of confluent Heun functions, the analysis of which gives evidence that regions of large curvature will not prevent the existence of a bound of this type. This is also in line with general mathematical expectations about spectral continuity for manifolds with conical singularities. 
A sharper version of our results could, in combination with the SCHOK bound, provide a basis for a global restriction on the dimension of the moduli space of Calabi-Yau manifolds. ",Bounding the Heat Trace of a Calabi-Yau Manifold " An efficient dynamic spectrum access mechanism is crucial for improving spectrum utilization. In this paper, we consider the dynamic spectrum access mechanism design with both complete and incomplete network information. When the network information is available, we propose an evolutionary spectrum access mechanism. We use the replicator dynamics to study the dynamics of channel selections, and show that the mechanism achieves an equilibrium that is an evolutionarily stable strategy and is also max-min fair. With incomplete network information, we propose a distributed reinforcement learning mechanism for dynamic spectrum access. Each secondary user applies the maximum likelihood estimation method to estimate its expected payoff based on the local observations, and learns to adjust its mixed strategy for channel selections adaptively over time. We study the convergence of the learning mechanism based on the theory of stochastic approximation, and show that it globally converges to an approximate Nash equilibrium. Numerical results show that the proposed evolutionary spectrum access and distributed reinforcement learning mechanisms achieve up to 82% and 70% performance improvement over a random access mechanism, respectively, and are robust to random perturbations of channel selections. ",Evolutionary Game and Learning for Dynamic Spectrum Access " The Maxwell-Klein-Gordon equations in 2+1 dimensions in temporal gauge are locally well-posed for low regularity data even below the energy level. The corresponding (3+1)-dimensional case was considered by Yuan. 
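The replicator dynamics used in the spectrum-access abstract above to model channel selection can be sketched with a single Euler step; this is the generic textbook form of the dynamics, not the paper's implementation, and the step size and payoffs below are illustrative:

```python
def replicator_step(x, payoffs, dt=0.01):
    """One Euler step of replicator dynamics: x_i' = x_i * (f_i - f_avg),
    where x is the population share choosing each channel (sums to 1),
    payoffs f_i are the per-channel expected payoffs, and f_avg is the
    population-average payoff."""
    avg = sum(xi * fi for xi, fi in zip(x, payoffs))
    return [xi + dt * xi * (fi - avg) for xi, fi in zip(x, payoffs)]
```

Shares of channels paying above the average grow while the others shrink, and the update preserves the total share, which is why fixed points of the iteration correspond to the evolutionarily stable strategies discussed in the abstract.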
Fundamental for the proof is a partial null structure in the nonlinearity which allows us to rely on bilinear estimates in wave-Sobolev spaces by d'Ancona, Foschi and Selberg, on an $(L^p_x L^q_t)$-estimate for the solution of the wave equation, and on the proof of a related result for the Yang-Mills equations by Tao. ",Low regularity solutions for the (2+1) - dimensional Maxwell-Klein-Gordon equations in temporal gauge " We study the dynamics of the non-classical correlations for few-atom systems in the presence of strong interactions for a number of recently developed adiabatic state preparation protocols. We show that entanglement can be created in a controlled fashion and can be attributed to two distinct sources, the atom-atom interaction and the distribution of atoms among different traps. ",Entanglement in spatial adiabatic processes for interacting atoms " Learning enabled autonomous systems provide increased capabilities compared to traditional systems. However, the complexity and probabilistic nature of the underlying methods enabling such capabilities present challenges for current systems engineering processes for assurance, and test, evaluation, verification, and validation (TEVV). This paper provides a preliminary attempt to map recently developed technical approaches in the assurance and TEVV of learning enabled autonomous systems (LEAS) literature to a traditional systems engineering v-model. This mapping categorizes such techniques into three main approaches: development, acquisition, and sustainment. We review the latest techniques to develop safe, reliable, and resilient learning enabled autonomous systems, without recommending radical and impractical changes to existing systems engineering processes. By performing this mapping, we seek to assist acquisition professionals by (i) informing comprehensive test and evaluation planning, and (ii) objectively communicating risk to leaders. 
",A Mapping of Assurance Techniques for Learning Enabled Autonomous Systems to the Systems Engineering Lifecycle " The size-dependent electrical resistivity of single-layer graphene ribbons has been studied experimentally for ribbon widths from 16 nm to 320 nm. The experimental findings are that the resistivity follows a more dramatic trend than that seen for metallic nanowires of similar dimensions, due to a combination of surface scattering from the edges, band-gap related effects and shifts in the Fermi level due to edge effects. We show that the Charge Neutrality point switches polarity below a ribbon width of around 50 nm, and that at this point, the thermal coefficient of resistance is a maximum. The majority doping type therefore can be controlled by altering ribbon width. We also demonstrate that an alumina passivation layer has a significant effect on the mean free path of the charge carriers within the graphene, which can be probed directly via measurements of the width-dependent resistivity. We propose a model for conduction that takes edge and confinement effects into account. ",Unravelling the electrical properties of epitaxial Graphene nanoribbons " We analyze the online response to the preprint publication of a cohort of 4,606 scientific articles submitted to the preprint database arXiv.org between October 2010 and May 2011. We study three forms of responses to these preprints: downloads on the arXiv.org site, mentions on the social media site Twitter, and early citations in the scholarly record. We perform two analyses. First, we analyze the delay and time span of article downloads and Twitter mentions following submission, to understand the temporal configuration of these reactions and whether one precedes or follows the other. Second, we run regression and correlation tests to investigate the relationship between Twitter mentions, arXiv downloads and article citations. 
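The correlation tests described in the altmetrics abstract above can be illustrated with a plain Pearson coefficient; this sketch is ours (not the authors' analysis code) and would apply to paired per-article counts such as Twitter mentions and arXiv downloads:

```python
import math

def pearson(xs, ys):
    """Sample Pearson correlation coefficient between two equal-length
    series, e.g. per-article Twitter mentions vs. arXiv downloads.
    Returns a value in [-1, 1]; assumes neither series is constant."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)
```

In practice, count data of this kind are heavy-tailed, so such analyses often use rank-based or log-transformed variants of this raw coefficient.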
We find that Twitter mentions and arXiv downloads of scholarly articles follow two distinct temporal patterns of activity, with Twitter mentions having shorter delays and narrower time spans than arXiv downloads. We also find that the volume of Twitter mentions is statistically correlated with arXiv downloads and early citations just months after the publication of a preprint, with a possible bias that favors highly mentioned articles. ","How the Scientific Community Reacts to Newly Submitted Preprints: Article Downloads, Twitter Mentions, and Citations" Shear $\eta$ and bulk $\zeta$ viscosities are calculated in a quasiparticle model within a relaxation time approximation for pure gluon matter. Below $T_c$ the confined sector is described within a quasiparticle glueball model. Particular attention is paid to behavior of the shear and bulk viscosities near $T_c$. The constructed equation of state reproduces the first-order phase transition for the glue matter. It is shown that with this equation of state it is possible to describe the temperature dependence of the shear viscosity to entropy ratio $\eta/s$ and the bulk viscosity to entropy ratio $\zeta/s$ in reasonable agreement with available lattice data but absolute values of the $\zeta/s$ ratio underestimate the upper limits of this ratio in the lattice measurements typically by an order of magnitude. ,Shear and bulk viscosities for pure glue matter " We define kappa classes on moduli spaces of KSBA stable varieties and pairs, generalizing the Miller-Morita-Mumford classes on moduli of curves, and compute them in some cases where the virtual fundamental class is known to exist, including Burniat and Campedelli surfaces. For Campedelli surfaces, an intermediate step is finding the Chow (same as cohomology) ring of the GIT quotient $(\mathbb P^2)^7//SL(3)$. 
",Kappa classes on KSBA spaces It is proposed to identify a strong electric field created during relativistic collisions of asymmetric nuclei via observation of pseudorapidity and transverse momentum distributions of hadrons with the same mass but opposite charges. The detailed calculation results for the directed flow within the Parton-Hadron String Dynamics model are given for Cu-Au interactions at the NICA collision energies of $\sqrt{s_{NN}}=9$ and $5$ GeV. The separation effect is observable at 9 GeV as clearly as at 200 GeV. ,Evidence for creation of strong electromagnetic fields in relativistic heavy-ion collisions " There are many approaches in use today to either prevent or minimize the impact of inter-query interactions on a shared cluster. Despite these measures, performance issues due to concurrent executions of mixed workloads still prevail, causing undue waiting times for queries. Analyzing these resource interferences is thus critical in order to answer time-sensitive questions like 'who is causing my query to slow down' in a multi-tenant environment. More importantly, diagnosing whether the slowdown of a query is a result of resource contentions caused by other queries or some other external factor can help an admin narrow down the many possibilities of performance degradation. This process of investigating the symptoms of resource contentions and attributing blame to concurrent queries is non-trivial and tedious, and involves hours of manual debugging through a cycle of query interactions. In this paper, we present ProtoXplore - a Proto or first system to eXplore contentions, that helps administrators determine whether the blame for resource bottlenecks can be attributed to concurrent queries, and uses a methodology called Resource Acquire Time Penalty (RATP) to quantify this blame towards contentious sources accurately. 
Further, ProtoXplore builds on the theory of explanations and enables a step-wise deep exploration of various levels of performance bottlenecks faced by a query during its execution using a multi-level directed acyclic graph called ProtoGraph. Our experimental evaluation uses ProtoXplore to analyze the interactions between TPC-DS queries on Apache Spark to show how ProtoXplore provides explanations that help in diagnosing contention-related issues and better managing a changing mixed workload in a shared cluster. ",Analyzing Query Performance and Attributing Blame for Contentions in a Cluster Computing Framework " Considering today's lifestyle, people just sleep, forgetting the benefits sleep provides to the human body. The reasons for not having a productive sleep could be many. Smart-Yoga Pillow (SaYoPillow) is envisioned as a device that may help in recognizing the importance of good quality sleep to alleviate stress while establishing a measurable relationship between stress and sleeping habits. A system that analyzes the sleeping habits by continuously monitoring the physiological changes that occur during rapid eye movement (REM) and non-rapid eye movement (NREM) stages of sleep is proposed in the current work. In addition to the physiological parameter changes, factors such as sleep duration, snoring range, eye movement, and limb movements are also monitored. The SaYoPillow system is processed at the edge level with the storage being at the cloud. Without compromising the user's privacy, SaYoPillow proposes secure data transmission for both uploading and retrieving, and secure storage and communications as an attempt to reduce malicious attacks on healthcare. A user interface is provided for the user to control data accessibility and visibility. SaYoPillow has 96% accuracy, which is close to that of other existing research works. However, SaYoPillow is the only work with security features as well as the only work that considers sleeping habits for stress. 
",SaYoPillow: A Blockchain-Enabled, Privacy-Assured Framework for Stress Detection, Prediction and Control Considering Sleeping Habits in the IoMT" " Recent years have witnessed wide application of hashing for large-scale image retrieval. However, most existing hashing methods are based on hand-crafted features which might not be optimally compatible with the hashing procedure. Recently, deep hashing methods have been proposed to perform simultaneous feature learning and hash-code learning with deep neural networks, which have shown better performance than traditional hashing methods with hand-crafted features. Most of these deep hashing methods are supervised, whose supervised information is given with triplet labels. For another common application scenario with pairwise labels, no methods have existed for simultaneous feature learning and hash-code learning. In this paper, we propose a novel deep hashing method, called deep pairwise-supervised hashing (DPSH), to perform simultaneous feature learning and hash-code learning for applications with pairwise labels. Experiments on real datasets show that our DPSH method can outperform other methods to achieve the state-of-the-art performance in image retrieval applications. ",Feature Learning based Deep Supervised Hashing with Pairwise Labels " Visual data and text data are composed of information at multiple granularities. A video can describe a complex scene that is composed of multiple clips or shots, where each depicts a semantically coherent event or action. Similarly, a paragraph may contain sentences with different topics, which collectively convey a coherent message or story. In this paper, we investigate the modeling techniques for such hierarchical sequential data where there are correspondences across multiple modalities. 
Specifically, we introduce hierarchical sequence embedding (HSE), a generic model for embedding sequential data of different modalities into hierarchically semantic spaces, with either explicit or implicit correspondence information. We perform empirical studies on large-scale video and paragraph retrieval datasets and demonstrate superior performance of the proposed methods. Furthermore, we examine the effectiveness of our learned embeddings when applied to downstream tasks. We show their utility in zero-shot action recognition and video captioning. ",Cross-Modal and Hierarchical Modeling of Video and Text " The neutron-shell structure of $^{25}$F was studied using the quasi-free (p,2p) knockout reaction at 270A MeV in inverse kinematics. The sum of spectroscopic factors of the $\pi$0d$_{5/2}$ orbital is found to be $1.0 \pm 0.3$. However, the spectroscopic factor for the ground-state to ground-state transition ($^{25}$F, $^{24}$O$_{g.s.}$) is only $0.36\pm 0.13$, and $^{24}$O excited states are produced from the 0d$_{5/2}$ proton knockout. The result shows that the $^{24}$O core of the $^{25}$F nucleus significantly differs from a free $^{24}$O nucleus, and the core consists of 35% $^{24}$O$_{g.s.}$ and 65% excited $^{24}$O. ",How different is the core of $^{25}$F from $^{24}$O$_{g.s.}$? Intersections constitute one of the most dangerous elements in road systems. Traffic signals remain the most common way to control traffic at high-volume intersections and offer many opportunities to apply intelligent transportation systems to make traffic more efficient and safe. This paper describes an automated method to estimate the temporal exposure of road users crossing the conflict zone to lateral collision with road users originating from a different approach. This component is part of a larger system relying on video sensors to provide queue lengths and spatial occupancy that are used for real-time traffic control and monitoring. 
The method is evaluated on data collected during a real-world experiment. ",Automatic Estimation of the Exposure to Lateral Collision in Signalized Intersections using Video Sensors " We introduce the MAsked Generative VIdeo Transformer, MAGVIT, to tackle various video synthesis tasks with a single model. We introduce a 3D tokenizer to quantize a video into spatial-temporal visual tokens and propose an embedding method for masked video token modeling to facilitate multi-task learning. We conduct extensive experiments to demonstrate the quality, efficiency, and flexibility of MAGVIT. Our experiments show that (i) MAGVIT performs favorably against state-of-the-art approaches and establishes the best-published FVD on three video generation benchmarks, including the challenging Kinetics-600. (ii) MAGVIT outperforms existing methods in inference time by two orders of magnitude against diffusion models and by 60x against autoregressive models. (iii) A single MAGVIT model supports ten diverse generation tasks and generalizes across videos from different visual domains. The source code and trained models will be released to the public at https://magvit.cs.cmu.edu. ",MAGVIT: Masked Generative Video Transformer " We present a first scaling test of twisted mass QCD with pure Wilson quarks for a twisting angle of pi/2. We have computed the vector meson mass and the pseudoscalar decay constant for different values of beta at a fixed value of r_0 m_PS. The results obtained in the quenched approximation are compared with data for pure Wilson and non-perturbatively O(a) improved Wilson computations. We show that our results from Wilson twisted mass QCD show clearly reduced lattice spacing errors, consistent with O(a) improvement and without the need to add any improvement terms. These results thus provide numerical evidence of the prediction in ref. [1]. 
",Scaling test for Wilson twisted mass QCD " Machine-to-machine services are witnessing an unprecedented diffusion, which is expected to result in an ever-increasing data traffic load. In this context, satellite technology is playing a pivotal role, since it enables a widespread provisioning of machine-to-machine services. In particular, oil industry, maritime communications, as well as remote monitoring are sectors where the use of satellite communications is expected to dramatically explode within the next few years. In the light of this sudden increase of machine-to-machine data transported over satellite, a more thorough understanding of machine-to-machine service implementation over satellite is required, especially focusing on the interaction between transport and MAC layers of the protocol stack. Starting from these observations, this paper thoroughly analyses the interaction between TCP and the Contention Resolution Diversity Slotted Aloha access scheme defined in the DVB-RCS2 standard, assuming the use of an MQTT-like protocol to distribute machine-to-machine services. A novel TCP model is developed and validated through extensive simulation campaigns, which also shed important lights on the design choices enabling the efficient transport of machine-to-machine data via satellite. ",M2M Traffic via Random Access Satellite links: Interactions between Transport and MAC Layers Conformal totally symmetric arbitrary spin fermionic fields propagating in the flat space-time of even dimension greater than or equal to four are investigated. First-derivative metric-like formulation involving Fang-Fronsdal kinetic operator for such fields is developed. Gauge invariant Lagrangian and the corresponding gauge transformations are obtained. Gauge symmetries of the Lagrangian are realized by using auxiliary fields and the Stueckelberg fields. Realization of conformal algebra symmetries on the space of conformal gauge fermionic fields is obtained. 
The on-shell degrees of freedom of the fermionic arbitrary spin conformal fields are also studied. ,Conformal totally symmetric arbitrary spin fermionic fields " Motivated by a formula of A. Postnikov relating binary trees, we define the hook length polynomials for m-ary trees and plane forests, and show that these polynomials have a simple binomial expression. An integer value of this expression is C_{k,m}(n)=\frac{1}{mn+1}{(mn+1)k \choose n}, which we call the (k,m)-Catalan number. For proving the hook length formulas, we also introduce a combinatorial family, (k,m)-ary trees, which are counted by the (k,m)-Catalan numbers. ","(k,m)-Catalan Numbers and Hook Length Polynomials for Plane Trees" " We investigate the spatial and temporal variations of the high-degree mode frequencies calculated over localized regions of the Sun during the extended minimum phase between solar cycles 23 and 24. The frequency shifts measured relative to the spatial average over the solar disk indicate that the correlation between the frequency shift and magnetic field strength during the low-activity phase is weak. The disk-averaged frequency shifts computed relative to a minimal activity period also reveal a moderate correlation with different activity indices, with a maximum linear correlation of about 72%. From the investigation of the frequency shifts at different latitudinal bands, we do not find a consensus period for the onset of solar cycle 24. The frequency shifts corresponding to most of the latitudes in the northern hemisphere and 30 degree south of the equator indicate the minimum epoch to be February 2008, which is earlier than inferred from solar activity indices. ",Acoustic Mode Frequencies of the Sun during the Minimum Phase between Solar Cycles 23 and 24 " Spatiotemporal (ST) image data are increasingly common and often high-dimensional (high-D). 
Modeling ST data can be a challenge due to the plethora of independent and interacting processes which may or may not contribute to the measurements. Characterization can be considered the complement to modeling by helping guide assumptions about generative processes and their representation in the data. Dimensionality reduction (DR) is a frequently implemented type of characterization designed to mitigate the ""curse of dimensionality"" on high-D signals. For decades, Principal Component (PC) and Empirical Orthogonal Function (EOF) analysis has been used as a linear, invertible approach to DR and ST analysis. Recent years have seen the additional development of a suite of nonlinear DR algorithms, frequently categorized as ""manifold learning"". Here, we explore the idea of joint characterization of ST data manifolds using PCs/EOFs alongside two nonlinear DR approaches: Laplacian Eigenmaps (LE) and t-distributed stochastic neighbor embedding (t-SNE). Starting with a synthetic example and progressing to global, regional, and field scale ST datasets spanning roughly 5 orders of magnitude in space and 2 in time, we show these three DR approaches can yield complementary information about ST manifold topology. Compared to the relatively diffuse TFS produced by PCs/EOFs, the nonlinear approaches yield more compact manifolds with decreased ambiguity in temporal endmembers (LE) and/or in spatiotemporal clustering (t-SNE). These properties are compensated by the greater interpretability, significantly lower computational demand and diminished sensitivity to spatial aliasing for PCs/EOFs than LE or t-SNE. Taken together, we find joint characterization using the three complementary DR approaches capable of greater insight into generative ST processes than possible using any single approach alone. 
",Joint Characterization of Spatiotemporal Data Manifolds " Feature norm datasets of human conceptual knowledge, collected in surveys of human volunteers, yield highly interpretable models of word meaning and play an important role in neurolinguistic research on semantic cognition. However, these datasets are limited in size due to practical obstacles associated with exhaustively listing properties for a large number of words. In contrast, the development of distributional modelling techniques and the availability of vast text corpora have allowed researchers to construct effective vector space models of word meaning over large lexicons. However, this comes at the cost of interpretable, human-like information about word meaning. We propose a method for mapping human property knowledge onto a distributional semantic space, which adapts the word2vec architecture to the task of modelling concept features. Our approach gives a measure of concept and feature affinity in a single semantic space, which makes for easy and efficient ranking of candidate human-derived semantic properties for arbitrary words. We compare our model with a previous approach, and show that it performs better on several evaluation tasks. Finally, we discuss how our method could be used to develop efficient sampling techniques to extend existing feature norm datasets in a reliable way. ",Feature2Vec: Distributional semantic modelling of human property knowledge " In this paper we propose a new way of obtaining four dimensional gauge invariant $U(n)$ gauge field from a bulk action. The results are valid for both Randall-Sundrum scenarios and are obtained without the introduction of other fields or new degrees of freedom. The model is based only in non-minimal couplings with the gravity field. We show that two non-minimal couplings are necessary, one with the field strength and the other with a mass term. 
Despite the loss of five-dimensional gauge invariance caused by the mass term, a massless gauge field is obtained on the brane. To obtain this, we need a fine tuning of the two parameters introduced through the couplings. The fine tuning is obtained by imposing the boundary conditions and by requiring non-abelian gauge invariance in four dimensions. With this we are left with no free parameters and the model is completely determined. The model also provides analytical solutions to the linearized equations for the zero mode and for a general warp factor. ",Non-minimal couplings in Randall-Sundrum Scenarios " We report on atomic gas (HI) and molecular gas (as traced by CO(2-1)) redshifted absorption features toward the nuclear regions of the closest powerful radio galaxy, Centaurus A (NGC 5128). Our HI observations using the Very Long Baseline Array allow us to discern with unprecedented sub-parsec resolution HI absorption profiles toward different positions along the 21 cm continuum jet emission in the inner 0.""3 (or 5.4 pc). In addition, our CO(2-1) data obtained with the Submillimeter Array probe the bulk of the absorbing molecular gas with little contamination by emission, not possible with previous CO single-dish observations. With these data we shed light on the physical properties of the gas in the line of sight, emphasizing the still open debate about the nature of the gas that produces the broad absorption line (~55 km/s). First, the broad H I line is more prominent toward the central and brightest 21 cm continuum component than toward a region along the jet at a distance ~ 20 mas (or 0.4 pc) further from it. This suggests that the broad absorption line arises from gas located close to the nucleus, rather than from diffuse and more distant gas. 
Second, the different velocity components detected in the CO(2-1) absorption spectrum match well with other molecular lines, such as those of HCO+(1-0), except the broad absorption line that is detected in HCO+(1-0) (and most likely related to that of the H I). Dissociation of molecular hydrogen due to the AGN seems to be efficient at distances <= 10 pc, which might contribute to the depth of the broad H I and molecular lines. ",Disentangling the circumnuclear environs of Centaurus A: II. On the nature of the broad absorption line " In this work we investigate intra-day patterns of activity on a population of 7,261 users of mobile health wearable devices and apps. We show that: (1) using intra-day step and sleep data recorded from passive trackers significantly improves classification performance on self-reported chronic conditions related to mental health and nervous system disorders, (2) Convolutional Neural Networks achieve top classification performance vs. baseline models when trained directly on multivariate time series of activity data, and (3) jointly predicting all condition classes via multi-task learning can be leveraged to extract features that generalize across data sets and achieve the highest classification performance. ",Intra-day Activity Better Predicts Chronic Conditions " This paper studies Ebert's hat problem with four players and two colors, where the probabilities of the colors may be different for each player. Our goal is to maximize the probability of winning the game and to describe winning strategies. We use the new concept of an adequate set. The construction of adequate sets is independent of the underlying probabilities and we can use this fact in the analysis of our general case. ",Generalized four person hat game We study the kinetics of 2D Bose gas cooling provided Bose particles interact with 3D phonons. At low temperatures phonon emission is prohibited by energy and momentum conservation. 
We show that both particle-particle scattering and impurity scattering assist the Bose gas cooling. ,Low temperature kinetics of 2D exciton gas cooling in quantum well bilayer " Let $R=\bigoplus_{\underline{n} \in \mathbb{N}^t}R_{\underline{n}}$ be a commutative Noetherian $\mathbb{N}^t$-graded ring, and $L = \bigoplus_{\underline{n}\in\mathbb{N}^t}L_{\underline{n}}$ be a finitely generated $\mathbb{N}^t$-graded $R$-module. We prove that there exists a positive integer $k$ such that for any $\underline{n} \in \mathbb{N}^t$ with $L_{\underline{n}} \neq 0$, there exists a primary decomposition of the zero submodule $O_{\underline{n}}$ of $L_{\underline{n}}$ such that for any $P \in {\rm Ass}_{R_0}(L_{\underline{n}})$, the $P$-primary component $Q$ in that primary decomposition contains $P^k L_{\underline{n}}$. We also give an example which shows that not all primary decompositions of $O_{\underline{n}}$ in $L_{\underline{n}}$ have this property. As an application of our result, we prove that there exists a fixed positive integer $l$ such that the $0^{\rm th}$ local cohomology $H_I^0(L_{\underline{n}}) = \big(0 :_{L_{\underline{n}}} I^l\big)$ for all ideals $I$ of $R_0$ and for all $\underline{n} \in \mathbb{N}^t$. ",Existence of special primary decompositions in multigraded modules " For concertgoers, musical interpretation is the most important factor in determining whether or not we enjoy a classical performance. Every performance includes mistakes---intonation issues, a lost note, an unpleasant sound---but these are all easily forgotten (or unnoticed) when a performer engages her audience, imbuing a piece with novel emotional content beyond the vague instructions inscribed on the printed page. While music teachers use imagery or heuristic guidelines to motivate interpretive decisions, combining these vague instructions to create a convincing performance remains the domain of the performer, subject to the whims of the moment, technical fluency, and taste. 
In this research, we use data from the CHARM Mazurka Project---forty-six professional recordings of Chopin's Mazurka Op. 63 No. 3 by consummate artists---with the goal of elucidating musically interpretable performance decisions. Using information on the inter-onset intervals of the note attacks in the recordings, we apply functional data analysis techniques enriched with prior information gained from music theory to discover relevant features and perform hierarchical clustering. The resulting clusters suggest methods for informing music instruction, discovering listening preferences, and analyzing performances. ",Markov-switching State Space Models for Uncovering Musical Interpretation " Many theories of gravity admit formulations in different, conformally related manifolds, known as the Jordan and Einstein conformal frames. Among them are various scalar-tensor theories of gravity and high-order theories with the Lagrangian $f(R)$ where $R$ is the scalar curvature and $f$ an arbitrary function. It may happen that a singularity in the Einstein frame corresponds to a regular surface S_trans in the Jordan frame, and the space-time is then continued beyond this surface. This phenomenon is called a conformal continuation (CC). We discuss the properties of vacuum static, spherically symmetric configurations of arbitrary dimension in scalar-tensor and $f(R)$ theories of gravity and indicate necessary and sufficient conditions for the existence of solutions admitting a CC. Two cases are distinguished, when S_trans is an ordinary regular sphere and when it is a Killing horizon. Two explicit examples of CCs are presented. ",Generalized theories of gravity and conformal continuations " Projective connections arise from equivalence classes of affine connections under the reparametrization of geodesics. They may also be viewed as quotient systems of the classical geodesic equation. 
After studying the link between integrals of the (classical) geodesic flow and its associated projective connection, we turn our attention to 2-dimensional metrics that admit one projective vector field, i.e. whose local flow sends unparametrized geodesics into unparametrized geodesics. We review and discuss the classification of these metrics, introducing special coordinates on the linear space of solutions to a certain system of partial differential equations, from which such metrics are obtained. Particularly, we discuss those that give rise to free second-order superintegrable Hamiltonian systems, i.e. which admit 2 additional, functionally independent quadratic integrals. We prove that these systems are parametrized by the 2-sphere, except for 6 exceptional points where the projective symmetry becomes homothetic. ",(Super-)integrable systems associated to 2-dimensional projective connections with one projective symmetry " The possibility of re-switching of techniques in Piero Sraffa's intersectoral model, namely the returning capital-intensive techniques with monotonic changes in the profit rate, is traditionally considered as a paradox putting at stake the viability of the neoclassical theory of production. It is argued here that this phenomenon can be rationalized within the neoclassical paradigm. Sectoral interdependencies can give rise to non-monotonic effects of progressive variations in income distribution on relative prices. The re-switching of techniques is, therefore, the result of cost-minimizing technical choices facing returning ranks of relative input prices in full consistency with the neoclassical perspective. ",Solving the Reswitching Paradox in the Sraffian Theory of Capital " We investigate by Lattice Boltzmann methods the effect of inertia on the deformation and break-up of a two-dimensional fluid droplet surrounded by fluid of equal viscosity (in a confined geometry) whose shear rate is increased very slowly. 
We give evidence that in two dimensions inertia is {\em necessary} for break-up, so that at zero Reynolds number the droplet deforms indefinitely without breaking. We identify two different routes to breakup via two-lobed and three-lobed structures respectively, and give evidence for a sharp transition between these routes as parameters are varied. ",Role of inertia in two-dimensional deformation and breakup of a droplet " Given a Lie algebra $\mathfrak{g}$ of an algebraic group over a ring $S,$ we show that the first Kac-Weisfeiler conjecture holds for reductions of $\mathfrak{g} \mod p$ for large enough primes $p,$ reproving a recent result of Martin, Stewart and Topley. As a byproduct of our proof, we show that the center of the skew field of fractions of the the enveloping algebra $\mathfrak{U}\mathfrak{g}_{k}$ for a field $k$ of characteristic $p>>0$ is generated by the $p$-center and by the reduction $\mod p$ of the center of the fraction skew field of $\mathfrak{U}\mathfrak{g}.$ ",A simple proof of the first Kac-Weisfeiler conjecture for algebraic Lie algebras in large characteristics " U-Net, as an encoder-decoder architecture with forward skip connections, has achieved promising results in various medical image analysis tasks. Many recent approaches have also extended U-Net with more complex building blocks, which typically increase the number of network parameters considerably. Such complexity makes the inference stage highly inefficient for clinical applications. Towards an effective yet economic segmentation network design, in this work, we propose backward skip connections that bring decoded features back to the encoder. Our design can be jointly adopted with forward skip connections in any encoder-decoder architecture forming a recurrence structure without introducing extra parameters. 
With the backward skip connections, we propose a U-Net based network family, namely Bi-directional O-shape networks, which set new benchmarks on multiple public medical imaging segmentation datasets. On the other hand, with the plainest architecture (BiO-Net), network computations inevitably increase along with the pre-set recurrence time. We have thus studied the deficiency bottleneck of such a recurrent design and propose a novel two-phase Neural Architecture Search (NAS) algorithm, namely BiX-NAS, to search for the best multi-scale bi-directional skip connections. The ineffective skip connections are then discarded to reduce computational costs and speed up network inference. The final BiX-Net yielded by the search has the lowest network complexity and outperforms other state-of-the-art counterparts by large margins. We evaluate our methods on both 2D and 3D segmentation tasks in a total of six datasets. Extensive ablation studies have also been conducted to provide a comprehensive analysis of our proposed methods. ",Towards Bi-directional Skip Connections in Encoder-Decoder Architectures and Beyond " Such problems as the computation of spectra of spin chains and vibrational spectra of molecules can be written as high-dimensional eigenvalue problems, i.e., when the eigenvector can be naturally represented as a multidimensional tensor. Tensor methods have proven to be an efficient tool for the approximation of solutions of high-dimensional eigenvalue problems; however, their performance deteriorates quickly when the number of eigenstates to be computed increases. We address this issue by designing a new algorithm motivated by the ideas of Riemannian optimization (optimization on smooth manifolds) for the approximation of multiple eigenstates in the tensor-train format, which is also known as the matrix product state representation. The proposed algorithm is implemented in TensorFlow, which allows for both CPU and GPU parallelization. 
",Low-rank Riemannian eigensolver for high-dimensional Hamiltonians " We study the applicability of the time-dependent variational principle in matrix product state manifolds for the long time description of quantum interacting systems. By studying integrable and nonintegrable systems for which the long time dynamics are known we demonstrate that convergence of long time observables is subtle and needs to be examined carefully. Remarkably, for the disordered nonintegrable system we consider the long time dynamics are in good agreement with the rigorously obtained short time behavior and with previous obtained numerically exact results, suggesting that at least in this case the apparent convergence of this approach is reliable. Our study indicates that while great care must be exercised in establishing the convergence of the method, it may still be asymptotically accurate for a class of disordered nonintegrable quantum systems. ",Time-dependent variational principle in matrix-product state manifolds: pitfalls and potential " We show that a zero-sum-free sequence of length $n$ over an abelian group spans at least $2n$ distinct subsequence sums, unless it possesses a rigid, easily-described structure. ",Zero-sum-free sequences with few subsequence sums " We study here the di-muon decay mode of a very light CP-odd Higgs boson of the NMSSM, a1, produced in association with a bottom-antibottom pair and find that, despite small event rates, a significant signal should be extractable from the SM background at the LHC with high luminosity. ",Muon Signals of Very Light CP-odd Higgs states of the NMSSM at the LHC " We introduce a new method for solving maximum likelihood problems through variational calculus, and apply it to the case of recovering an unknown star formation history, $SFR(t)$, from a resulting HR diagram. This approach allows a totally non-parametric solution which has the advantage of requiring no initial assumptions on the $SFR(t)$. 
As a full maximum likelihood statistical model is used, we take advantage of all the information available in the HR diagram, rather than concentrating on particular features such as turn off points or luminosity functions. We test the method using a series of synthetic HR diagrams produced from known $SFR(t)$, and find it to be quite successful under noise conditions comparable to those present in current observations. At this point we restrict the analysis to situations where the metallicity of the system is known, as is the case with the resolved populations of the dwarf spheroidal companions to the Milky Way or the solar neighbourhood Hipparcos data. We also include tests to quantify the way uncertainties in the assumed metallicity, binary fraction and IMF affect our inferences. ",Deriving star formation histories: Inverting HR diagrams through a variational calculus maximum likelihood method " The ideas related to the arrow of time are discussed briefly. I then focus on the prevalent physical mechanism in the evolution of the universe and developments in particle physics, spontaneous symmetry breaking, and show that it explicitly breaks time-reversal symmetry. For simplicity, I do this in a point mechanics gauge theory with symmetry group O(2). The proof of breakdown of time-reversal symmetry relies on the use of a time step function to express the Lagrangian valid for any time. ",Spontaneous Symmetry Breaking Breaks Time-Reversal Symmetry " We discuss the limits g tending to infinity, M tending to infinity with g^3/M = const. of the 1 + 1 dimensional Yukawa model. We take into account the conclusions of the results on bound states of the Yukawa model in this limit (obtained in [7]). We find that the model reduces to an effective nonlocal phi^3 theory in this limit. We observe causality violation in this limit and discuss the result. 
",Causality in 1+1 Dimensional Yukawa Model-II Tiny violations of the Lorentz symmetry of relativity and the associated discrete CPT symmetry could emerge in a consistent theory of quantum gravity such as string theory. Recent evidence for linear polarization in gamma-ray bursts improves existing sensitivities to Lorentz and CPT violation involving photons by factors ranging from ten to a million. ,Constraints on relativity violations from gamma-ray bursts " We present an evaluation of the strong couplings JD^(*)D^(*) and JD^(*)D^(*)pi by an effective field theory of quarks and mesons. These couplings are necessary to calculate pi+J/psi --> D^(*)+barD^(*) cross sections, an important background to the J/psi suppression signal in the quark-gluon plasma. We write down the general effective lagrangian and compute the relevant couplings in the soft pion limit and beyond. ",J/psi couplings to charmed resonances and to pi " While finite non-commutative operator systems lie at the foundation of quantum measurement, they are also tools for understanding geometric iterations as used in the theory of iterated function systems (IFSs) and in wavelet analysis. Key is a certain splitting of the total Hilbert space and its recursive iterations to further iterated subdivisions. This paper explores some implications for associated probability measures (in the classical sense of measure theory), specifically their fractal components. We identify a fractal scale $s$ in a family of Borel probability measures $\mu$ on the unit interval which arises independently in quantum information theory and in wavelet analysis. The scales $s$ we find satisfy $s\in \mathbb{R}_{+}$ and $s\not =1$, some $s <1$ and some $s>1$. We identify these scales $s$ by considering the asymptotic properties of $\mu(J) /| J| ^{s}$ where $J$ are dyadic subintervals, and $| J| \to0$. 
",The Measure of a Measurement " In earlier work we considered methods for predicting future levels of hurricane activity based on the assumption that historical mean activity was at one constant level from 1900 to 1994, and has been at another constant level since then. We now make this model a little more subtle, and account for the possibility of four different levels of mean hurricane activity since 1900. ",Year ahead prediction of US landfalling hurricane numbers: the optimal combination of multiple levels of activity since 1900 " An increasing share of image and video content is analyzed by machines rather than viewed by humans, and therefore it becomes relevant to optimize codecs for such applications where the analysis is performed remotely. Unfortunately, conventional coding tools are challenging to specialize for machine tasks as they were originally designed for human perception. However, neural network based codecs can be jointly trained end-to-end with any convolutional neural network (CNN)-based task model. In this paper, we propose to study an end-to-end framework enabling efficient image compression for remote machine task analysis, using a chain composed of a compression module and a task algorithm that can be optimized end-to-end. We show that it is possible to significantly improve the task accuracy when fine-tuning jointly the codec and the task networks, especially at low bit-rates. Depending on training or deployment constraints, selective fine-tuning can be applied only on the encoder, decoder or task network and still achieve rate-accuracy improvements over an off-the-shelf codec and task network. Our results also demonstrate the flexibility of end-to-end pipelines for practical applications. ","End-to-end optimized image compression for machines, a study" " Let $S$ be a unital associative ring and $S[t;\sigma,\delta]$ be a skew polynomial ring, where $\sigma$ is an injective endomorphism of $S$ and $\delta$ a left $\sigma$-derivation. 
For each $f\in S[t;\sigma,\delta]$ of degree $m>1$ with a unit as leading coefficient, we construct a unital nonassociative algebra whose behaviour reflects the properties of $f$. The algebras obtained yield canonical examples of right division algebras when $f$ is irreducible. We investigate the structure of these algebras. The structure of their right nucleus depends on the choice of $f$. In the classical literature, this nucleus appears as the eigenspace of $f$, and is used to investigate the irreducible factors of $f$. We give necessary and sufficient criteria for skew polynomials of low degree to be irreducible. ",How a nonassociative algebra reflects the properties of a skew polynomial " Recently, Transformers have shown promising performance in various vision tasks. A challenging issue in Transformer design is that global self-attention is very expensive to compute, especially for high-resolution vision tasks. Local self-attention performs attention computation within a local region to improve its efficiency, but the receptive field of a single attention layer is then not large enough, resulting in insufficient context modeling. When observing a scene, humans usually focus on a local region while attending to non-attentional regions at coarse granularity. Based on this observation, we develop the axially expanded window self-attention mechanism that performs fine-grained self-attention within the local window and coarse-grained self-attention in the horizontal and vertical axes, and thus can effectively capture both short- and long-range visual dependencies. ",Axially Expanded Windows for Local-Global Interaction in Vision Transformers Perturbative and non-perturbative terms of the cross sections of ultraperipheral production of lepton pairs in ion collisions are taken into account. 
It is shown that production of low-mass $e^+e^-$ pairs is strongly enhanced (compared to perturbative estimates) due to the non-perturbative Sommerfeld-Gamow-Sakharov (SGS) factor. Coulomb attraction of the non-relativistic components of those pairs leads to a finite value of their mass distribution at the lowest masses. Their annihilation can result in an increased intensity of 511 keV photons. This can be recorded at the NICA collider and is especially relevant for astrophysical questions regarding the 511 keV line emitted from the Galactic center. The analogous effect can be observed in lepton pair production at the LHC. Energy spectra of lepton pairs created in ultraperipheral nuclear collisions and their transverse momenta are calculated. ,Perturbative and non-perturbative effects in ultraperipheral production of lepton pairs " Fuzzing is one of the most popular and widely used techniques to find vulnerabilities in any application. Fuzzers are fast enough, but they still spend a good portion of their time restarting a crashed application and then fuzzing it from the beginning. Fuzzing an application from a point deeper in the execution is also important. To do this, a user needs to take a snapshot of the program while fuzzing it on top of an emulator or virtual machine, or by utilizing a special kernel module to enable checkpointing. Even with this ability, it can be difficult to attach a fuzzer after restoring a checkpoint. As a result, most fuzzers leverage a form of fork-server design. We propose a novel testing architecture that allows users to attach a fuzzer after the program has started running. We do this by natively checkpointing the target application at a point of interest, and attaching the fuzzer after restoring the checkpoint. A fork-server may even be engaged at the point of restoration. This not only improves the throughput of the fuzzing campaign by minimizing startup time, but opens up a new way to fuzz applications. 
With this architecture, a user can take a series of checkpoints at points of interest, and run parallel tests to reduce the overall state-complexity of an individual test. Checkpoints allow us to begin fuzzing from a deeper point in the execution path, omitting prior execution from the required coverage path. This and other checkpointing techniques are described in the paper to help improve fuzzing. ",An Architecture for Exploiting Native User-Land Checkpoint-Restart to Improve Fuzzing " The global pandemic of COVID-19 is continuing to have a significant effect on the well-being of the global population, increasing the demand for rapid testing, diagnosis, and treatment. Along with COVID-19, other etiologies of pneumonia and tuberculosis constitute additional challenges to the medical system. In this regard, the objective of this work is to develop a new deep transfer learning pipeline to diagnose patients with COVID-19, pneumonia, and tuberculosis, based on chest x-ray images. We observed that in some instances DenseNet and ResNet have orthogonal performance. In our proposed model, we have created an extra layer with convolutional neural network blocks to combine these two models to establish superior performance over either model. The same strategy can be useful in other applications where two competing networks with complementary performance are observed. We have tested the performance of our proposed network on two-class (pneumonia vs healthy), three-class (including COVID-19), and four-class (including tuberculosis) classification problems. The proposed network has been able to successfully classify these lung diseases in all four datasets and has provided significant improvement over the benchmark networks of DenseNet, ResNet, and Inception-V3. These novel findings can deliver a state-of-the-art pre-screening fast-track decision network to detect COVID-19 and other lung pathologies. 
","DenResCov-19: A deep transfer learning network for robust automatic classification of COVID-19, pneumonia, and tuberculosis from X-rays" " We have selectively grown thin epitaxial GaAs films on Ge substrates with the aid of a 200 nm thin SiO2 mask layer. The selectively grown structures have lateral sizes ranging from 1 um width up to large areas of 1 by 1 mm2. The growth is fully selective, thanks to an optimized growth procedure, consisting of a 13 nm thin nucleation layer grown at high pressure, followed by low pressure growth of GaAs. This growth procedure inhibits the nucleation of GaAs on the mask area and is a good compromise between reduction of loading effects and inhibition of anti-phase domain growth in the GaAs. Nevertheless, both microscopic and macroscopic loading effects can still be observed in cross-section SEM images and profilometer measurements. X-ray diffraction and low temperature photoluminescence measurements demonstrate the good microscopic characteristics of the selectively grown GaAs. ",Selective Epitaxial Growth of GaAs on Ge Substrates with a SiO2 Pattern " We reinvestigate the correlation between black hole mass and bulge concentration. With an increased galaxy sample, updated estimates of galaxy distances, black hole masses, and Sersic indices `n' - a measure of concentration - we perform a least-squares regression analysis to obtain a relation suitable for the purpose of predicting black hole masses in other galaxies. In addition to the linear relation, log(M_bh) = 7.81(+/-0.08) + 2.69(+/-0.28)[log(n/3)] with epsilon_(intrin)=0.31 dex, we investigated the possibility of a higher order M_bh-n relation, finding the second order term in the best-fitting quadratic relation to be inconsistent with a value of zero at greater than the 99.99% confidence level.
The optimal relation is given by log(M_bh) = 7.98(+/-0.09) + 3.70(+/-0.46)[log(n/3)] - 3.10(+/-0.84)[log(n/3)]^2, with epsilon_(intrin)=0.18 dex and a total absolute scatter of 0.31 dex. When extrapolated, the quadratic relation predicts black holes with masses of ~10^3 M_sun in n=0.5 dwarf elliptical galaxies, compared to ~10^5 M_sun from the linear relation, and an upper bound on the largest black hole masses in the local universe, equal to 1.2^{+2.6}_{-0.4}x10^9 M_sun. In addition, we show that the nuclear star clusters at the centers of low-luminosity elliptical galaxies follow an extrapolation of the same quadratic relation. Moreover, we speculate that the merger of two such nucleated galaxies, accompanied by the merger and runaway collision of their central star clusters, may result in the late-time formation of some supermassive black holes. Finally, we predict the existence of, and provide equations for, a relation between M_bh and the central surface brightness of the host bulge. ",A log-quadratic relation for predicting supermassive black hole masses from the host bulge Sersic index " In this article we describe the $G\times G$-equivariant $K$-ring of $X$, where $X$ is a regular compactification of a connected complex reductive algebraic group $G$. Furthermore, in the case when $G$ is a semisimple group of adjoint type, and $X$ its wonderful compactification, we describe its ordinary $K$-ring $K(X)$. More precisely, we prove that $K(X)$ is a free module over $K(G/B)$ of rank equal to the cardinality of the Weyl group. We further give an explicit basis of $K(X)$ over $K(G/B)$, and also determine the structure constants with respect to this basis. ",Equivariant K-theory of compactifications of algebraic groups " Let $\hat{\boldsymbol x}$ be a normalised standard complex Gaussian vector, and project an Hermitian matrix $A$ onto the hyperplane orthogonal to $\hat{\boldsymbol x}$. In a recent paper Faraut [Tunisian J. Math.
\textbf{1} (2019), 585--606] has observed that the corresponding eigenvalue PDF has an almost identical structure to the eigenvalue PDF for the rank 1 perturbation $A + b \hat{\boldsymbol x} \hat{\boldsymbol x}^\dagger$, and asks for an explanation. We provide this by way of a common derivation involving the secular equations and associated Jacobians. This applies too in related settings, for example when $\hat{\boldsymbol x}$ is a real Gaussian and $A$ Hermitian, and also in a multiplicative setting $A U B U^\dagger$ where $A, B$ are fixed unitary matrices with $B$ a multiplicative rank 1 deviation from unity, and $U$ is a Haar distributed unitary matrix. Specifically, in each case there is a dual eigenvalue problem giving rise to a PDF of almost identical structure. ",Co-rank 1 projections and the randomised Horn problem " We introduce ""Simulations Beyond The Local Universe"" (SIBELIUS) that connect the Local Group to its cosmic environment. We show that introducing hierarchical small-scale perturbations to a density field constrained on large scales by observations provides an efficient way to explore the sample space of Local Group analogues. From more than 60 000 simulations, we identify a hierarchy of Local Group characteristics emanating from different scales: the total mass, orientation, orbital energy and the angular momentum are largely determined by modes above $\lambda$ = 1.6 comoving Mpc (cMpc) in the primordial density field. Smaller scale variations are mostly manifest as perturbations to the MW-M31 orbit, and we find that the observables commonly used to describe the Local Group -- the MW-M31 separation and radial velocity -- are transient and depend on specifying scales down to 0.2 cMpc in the primordial density field. We further find that the presence of M33/LMC analogues significantly affects the MW-M31 orbit and its sensitivity to small-scale perturbations.
We construct initial conditions that lead to the formation of a Local Group whose primary observables precisely match the current observations. ",The SIBELIUS Project: E Pluribus Unum " This paper presents a theory for the behaviour of isotropic-hardening/softening elastoplastic materials that do not have a preferred reference configuration. In spite of important differences, many ingredients of classical plasticity are present. Main features are: the elastic and plastic responses are given by solutions of hypoelastic differential equations, no decomposition of the deformation into elastic and plastic parts is done from the start, the hardening rule is an outcome, and the principle of material objectivity is obeyed. An important result is the existence of a limit surface that divides the stress space into regions of hardening and softening and is composed of equilibrium points of the differential equation of plastic response. ",An alternative mathematical theory of elastoplastic behaviour " The development of high-throughput sequencing and targeted therapies has led to the emergence of personalized medicine: a patient's molecular profile or the presence of a specific biomarker of drug response will correspond to a treatment recommendation made either by a physician or by a treatment assignment algorithm. The growing number of such algorithms raises the question of how to quantify their clinical impact knowing that a personalized medicine strategy will inherently include different versions of treatment. We thus specify an appropriate causal framework with multiple versions of treatment to define the causal effects of interest for precision medicine strategies and estimate them emulating clinical trials with observational data. Therefore, we determine whether the treatment assignment algorithm is more efficient than different control arms: gold standard treatment, observed treatments or random assignment of targeted treatments. 
Causal estimates of the precision medicine effects are first evaluated on simulated data and they demonstrate lower biases and variances compared with naive estimation of the difference in expected outcome between treatment arms. The various simulation scenarios also point out the different bias sources depending on the clinical situation (heterogeneity of response, assignment of observed treatments, etc.). An RShiny interactive application is also provided to further explore other user-defined scenarios. The method is then applied to data from patient-derived xenografts (PDX): each patient tumour is implanted in several immunodeficient cloned mice later treated with different drugs, thus providing access to all corresponding drug sensitivities for all patients. Access to these unique pre-clinical data emulating counterfactual outcomes allows us to validate the reliability of causal estimates obtained with the proposed method. ",Causal inference with multiple versions of treatment and application to personalized medicine " For a sequence p=(p(1),p(2), ...) let G(n,p) denote the random graph with vertex set {1,2, ...,n} in which two vertices i, j are adjacent with probability p(|i-j|), independently for each pair. We study how the convergence of probabilities of first order properties of G(n,p) can be affected by the behaviour of p and the strength of the language we use. ",Convergence in homogeneous random graphs " We present a comparison between theoretical, frequency-dependent damping rates and linewidths of radial-mode oscillations in red-giant stars located in the open cluster NGC 6819. The calculations adopt a time-dependent non-local convection model, with the turbulent pressure profile being calibrated to results of 3D hydrodynamical simulations of stellar atmospheres. The linewidths are obtained from extensive peakbagging of Kepler lightcurves. These observational results are of unprecedented quality owing to the long continuous observations by Kepler.
The uniqueness of the Kepler mission also means that, for asteroseismic properties, this is the best data that will be available for a long time to come. We therefore take great care in modelling nine RGB stars in NGC 6819 using information from 3D simulations to obtain realistic temperature stratifications and calibrated turbulent pressure profiles. Our modelled damping rates reproduce well the Kepler observations, including the characteristic depression in the linewidths around the frequency of maximum oscillation power. Furthermore, we thoroughly test the sensitivity of the calculated damping rates to changes in the parameters of the nonlocal convection model. ",Modelling linewidths of Kepler red giants in NGC 6819 " The utilization of social media in epidemic surveillance has been well established. Nonetheless, bias is often introduced when pre-defined lexicons are used to retrieve relevant corpus. This study introduces a framework aimed at curating extensive dictionaries of medical colloquialisms and Unified Medical Language System (UMLS) concepts. The framework comprises three modules: a BERT-based Named Entity Recognition (NER) model that identifies medical entities from social media content, a deep-learning powered normalization module that standardizes the extracted entities, and a semi-supervised clustering module that assigns the most probable UMLS concept to each standardized entity. We applied this framework to COVID-19-related tweets from February 1, 2020, to April 30, 2022, generating a symptom dictionary (available at https://github.com/ningkko/UMLS_colloquialism/) composed of 9,249 standardized entities mapped to 876 UMLS concepts and 38,175 colloquial expressions. This framework demonstrates encouraging potential in addressing the constraints of keyword matching information retrieval in social media-based public health research. 
",Streamlining Social Media Information Retrieval for Public Health Research with Deep Learning We describe the quantum state of a Bose-Einstein condensate at zero temperature. By evaluating the Q-function we show that the ground state of Bose-Einstein condensate under the Hartree approximation is squeezed. We find that multimode Schroedinger cat states are generated as the condensate evolves in a ballistic expansion. ,Characterisation of the dynamical quantum state of a zero temperature Bose-Einstein condensate " Let $S(\Lambda)$ be the cyclotomic q-Schur algebra associated to the Ariki-Koike algebra $H$. We construct a certain subalgebra $S^0(\Lambda)$ of $S(\Lambda)$, and show that it is a standardly based algebra in the sense of Du and Rui. $S^0(\Lambda)$ has a natural quotient $\bar{S^0}(\Lambda)$, which turns out to be a cellular algebra. In the case where the modified Ariki-Koike algebra $H^{\flat}$ is defined, $\bar{S^0}(\Lambda)$ coincides with the cyclotomic q-Schur algebra associated to $H^{\flat}$. In this paper, we discuss a relationship among the decomposition numbers of $S(\Lambda)$, $S^0(\Lambda)$ and $\bar{S^0}(\Lambda)$. In particular, we show that some important part of the decomposition matrix of $S(\Lambda)$ coincides with a part of the decomposition matrix of $\bar{S^0}(\Lambda)$. ",On decomposition numbers of the cyclotomic q-Schur algebras " Metasurfaces (MSs) have been utilized to manipulate different properties of electromagnetic waves. By combining local control over the wave amplitude, phase, and polarization into a single tunable structure, a multi-functional and reconfigurable metasurface can be realized, capable of full control over incident radiation. Here, we experimentally validate a multi-functional metasurface architecture for the microwave regime, where in principle variable loads are connected behind the backplane to reconfigurably shape the complex surface impedance. 
As a proof-of-concept step, we fabricate several metasurface instances with static loads in different configurations (surface mount capacitors and resistors of different values in different connection topologies) to validate the approach and showcase the different achievable functionalities. Specifically, we show perfect absorption for oblique incidence (both polarizations), broadband linear polarization conversion, and beam splitting, demonstrating control over the amplitude, polarization state, and wavefront, respectively. Measurements are performed in the 4-18 GHz range inside an anechoic chamber and show good agreement with theoretically-anticipated results. Our results clearly demonstrate the practical potential of the proposed architecture for reconfigurable electromagnetic wave manipulation. ","Multi-functional metasurface architecture for amplitude, polarization and wavefront control" " The acoustic radiation force produced by ultrasonic waves is the ""workhorse"" of particle manipulation in acoustofluidics. Nonspherical particles are also subjected to a mean torque known as the acoustic radiation torque. Together they constitute the mean-acoustic fields exerted on the particle. Analytical methods alone cannot calculate these fields on arbitrarily shaped particles in actual fluids and are no longer fit for purpose. Here, a semi-analytical approach is introduced for handling subwavelength axisymmetric particles immersed in an isotropic Newtonian fluid. The obtained mean-acoustic fields depend on the scattering coefficients that reflect the monopole and dipole modes. These coefficients are determined by numerically solving the scattering problem. Our method is benchmarked by comparison with the exact result for a subwavelength rigid sphere in water. Besides, a more realistic case of a red blood cell immersed in blood plasma under a standing ultrasonic wave is investigated with our methodology.
",Mean-acoustic fields exerted on a subwavelength axisymmetric particle " In this paper, we first give a simple characterization of special curves with the help of the rotation minimizing frame (RMF). Rectifying-type curves are also generalized to the n-dimensional space $R_{n}$. ",Rectifying-type curves and rotation minimizing frame $R_{n}$ " Eigenmode analysis is one of the most promising methods of analyzing large data sets in ongoing and near-future galaxy surveys. In such analyses, a fast evaluation of the correlation matrix in arbitrary cosmological models is crucial. The observational effects, including peculiar velocity distortions in redshift space, light-cone effects, selection effects, and effects of the complex shape of the survey geometry, should be taken into account in the analysis. In the framework of the linear theory of gravitational instability, we provide the methodology to quickly compute the correlation matrix. Our methods are not restricted to shallow redshift surveys; arbitrarily deep samples can be dealt with as well. Therefore, our methods are useful in constraining the geometry of the universe and the dark energy component, as well as the power spectrum of galaxies, since ongoing and near-future galaxy surveys probe the universe at intermediate to deep redshifts, z ~ 0.2--5. In addition to the detailed methods to compute the correlation matrix in 3-dimensional redshift surveys, methods to calculate the matrix in 2-dimensional projected samples are also provided. Prospects of applying our methods to likelihood estimation of the cosmological parameters are discussed. ",Eigenmode Analysis of Galaxy Distributions in Redshift Space " In the presence of fermionic matter the topologically distinct vacua of the standard model are metastable and can decay by tunneling through the sphaleron barrier.
This process annihilates one fermion per doublet due to the anomalous non-conservation of baryon and lepton currents and is accompanied by a production of gauge and Higgs bosons. We present a numerical method to obtain local bounce solutions which minimize the Euclidean action in the space of all configurations connecting two adjacent topological sectors. These solutions determine the decay rate and the configuration of the fields after the tunneling. We also follow the real time evolution of this configuration and analyze the spectrum of the created bosons. If the matter density exceeds some critical value, the exponentially suppressed tunneling triggers off an avalanche producing an enormous amount of bosons. ",Spontaneous annihilation of high-density matter in the electroweak theory " As LOFAR has shown, using a dense array of radio antennas for detecting extensive air showers initiated by cosmic rays in the Earth's atmosphere makes it possible to measure the depth of shower maximum for individual showers with a statistical uncertainty less than $20\,g/cm^2$. This allows detailed studies of the mass composition in the energy region around $10^{17}\,eV$ where the transition from a Galactic to an Extragalactic origin could occur. Since SKA1-low will provide a much denser and very homogeneous antenna array with a large bandwidth of $50-350\,MHz$ it is expected to reach an uncertainty on the $X_{\max}$ reconstruction of less than $10\,g/cm^2$. We present first results of a simulation study with focus on the potential to reconstruct the depth of shower maximum for individual showers to be measured with SKA1-low. In addition, possible influences of various parameters such as the numbers of antennas included in the analysis or the considered frequency bandwidth will be discussed. 
",Initial simulation study on high-precision radio measurements of the depth of shower maximum with SKA1-low " Five nontrivial stationary points are found for maximal gauged N=16 supergravity in three dimensions with gauge group $SO(8)\times SO(8)$ by restricting the potential to a submanifold of the space of $SU(3)\subset(SO(8)\times SO(8))_{\rm diag}$ singlets. The construction presented here uses the embedding of $E_{7(+7)}\subset E_{8(+8)}$ to lift the analysis of N=8, D=4 supergravity performed by N. Warner to N=16, D=3, and hence, these stationary points correspond to some of the known extrema of gauged N=8, D=4 supergravity. ",Some stationary points of gauged N=16 D=3 supergravity " With the propagation of sensor devices applied in smart homes, activity recognition has attracted huge interest, and most existing works assume that there is only one inhabitant. In reality, however, there are generally multiple residents at home, which brings greater challenges to activity recognition. In addition, many conventional approaches rely on manual time series data segmentation, ignoring the inherent characteristics of events, and their heuristic hand-crafted feature generation algorithms struggle to exploit distinctive features to accurately classify different activities. To address these issues, we propose an end-to-end Tree-Structure Convolutional neural network based framework for Multi-Resident Activity Recognition (TSC-MRAR). First, we treat each sample as an event and obtain the current event embedding through the previous sensor readings in the sliding window without splitting the time series data. Then, in order to automatically generate the temporal features, a tree-structure network is designed to derive the temporal dependence of nearby readings. The extracted features are fed into the fully connected layer, which can jointly learn the resident labels and the activity labels simultaneously.
Finally, experiments on CASAS datasets demonstrate the high performance of our model in multi-resident activity recognition compared to state-of-the-art techniques. ",A Tree-structure Convolutional Neural Network for Temporal Features Extraction on Sensor-based Multi-resident Activity Recognition " Gyrochronology allows the derivation of ages for cool main sequence stars based on their observed rotation periods and masses, or a suitable proxy thereof. It is increasingly well-explored for FGK stars, but requires further measurements for older ages and K-M-type stars. We study the nearby, 3 Gyr-old open cluster Ruprecht 147 to compare it with the previously-studied, but far more distant, NGC 6819 cluster, and especially to measure cooler stars than was previously possible there. We constructed an inclusive list of 102 cluster members from prior work, including Gaia DR2, for which light curves were also obtained during Campaign 7 of the Kepler/K2 space mission. [...] Periodic signals are found for 32 stars, 21 of which are considered to be both highly reliable and to represent single, or effectively single, Ru147 stars. These stars cover the spectral types from late-F to mid-M stars, and they have periods ranging from 6d-32d, allowing for a comparison of Ruprecht 147 to both of the other open clusters and to models of rotational spindown. The derived rotation periods connect reasonably to, overlap with, and extend to lower masses the known rotation period distribution of the 2.5 Gyr-old cluster NGC 6819. The data confirm that cool stars lie on a single surface in rotation period-mass-age space, and they simultaneously challenge its commonly assumed shape. The shape at the low mass region of the color-period diagram at the age of Ru147 favors a recently-proposed model, which requires a third mass-dependent timescale in addition to the two timescales required by a former model, suggesting that a third physical process is required to model rotating stars effectively.
",Rotation periods for cool stars in the open cluster Ruprecht 147 (NGC 6774): Implications for gyrochronology " We investigate numerically the zero-temperature physics of the one-dimensional Bose-Hubbard model in an incommensurate cosine potential, recently realized in experiments with cold bosons in optical superlattices [L. Fallani et al., Phys. Rev. Lett. 98, 130404 (2007)]. An incommensurate cosine potential has intermediate properties between a truly periodic and a fully random potential, displaying a characteristic length scale (the quasi-period) which is shown to set a finite lower bound to the excitation energy of the system at special incommensurate fillings. This leads to the emergence of gapped incommensurate band-insulator (IBI) phases along with gapless Bose-glass (BG) phases for strong quasi-periodic potential, both for hardcore and softcore bosons. Enriching the spatial features of the potential by the addition of a second incommensurate component appears to remove the IBI regions, stabilizing a continuous BG phase over an extended parameter range. Moreover, we discuss the validity of the local-density approximation in the presence of a parabolic trap, clarifying the notion of a local BG phase in a trapped system; we investigate the behavior of first- and second-order coherence upon increasing the strength of the quasi-periodic potential; and we discuss the ab-initio derivation of the Bose-Hubbard Hamiltonian with quasi-periodic potential starting from the microscopic Hamiltonian of bosons in an incommensurate superlattice. ",Bosons in one-dimensional incommensurate superlattices " In many countries, culture, practice or regulations inhibit the co-presence of relatives within the university faculty. We test the legitimacy of such attitudes and provisions, investigating the phenomenon of nepotism in Italy, a nation with high rates of favoritism.
We compare the individual research performance of ""children"" who have ""parents"" in the same university against that of the ""non-children"" with the same academic rank and seniority, in the same field. The results show non-significant differences in performance. Analyses of career advancement show that children's research performance is on average superior to that of their colleagues who did not advance. The study's findings do not rule out the existence of nepotism, which has actually been recorded in a low percentage of cases, but neither do they prove its most serious presumed consequence, namely that relatives who are poor performers are getting ahead of non-relatives who are better performers. In light of these results, many attitudes and norms concerning parental ties in academia should be reconsidered. ",Relatives in the same university faculty: nepotism or merit? " A method for estimating the performance of low-density parity-check (LDPC) codes decoded by hard-decision iterative decoding algorithms on binary symmetric channels (BSC) is proposed. Based on the enumeration of the smallest weight error patterns that cannot all be corrected by the decoder, this method estimates both the frame error rate (FER) and the bit error rate (BER) of a given LDPC code with very good precision for all crossover probabilities of practical interest. Through a number of examples, we show that the proposed method can be effectively applied to both regular and irregular LDPC codes and to a variety of hard-decision iterative decoding algorithms. Compared with the conventional Monte Carlo simulation, the proposed method has a much smaller computational complexity, particularly for lower error rates. ",Estimation of Bit and Frame Error Rates of Low-Density Parity-Check Codes on Binary Symmetric Channels " We suggest a new method for estimating the fractal dimension of the spatial distribution of galaxies, the method of selected cylinders.
We show the capabilities of this method by constructing a two-point conditional column density for galaxies with known redshifts from the LEDA database. The fractal dimension of a sample of LEDA and EDR SDSS galaxies has been estimated to be D = 2.1 for cylinder lengths of 200 Mpc. A major advantage of the suggested method is that it allows scales comparable to the catalog depth to be analyzed for galaxy surveys in the form of conical sectors and small fields in the sky. ",The Method of a Two-Point Conditional Column Density for Estimating the Fractal Dimension of the Galaxy Distribution " Modulo inverse is an important arithmetic operation. Many famous algorithms in public key cryptography require computing the modulo inverse. It is argued that Jiushao Qin's method of DaYan deriving one provides the most concise and transparent way of computing the modulo inverse. Based on the rule of taking the least positive remainder in division, this paper presents a more precise algorithmic description of the method of DaYan deriving one to reflect Qin's original idea. Our form of the algorithm is straightforward and different from the ones in the literature. Some additional information can be revealed easily from the process of DaYan deriving one, e.g., the invariance property of the permanent of the state and a natural connection to continued fractions. A comparison of Qin's algorithm and the modern form of the Extended Euclidean algorithm is also given. Since DaYan deriving one is the key technical ingredient of Jiushao Qin's DaYan aggregation method (aka the Chinese Remainder Theorem), we include some explanation of the latter as well. ",On the Algorithmic Significance and Analysis of the Method of DaYan Deriving One " The present contribution investigates shape optimisation problems for a class of semilinear elliptic variational inequalities with Neumann boundary conditions.
Sensitivity estimates and material derivatives are first derived in an abstract operator setting where the operators are defined on polyhedral subsets of reflexive Banach spaces. The results are then refined for variational inequalities arising from minimisation problems for certain convex energy functionals considered over upper obstacle sets in $H^1$. One particularity is that we allow for dynamic obstacle functions which may arise from other optimisation problems. We prove a strong convergence property for the material derivative and establish state-shape derivatives under regularity assumptions. Finally, as a concrete application from continuum mechanics, we show how the dynamic obstacle case can be used to treat shape optimisation problems for time-discretised brittle damage models for elastic solids. We derive a necessary optimality system for optimal shapes whose state variables approximate desired damage patterns and/or displacement fields. ",Shape optimisation for a class of semilinear variational inequalities with applications to damage models " We study effective gravitational F-terms, obtained by integrating a $U(N)$ adjoint chiral superfield $\Phi$ coupled to the ${\cal N}=1$ gauge chiral superfield $W_\alpha$ and supergravity, to arbitrary orders in the gravitational background. The latter includes in addition to the ${\cal N}=1$ Weyl superfield $G_{\alpha\beta\gamma}$, the self-dual graviphoton field strength $F_{\alpha\beta}$ of the parent, broken ${\cal N}=2$ theory. We first study the chiral ring relations resulting from the above non-standard gravitational background and find agreement, for gauge invariant operators, with those obtained from the dual closed string side via Bianchi identities for ${\cal N}=2$ supergravity coupled to vector multiplets.
We then derive generalized anomaly equations for connected correlators on the gauge theory side, which allow us to solve for the basic one-point function $\langle {\rm Tr} W^2/(z-\Phi)\rangle$ to all orders in $F^2$. By generalizing the matrix model loop equation to the generating functional of connected correlators of resolvents, we prove that the gauge theory result coincides with the genus expansion of the associated matrix model, after identifying the expansion parameters on the two sides. ",Gravitational F-terms through anomaly equations and deformed chiral rings " We introduce Minimal Achievable Sufficient Statistic (MASS) Learning, a training method for machine learning models that attempts to produce minimal sufficient statistics with respect to a class of functions (e.g. deep networks) being optimized over. In deriving MASS Learning, we also introduce Conserved Differential Information (CDI), an information-theoretic quantity that - unlike standard mutual information - can be usefully applied to deterministically-dependent continuous random variables like the input and output of a deep network. In a series of experiments, we show that deep networks trained with MASS Learning achieve competitive performance on supervised learning and uncertainty quantification benchmarks. ",Minimal Achievable Sufficient Statistic Learning " Given the rapid advances in unmanned aerial vehicles, or drones, and increasing need to monitor traffic at a city level, one of the current research gaps is how to systematically deploy drones over multiple periods. We propose a real-time data-driven approach: we formulate the first deterministic arc-inventory routing problem and derive its stochastic dynamic policy. The policy is expected to be of greatest value in scenarios where uncertainty is highest and costliest, such as city traffic monitoring during major events. 
The Bellman equation for an approximation of the proposed inventory routing policy is formulated as a selective vehicle routing problem. We propose an approximate dynamic programming algorithm based on Least Squares Monte Carlo simulation to find that policy. The algorithm has been modified so that the least squares dependent variable is defined to be the ""expected stock out cost upon the next replenishment"". The new algorithm is tested on 30 simulated instances of real-time trajectories over 5 time periods of the selective VRP to evaluate the proposed policy and algorithm. Computational results on the selected instances show that the algorithm can outperform the myopic policy by 23% to 28% over those tests, depending on the parametric design. Further tests are conducted on classic benchmark arc routing problem instances. The 11-link instance gdb19 is expanded into a sequential 15-period stochastic dynamic example and used to demonstrate why a naive static multi-period deployment plan would not be effective in real networks. ",Dynamic UAV-based traffic monitoring under uncertainty as a stochastic arc-inventory routing policy " Photon pair generation in silicon photonic integrated circuits relies on four-wave mixing via the third-order nonlinearity. Due to phase matching requirements and group velocity dispersion, this method has typically required TE-polarized light. Here, we demonstrate TM-polarized photon pair production in linearly uncoupled silicon resonators with more than an order of magnitude more dispersion than previous work. We achieve measured rates above 2.8 kHz and a heralded second-order correlation of $g^{(2)}(0) = 0.0442 \pm 0.0042$. This method enables phase matching in dispersive media and paves the way for novel entanglement generation in silicon photonic devices. 
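The Least Squares Monte Carlo regression step in the UAV routing abstract above — regressing the "expected stock out cost upon the next replenishment" on simulated states — can be sketched in a few lines. The state features, cost model, and linear basis below are hypothetical placeholders for illustration, not the authors' implementation.

```python
import numpy as np

# Minimal sketch of the Least Squares Monte Carlo (LSM) regression step:
# approximate the continuation value (expected stock-out cost upon the next
# replenishment) as a linear function of simulated state features.
rng = np.random.default_rng(0)

n_paths = 500
# Hypothetical state on each simulated path: (inventory level, demand-rate estimate).
inventory = rng.uniform(0.0, 10.0, n_paths)
demand_rate = rng.uniform(0.5, 2.0, n_paths)

# Simulated realized stock-out cost on each path (placeholder dynamics):
# cost grows when demand outpaces inventory, plus noise.
realized_cost = np.maximum(demand_rate * 5.0 - inventory, 0.0) + rng.normal(0.0, 0.2, n_paths)

# Regression basis: [1, inventory, demand_rate] (a simple linear basis).
X = np.column_stack([np.ones(n_paths), inventory, demand_rate])
coef, *_ = np.linalg.lstsq(X, realized_cost, rcond=None)

def continuation_value(inv, rate):
    """Estimated expected stock-out cost for a given state."""
    return coef[0] + coef[1] * inv + coef[2] * rate

# A nearly empty inventory facing high demand should look costlier
# than a full one facing low demand.
print(continuation_value(1.0, 2.0) > continuation_value(9.0, 0.5))
```

In a full ADP loop this regression would be refitted backwards through the time periods, with the fitted continuation value steering each period's selective-VRP decision.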
",Nonlinear Photon Pair Generation in a Highly Dispersive Medium " Recent neutron scattering experiments suggested that frustrated magnetic interactions give rise to antiferromagnetic spiral and fractional skyrmion lattice phases in MnSc$_2$S$_4$. Here, to trace the signatures of these modulated phases, we studied the spin excitations of MnSc$_2$S$_4$ by THz spectroscopy at 300 mK up to 12 T. We found a single magnetic resonance with linearly increasing frequency in field. The corresponding $g$-factor of Mn$^{2+}$ ions $g$ = 1.96, and the absence of other resonances imply very weak anisotropies and negligible contribution of higher harmonics to the spiral state. The significant difference between the dc magnetic susceptibility and the lowest-frequency ac susceptibility in our experiment implies the existence of mode(s) below 100 GHz. ",Spin excitations in the magnetically ordered phases of MnSc$_2$S$_4$ " HDR+ is an image processing pipeline presented by Google in 2016. At its core lies a denoising algorithm that uses a burst of raw images to produce a single higher quality image. Since it is designed as a versatile solution for smartphone cameras, it does not necessarily aim for the maximization of standard denoising metrics, but rather for the production of natural, visually pleasing images. In this article, we specifically discuss and analyze the HDR+ burst denoising algorithm architecture and the impact of its various parameters. With this publication, we provide an open source Python implementation of the algorithm, along with an interactive demo. ",An Analysis and Implementation of the HDR+ Burst Denoising Method We establish compactness estimates for $\bar{\partial}_{M}$ on a compact pseudoconvex CR-submanifold $M$ of $\mathbb{C}^{n}$ of hypersurface type that satisfies the (analogue of the) geometric sufficient conditions for compactness of the $\bar{\partial}$-Neumann operator given by the authors earlier. 
These conditions are formulated in terms of certain short-time flows in complex tangential directions. ,Geometric sufficient conditions for compactness of the complex Green operator " In this note we consider a one-dimensional quantum mechanical particle constrained by a parabolic well perturbed by a Gaussian potential. As the related Birman-Schwinger operator is trace class, the Fredholm determinant can be exploited in order to compute the modified eigenenergies, which differ from those of the harmonic oscillator due to the presence of the Gaussian perturbation. By taking advantage of Wang's results on scalar products of four eigenfunctions of the harmonic oscillator, it is possible to evaluate quite accurately the two lowest-lying eigenvalues as functions of the coupling constant $\lambda$. ",The two lowest eigenvalues of the harmonic oscillator in the presence of a Gaussian perturbation " The aim of this note is to show how the introduction of certain tableaux, called Catalan alternative tableaux, provides a very simple and elegant description of the product in the Hopf algebra of binary trees defined by Loday and Ronco. Moreover, we use this description to introduce a new associative product on the space of binary trees. ",The product of trees in the Loday-Ronco algebra through Catalan alternative tableaux " We report the first axion haloscope search with toroidal geometry. In this pioneering search, we exclude the axion-photon coupling $g_{a\gamma\gamma}$ down to about $5\times10^{-8}$ GeV$^{-1}$ over the axion mass range from 24.7 to 29.1 $\mu$eV at a 95\% confidence level. The prospects for axion dark matter searches with larger scale toroidal geometry are also considered. ",First axion dark matter search with toroidal geometry " Goetze and Woelfle (GW) wrote the conductivity in terms of a memory function M as (i n e^2/m)/(omega+M(omega)), where M=i/tau in the Drude limit. 
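As a quick illustration of the memory-function form just quoted, one can check numerically that setting M(omega) = i/tau reproduces the familiar Drude conductivity; the unit values of n, e and m below are placeholders for illustration, not taken from the paper.

```python
import numpy as np

# Check that sigma(omega) = (i n e^2 / m) / (omega + M(omega)) with M = i/tau
# reduces to the Drude formula sigma(omega) = (n e^2 tau / m) / (1 - i omega tau).
# n, e, m are set to 1 here purely for illustration.
n = e = m = 1.0
tau = 2.0

def sigma_memory(omega, M):
    """Memory-function form of the conductivity."""
    return (1j * n * e**2 / m) / (omega + M)

def sigma_drude(omega):
    """Standard Drude conductivity with relaxation time tau."""
    return (n * e**2 * tau / m) / (1.0 - 1j * omega * tau)

omegas = np.linspace(0.1, 5.0, 50)
diff = np.abs(sigma_memory(omegas, 1j / tau) - sigma_drude(omegas))
print(diff.max() < 1e-12)
```

A frequency-dependent M(omega) then generalizes the Drude formula, which is the starting point of the abstract's comparison between -M and the electron self-energy.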
The analytic properties of -M are the same as those of the self-energy of a retarded Green's function. In the approximate treatment of GW, -M closely resembles a self-energy, with some differences; e.g., the imaginary part is larger by a factor of two. The correct relation between -M and the self-energy is known for the electron-phonon case and is conjectured to be similar for other perturbations. When vertex corrections are ignored, an exact relation is known. A derivation using Matsubara temperature Green's functions is given. ",Electron Self-Energy and Generalized Drude Formula for Infrared Conductivity of Metals " Hubbard ladders are an important stepping stone to the physics of the two-dimensional Hubbard model. While many of their properties are accessible to numerical and analytical techniques, the question of whether weakly hole-doped Hubbard ladders are dominated by superconducting or charge-density-wave correlations has so far eluded a definitive answer. In particular, previous numerical simulations of Hubbard ladders have seen a much faster decay of superconducting correlations than expected based on analytical arguments. We revisit this question using a state-of-the-art implementation of the density matrix renormalization group algorithm that allows us to simulate larger system sizes with higher accuracy than before. Performing careful extrapolations of the results, we obtain improved estimates for the Luttinger liquid parameter and the correlation functions at long distances. Our results confirm that, as suggested by analytical considerations, superconducting correlations become dominant in the limit of very small doping. ",Pair Correlations in Doped Hubbard Ladders " The number of academic papers being published has been increasing exponentially in recent years, and recommending adequate citations to assist researchers in writing papers is a non-trivial task. 
Conventional approaches may not be optimal, as the recommended papers may already be known to the users, or be solely relevant to the surrounding context but not to other ideas discussed in the manuscript. In this work, we propose a novel embedding algorithm, DocCit2Vec, along with the new concept of ``structural context'', to tackle the aforementioned issues. The proposed approach demonstrates superior performance to baseline models in extensive experiments designed to simulate practical usage scenarios. ",Citation Recommendations Considering Content and Structural Context Embedding " Starting from the Ashtekar Hamiltonian variables for general relativity, the self-dual Einstein equations (SDE) may be rewritten as evolution equations for three divergence free vector fields given on a three dimensional surface with a fixed volume element. From this general form of the SDE, it is shown how they may be interpreted as the field equations for a two dimensional field theory. It is further shown that these equations imply an infinite number of non-local conserved currents. A specific way of writing the vector fields allows an identification of the full SDE with those of the two dimensional chiral model, with the gauge group being the group of area preserving diffeomorphisms of a two dimensional surface. This gives a natural Hamiltonian formulation of the SDE in terms of that of the chiral model. The conservation laws using the explicit chiral model form of the equations are also given. ",Self-dual gravity as a two dimensional theory and conservation laws " We consider classical and quantum mechanics related to an additional noncommutativity, symmetric in position and momentum coordinates. We show that such a mechanical system can be transformed into a corresponding one that allows employment of the usual formalism. In particular, we find explicit connections between quadratic Hamiltonians and Lagrangians, in their commutative and noncommutative regimes. 
In the quantum case we give a general procedure for computing Feynman's path integral in this noncommutative phase space with quadratic Lagrangians (Hamiltonians). This approach is applied to a charged particle in the noncommutative plane exposed to constant homogeneous electric and magnetic fields. ",Noncommutative Quantum Mechanics with Path Integral " Classical time series models have serious difficulties in modeling and forecasting the enormous fluctuations of electricity spot prices. Markov regime-switching models are among the most frequently used models in the electricity literature. These models try to capture the fluctuations of electricity spot prices by using different regimes, each with its own mean and covariance structure. Usually one regime is dedicated to moderate prices and another is dedicated to high prices. However, these models show poor performance and there is no theoretical justification for this kind of classification. The merit order model, the most important micro-economic pricing model for electricity spot prices, however, suggests a continuum of mean levels with a functional dependence on electricity demand. We propose a new statistical perspective on modeling and forecasting electricity spot prices that accounts for the merit order model. In a first step, the functional relation between electricity spot prices and electricity demand is modeled by daily price-demand functions. In a second step, we parameterize the series of daily price-demand functions using a functional factor model. The power of this new perspective is demonstrated by a forecast study that compares our functional factor model with two established classical time series models as well as two alternative functional data models. 
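The two-step construction just described — fit a price-demand function per day, then reduce the daily coefficient vectors with a factor decomposition — can be sketched as follows. The polynomial basis, synthetic data, and two-factor truncation are illustrative assumptions, not the authors' specification.

```python
import numpy as np

rng = np.random.default_rng(1)

n_days, n_obs = 60, 24  # hypothetical: 60 days, 24 hourly observations per day
degree = 3              # illustrative polynomial basis for price-demand curves

# Step 1: fit a daily price-demand function (polynomial in demand) per day.
coeffs = np.empty((n_days, degree + 1))
for d in range(n_days):
    demand = rng.uniform(20.0, 80.0, n_obs)                 # synthetic demand
    price = 0.01 * demand**2 + rng.normal(0.0, 3.0, n_obs)  # synthetic prices
    coeffs[d] = np.polyfit(demand, price, degree)

# Step 2: functional factor model via a PCA-style SVD of the daily coefficients.
mean_coeffs = coeffs.mean(axis=0)
centered = coeffs - mean_coeffs
_, _, vt = np.linalg.svd(centered, full_matrices=False)
n_factors = 2
loadings = vt[:n_factors]       # factor loadings in coefficient space
scores = centered @ loadings.T  # daily factor scores (the low-dimensional series)

# Reconstruct day 0's price-demand function from the low-dimensional factors;
# forecasting would instead model the score time series and rebuild the curve.
recon = mean_coeffs + scores[0] @ loadings
price_at_50 = np.polyval(recon, 50.0)
print(scores.shape)
```

Forecasting then reduces to predicting a short vector of factor scores per day rather than a whole curve.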
",Modeling and forecasting electricity spot prices: A functional data perspective " The decay width, forward-backward asymmetry and tau lepton longitudinal and transversal polarization for the exclusive (B -> K tau^+ tau^-) decay in a two Higgs doublet model are computed. It is shown that the forward-backward asymmetry and longitudinal polarization of the tau lepton are very effective tools for establishing new physics. ",Two Higgs Doublet Model and Lepton Polarization in the B -> K tau+ tau- Decay " Gravitational wave (GW) detection is now commonplace and as the sensitivity of the global network of GW detectors improves, we will observe $\mathcal{O}(100)$s of transient GW events per year. The current methods used to estimate their source parameters employ optimally sensitive but computationally costly Bayesian inference approaches where typical analyses have taken between 6 hours and 5 days. For binary neutron star and neutron star black hole systems prompt counterpart electromagnetic (EM) signatures are expected on timescales of 1 second -- 1 minute and the current fastest method for alerting EM follow-up observers, can provide estimates in $\mathcal{O}(1)$ minute, on a limited range of key source parameters. Here we show that a conditional variational autoencoder pre-trained on binary black hole signals can return Bayesian posterior probability estimates. The training procedure need only be performed once for a given prior parameter space and the resulting trained machine can then generate samples describing the posterior distribution $\sim 6$ orders of magnitude faster than existing techniques. ",Bayesian parameter estimation using conditional variational autoencoders for gravitational-wave astronomy " Using the formulation of electrodynamics in rotating media, we put into explicit quantitative form the effect of rotation on interference and diffraction patterns as observed in the rotating medium's rest-frame. 
As a paradigm experiment we focus on the interference generated by a linear array of sources in a homogeneous medium. The interference is distorted due to rotation; the maxima now follow curved trajectories. Unlike the classical Sagnac effect in which the rotation-induced phase is independent of the refraction index $n$, here the maxima bending increases when $n$ decreases, suggesting that $\epsilon$-near-zero metamaterials can enhance optical gyroscopes and rotation-induced non-reciprocal devices. This result is counterintuitive, as one may expect that a wave that travels faster would bend less. The apparent contradiction is clarified via the Minkowski momentum picture for a quasi-particle model of the interference that introduces the action of a Coriolis force, and by the Abraham picture of the wave-only momentum. Our results may also shed light on the Abraham-Minkowski controversy as examined in non-inertial electrodynamics. ",Rest frame interference in rotating structures and metamaterials " A recently proposed method for scaling real accelerograms to obtain sets of code-compliant records is assessed. The method, which uses combined time and amplitude scaling, corroborated with an imposed value of an instrumental, Arias-type intensity, allows the generation of sets of accelerograms for which the values of the mean response spectrum for a given period range are not less than 90% of the elastic response spectrum specified by the code. The method, which is compliant with both the Romanian seismic code, P100-1/2006, and Eurocode 8, was described in previous papers. Based on dynamic analyses of single-degree-of-freedom (SDOF) and of multi-degree-of-freedom (MDOF) systems, a detailed application and assessment of the method is performed for the case of the long corner period design spectrum in Bucharest. Conclusions are drawn on the advantages of the method, as well as on its potential improvement in the future. 
",Use of combined scaling of real seismic records to obtain code-compliant sets of accelerograms: application for the city of Bucharest " Most parameter constraints obtained from cosmic microwave background (CMB) anisotropy data are based on power estimates and rely on approximate likelihood functions; computational difficulties generally preclude an exact analysis based on pixel values. With the specific goal of testing this kind of approach, we have performed a complete (un-approximated) likelihood analysis combining the COBE, Saskatoon and MAX data sets. We examine in detail the ability of certain approximate techniques based on band-power estimates to recover the full likelihood constraints. The traditional $\chi^2$-method does not always find the same best-fit model as the likelihood analysis (a bias), due mainly to the false assumption of Gaussian likelihoods that makes the method overly sensitive to data outliers. Although an improvement, other approaches employing non-Gaussian flat-band likelihoods do not always faithfully reproduce the complete likelihood constraints either; not even when using the exact flat-band likelihood curves. We trace this to the neglect of spectral information by simple flat band-power estimates. A straightforward extension incorporating a local effective slope (of the power spectrum, $C_l$) provides a faithful representation of the likelihood surfaces without significantly increasing computing cost. Finally, we also demonstrate that the best-fit model to this particular data set is a {\em good fit}, or that the observations are consistent with Gaussian sky fluctuations, according to our statistic. ",Concerning Parameter Estimation Using the Cosmic Microwave Background We report the magnetic properties of yttrium iron garnet (YIG) thin films grown by pulsed laser deposition technique. The films were deposited on Si (100) substrates in the range of 15-50 nm thickness. 
Magnetic properties were characterized by ferromagnetic resonance spectroscopy. A perpendicular magnetic easy axis was achieved for thicknesses up to 50 nm. We observed that the perpendicular anisotropy values decreased with increasing film thickness. The origin of the perpendicular magnetic anisotropy (PMA) was attributed to the texture and the lattice distortion in the YIG thin films. We anticipate that perpendicularly magnetized YIG thin films on Si substrates pave the way for a cheaper and compatible fabrication process. ,Origin of Perpendicular Magnetic Anisotropy in Yttrium Iron Garnet Thin Films Grown on Si (100) " In this paper, we describe all traces for the BCH star-product on the dual of a Lie algebra. First we show by an elementary argument that the BCH as well as the Kontsevich star-product are strongly closed if and only if the Lie algebra is unimodular. In a next step we show that the traces of the BCH star-product are given by the $\ad$-invariant functionals. Particular examples are the integration over coadjoint orbits. We show that for a compact Lie group and a regular orbit one can even achieve that this integration becomes a positive trace functional. In this case we explicitly describe the corresponding GNS representation. Finally we discuss how invariant deformations on a group can be used to induce deformations of spaces on which the group acts. ",Traces for star products on the dual of a Lie algebra " The spectroscopy of microlensed sources towards the Galactic bulge provides a unique opportunity to study (i) the kinematics of the Galactic bulge, particularly its far side, (ii) the effects of extinction on the microlensed sources, and (iii) the contributions of the bulge and the disk lenses to the microlensing optical depth. We present the results from such a spectroscopic study of 17 microlensed sources carried out using the ESO Faint Object Spectrograph (EFOSC) at the 3.6 m European Southern Observatory (ESO) telescope. 
The spectra of the unlensed sources and Kurucz model spectra were used as templates to derive the radial velocities and the extinctions of the microlensed sources. It is shown that there is an extinction shift between the microlensed population and the non-microlensed population but there is no apparent correlation between the extinction and the radial velocity. This extinction offset, in our best model, would imply that 65% of the events are caused by self-lensing within the bulge. The sample needs to be increased to about 100 sources to get a clear picture of the kinematics of the bulge. ",Studying the Galactic Bulge Through Spectroscopy of Microlensed Sources: II. Observations " The emergence of functional oligonucleotides on early Earth required a molecular selection mechanism to screen for specific sequences with prebiotic functions. Cyclic processes such as daily temperature oscillations were ubiquitous in this environment and could trigger oligonucleotide phase separation. Here, we propose sequence selection based on phase separation cycles realized through sedimentation in a system subjected to the feeding of oligonucleotides. Using theory and experiments with DNA, we show sequence-specific enrichment in the sedimented dense phase, in particular of short 22-mer DNA sequences. The underlying mechanism selects for complementarity, as it enriches sequences that tightly interact in the condensed phase through base-pairing. Our mechanism also enables initially weakly biased pools to enhance their sequence bias or to replace the most abundant sequences as the cycles progress. Our findings provide an example of a selection mechanism that may have eased screening for the first auto-catalytic self-replicating oligonucleotides. ",Selection of prebiotic oligonucleotides by cyclic phase separation " We study a refinement of the symmetric multiple zeta value, called the $t$-adic symmetric multiple zeta value, by considering its finite truncation. 
More precisely, two kinds of regularizations (harmonic and shuffle) give two kinds of $t$-adic symmetric multiple zeta values, so we introduce two corresponding kinds of truncations. Then we show that our truncations tend to the corresponding $t$-adic symmetric multiple zeta values, and satisfy the harmonic and shuffle relations, respectively. This gives a new proof of the double shuffle relations for $t$-adic symmetric multiple zeta values, first proved by Jarossay. In order to prove the shuffle relation, we develop the theory of truncated $t$-adic symmetric multiple zeta values associated with $2$-colored rooted trees. Finally, we discuss a refinement of Kaneko-Zagier's conjecture and the $t$-adic symmetric multiple zeta values of Mordell-Tornheim type. ",Truncated $t$-adic symmetric multiple zeta values and double shuffle relations " In addition to ever-present thermal noise, various communication and sensor systems can contain significant amounts of interference with outlier (e.g. impulsive) characteristics. Such outlier noise can be efficiently mitigated in real time using intermittently nonlinear filters. Depending on the noise nature and composition, improvements in the quality of the signal of interest will vary from ""no harm"" to substantial. In this paper, we explain in detail why the underlying outlier nature of interference often remains obscured, discussing the many challenges and misconceptions associated with state-of-the-art analog and/or digital nonlinear mitigation techniques, especially when addressing complex practical interference scenarios. We then focus on the methodology and tools for real-time outlier noise mitigation, demonstrating how the ""excess band"" observation of outlier noise enables its efficient in-band mitigation. We introduce the basic real-time nonlinear components that are used for outlier noise filtering, and provide examples of their implementation. 
We further describe complementary nonlinear filtering arrangements for wide- and narrow-band outlier noise reduction, providing several illustrations of their performance and the effect on channel capacity. Finally, we outline ""effectively analog"" digital implementations of these filtering structures, discuss their broader applications, and comment on the ongoing development of the platform for their demonstration and testing. ",Hidden outlier noise and its mitigation " Comparatively little is known about atmospheric chemistry on Uranus and Neptune, because remote spectral observations of these cold, distant ``Ice Giants'' are challenging, and each planet has only been visited by a single spacecraft during brief flybys in the 1980s. Thermochemical equilibrium is expected to control the composition in the deeper, hotter regions of the atmosphere on both planets, but disequilibrium chemical processes such as transport-induced quenching and photochemistry alter the composition in the upper atmospheric regions that can be probed remotely. Surprising disparities in the abundance of disequilibrium chemical products between the two planets point to significant differences in atmospheric transport. The atmospheric composition of Uranus and Neptune can provide critical clues for unravelling details of planet formation and evolution, but only if it is fully understood how and why atmospheric constituents vary in a three-dimensional sense and how material coming in from outside the planet affects observed abundances. Future mission planning should take into account the key outstanding questions that remain unanswered about atmospheric chemistry on Uranus and Neptune, particularly those questions that pertain to planet formation and evolution, and those that address the complex, coupled atmospheric processes that operate on Ice Giants within our solar system and beyond. 
",Atmospheric chemistry on Uranus and Neptune " We present a comparison of the noncommutative field theories built using two different star products: Moyal and Wick-Voros (or normally ordered). We compare the two theories in the context of the noncommutative geometry determined by a Drinfeld twist, and the comparison is made at the level of Green's functions and S-matrix. We find that while the Green's functions are different for the two theories, the S-matrix is the same in both cases, and is different from the commutative case. ",Twisted Noncommutative Field Theory: Wick-Voros vs Moyal " Duality games are a way of looking at wave-particle duality. In these games. Alice and Bob together are playing against the House. The House specifies, at random, which of two sub-games Alice and Bob will play. One game, Ways, requires that they obtain path information about a particle going through an $N$-path interferometer and the other, Phases, requires that they obtain phase information. In general, because of wave-particle duality, Alice and Bob cannot always win the overall game. However, if the required amount of path and phase information is not too great, for example specifying a set of paths or phases, one of which is the right one, then they can always win. Here we study examples of duality games that can always be won, and develop a wave-particle duality relation expressed only in terms of mutual information to help analyze these games. ",Partial particle and wave information and weak duality games " We analyze properties of non-hermitian matrices of size M constructed as square submatrices of unitary (orthogonal) random matrices of size N>M, distributed according to the Haar measure. In this way we define ensembles of random matrices and study the statistical properties of the spectrum located inside the unit circle. In the limit of large matrices, this ensemble is characterized by the ratio M/N. 
For the truncated CUE we derive analytically the joint density of eigenvalues, from which all correlation functions are easily obtained. For N-M fixed and N -> infinity, the universal resonance-width distribution with N-M open channels is recovered. ",Truncations of random unitary matrices " With the advent of ALMA, it is now possible to observationally constrain how disks form around deeply embedded protostars. In particular, the recent ALMA C3H2 line observations of the nearby protostar L1527 have been interpreted as evidence for the so-called ""centrifugal barrier,"" where the protostellar envelope infall is gradually decelerated to a stop by the centrifugal force in a region of super-Keplerian rotation. To test the concept of centrifugal barrier, which was originally based on angular-momentum-conserving collapse of a rotating test particle around a fixed point mass, we carry out simple axisymmetric hydrodynamic simulations of protostellar disk formation including a minimum set of ingredients: self-gravity, rotation, and a prescribed viscosity that enables the disk to accrete. We find that a super-Keplerian region can indeed exist when the viscosity is relatively large but, unlike the classic picture of centrifugal barrier, the infalling envelope material is not decelerated solely by the centrifugal force. The region has more specific angular momentum than its surrounding envelope material, which points to an origin in outward angular momentum transport in the disk (subject to the constraint of disk expansion by the infalling envelope), rather than the spin-up of the envelope material envisioned in the classic picture as it falls closer to the center in order to conserve angular momentum. For smaller viscosities, the super-Keplerian rotation is weaker or nonexistent. We conclude that, despite the existence of super-Keplerian rotation in some parameter regime, the classic picture of centrifugal barrier is not supported by our simulations. 
",Centrifugal Barrier and Super-Keplerian Rotation in Protostellar Disk Formation " Despite the tremendous successes of science in providing knowledge and technologies, the Replication Crisis has highlighted that scientific institutions have much room for improvement. Peer-review is one target of criticism and suggested reforms. However, despite numerous controversies peer review systems, plus the obvious complexity of the incentives affecting the decisions of authors and reviewers, there is very little systematic and strategic analysis of peer-review systems. In this paper, we begin to address this feature of the peer-review literature by applying the tools of game theory. We use simulations to develop an evolutionary model based around a game played by authors and reviewers, before exploring some of its tendencies. In particular, we examine the relative impact of double-blind peer-review and open review on incentivising reviewer effort under a variety of parameters. We also compare (a) the impact of one review system versus another with (b) other alterations, such as higher costs of reviewing. We find that is no reliable difference between peer-review systems in our model. Furthermore, under some conditions, higher payoffs for good reviewing can lead to less (rather than more) author effort under open review. Finally, compared to the other parameters that we vary, it is the exogenous utility of author effort that makes an important and reliable difference in our model, which raises the possibility that peer-review might not be an important target for institutional reforms. ",Double blind vs. 
open review: an evolutionary game logit-simulating the behavior of authors and reviewers " The set of world lines for the non-relativistic quartic oscillator satisfying Newton's equation of motion for all space and time in 1-1 dimensions with no constraints other than the ""spring"" restoring force is shown to be equivalent (1-1-onto) to the corresponding set for the harmonic oscillator. This is established via an energy preserving invertible linearization map which consists of an explicit nonlinear algebraic deformation of coordinates and a nonlinear deformation of time coordinates involving a quadrature. In the context stated, the map also explicitly solves Newton's equation for the quartic oscillator for arbitrary initial data on the real line. This map is extended to all attractive potentials given by even powers of the space coordinate. It thus provides classes of new solutions to the initial value problem for all these potentials. ",An Invertible Linearization Map for the Quartic Oscillator " From the LHC runs we know that, with increasing collider energy, weak-boson-fusion Higgs production dominates as an environment for precision measurements. We show how a future hadron collider performs for three challenging benchmark signatures. Because all of these measurements rely on the tagging jet signature, we first give a comprehensive analysis of weak-boson-fusion kinematics and a proposed two-step jet veto at a 100 TeV hadron collider. We then find this machine to be sensitive to invisible Higgs branching ratios of 0.5%, a second-generation muon Yukawa coupling of 2%, and an enhanced total Higgs width of around 5%, the latter with essentially no model dependence. This kind of performance crucially relies on a sufficient detector coverage and a dedicated weak-boson-fusion trigger channel. 
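The logit-simulation named in the peer-review title above can be sketched in a few lines; the two-strategy payoff matrices below (effort vs. shirk for authors and for reviewers) are hypothetical placeholders, not the paper's calibrated model.

```python
import numpy as np

# Hypothetical stage-game payoffs (rows: own strategy 0 = shirk, 1 = effort;
# columns: opponent's strategy 0 = shirk, 1 = effort). Illustrative numbers only.
author_payoff = np.array([[1.0, 2.0],
                          [1.5, 3.0]])   # authors gain more when reviewers exert effort
reviewer_payoff = np.array([[1.0, 1.2],
                            [0.5, 2.5]]) # effortful review pays off against effortful authors

beta = 2.0        # logit choice sensitivity
lr = 0.1          # population update rate
p_author = 0.5    # initial share of effortful authors
p_reviewer = 0.5  # initial share of effortful reviewers

for _ in range(2000):
    # Expected payoff of each strategy against the current opponent mix.
    ua = author_payoff @ np.array([1 - p_reviewer, p_reviewer])
    ur = reviewer_payoff @ np.array([1 - p_author, p_author])
    # Logit (quantal response) probability of choosing "effort".
    qa = 1.0 / (1.0 + np.exp(-beta * (ua[1] - ua[0])))
    qr = 1.0 / (1.0 + np.exp(-beta * (ur[1] - ur[0])))
    # Population shares relax toward the logit choice probabilities.
    p_author += lr * (qa - p_author)
    p_reviewer += lr * (qr - p_reviewer)

print(round(p_author, 3), round(p_reviewer, 3))
```

Comparing the converged effort shares under different payoff matrices (one per review system) is the kind of experiment the abstract describes.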
",Weak Boson Fusion at 100 TeV " We consider the semilinear heat equation \begin{eqnarray*} \partial_t u = \Delta u + |u|^{p-1} u \ln ^{\alpha}( u^2 +2), \end{eqnarray*} in the whole space $\mathbb{R}^n$, where $p > 1$ and $ \alpha \in \mathbb{R}$. Unlike the standard case $\alpha = 0$, this equation is not scaling invariant. We construct for this equation a solution which blows up in finite time $T$ only at one blowup point $a$, according to the following asymptotic dynamics: \begin{eqnarray*} u(x,t) \sim \psi(t) \left(1 + \frac{(p-1)|x-a|^2}{4p(T -t)|\ln(T -t)|} \right)^{-\frac{1}{p-1}} \text{ as } t \to T, \end{eqnarray*} where $\psi(t)$ is the unique positive solution of the ODE \begin{eqnarray*} \psi' = \psi^p \ln^{\alpha}(\psi^2 +2), \quad \lim_{t\to T}\psi(t) = + \infty. \end{eqnarray*} The construction relies on the reduction of the problem to a finite dimensional one and a topological argument based on the index theory to get the conclusion. By the interpretation of the parameters of the finite dimensional problem in terms of the blowup time and the blowup point, we show the stability of the constructed solution with respect to perturbations in initial data. To our knowledge, this is the first successful construction for a genuinely non-scale invariant PDE of a stable blowup solution with the derivation of the blowup profile. From this point of view, we consider our result as a breakthrough. ",Construction of a stable blowup solution with a prescribed behavior for a non-scaling invariant semilinear heat equation " We show how one can obtain an asymptotic expression for some special functions satisfying a second order differential equation with a very explicit error term starting from appropriate upper bounds. We will work out the details for the Bessel function $J_\nu (x)$ and the Airy function $Ai(x)$ and find a sharp approximation for their zeros. 
We also answer the question raised by Olenko by showing that $$c_1 | \nu^2-1/4\,| < \sup_{x \ge 0} x^{3/2}|J_\nu(x)-\sqrt{\frac{2}{\pi x}} \, \cos (x-\frac{\pi \nu}{2}-\frac{\pi}{4}\,)| < c_2 | \nu^2-1/4\,| $$ for some explicit constants $c_1$ and $c_2$. Protein pairs are considered co-enriched or co-depleted for NICE values > 1 or < 1, respectively. We quantified the NICE of tetraspanins, growth factor receptors and integrins in EVs of eight breast cancer cell lines of varying metastatic potential and organotropism, combinatorially mapping up to 104 protein pairs. Our analysis revealed protein enrichment and co-expression patterns consistent with previous findings. For the organotropic cell lines, most protein pairs were co-enriched on EVs, with the majority of NICE values between 0.2 and 11.5, and extending from 0.037 to 80.4. Median NICE values were either negative, neutral or positive depending on the cells. NICE analysis is easily multiplexed and is compatible with microarrays, bead-based and single EV assays. Additional studies are needed to deepen our understanding of the potential and significance of NICE for research and clinical uses. ",Protein Co-Enrichment Analysis of Extracellular Vesicles " We present shell model calculations of nuclear neutrino energy spectra for 70 $sd$-shell nuclei over the mass number range $A=21-35$. Our calculations include nuclear excited states as appropriate for the hot and dense conditions characteristic of pre-collapse massive stars. We consider neutrinos produced by charged lepton captures and decays and, for the first time in tabular form, neutral current nuclear deexcitation, providing neutrino energy spectra on the Fuller-Fowler-Newman temperature-density grid for these interaction channels for each nucleus. We use the full $sd$-shell model space to compute initial nuclear states up to 20 MeV excitation with transitions to final states up to 35-40 MeV, employing a modification of the Brink-Axel hypothesis to handle high-temperature population factors and the nuclear partition functions.
",Neutrino Spectra from Nuclear Weak Interactions in $sd$-Shell Nuclei Under Astrophysical Conditions " We study the polarized lepton pair forward-backward asymmetries in (B -> K^* l^+ l^-) decay using a general, model-independent form of the effective Hamiltonian. We present the general expression for nine double-polarization forward-backward asymmetries. It is shown that the study of the forward-backward asymmetries of the doubly-polarized lepton pair is a very useful tool for establishing new physics beyond the standard model. ",Polarized lepton pair forward-backward asymmetries in (B -> K^* l^+ l^-) decay beyond the standard model " We consider a symmetric Anderson impurity model, with a soft-gap hybridization vanishing at the Fermi level with a power law r > 0. Three facets of the problem are examined. First, the non-interacting limit, which despite its simplicity contains much physics relevant to the U > 0 case: it exhibits both strong coupling (SC) states (for r < 1) and local moment (LM) states (for r > 1), with characteristic signatures in both spectral properties and thermodynamic functions. Second, we establish general conditions upon the interaction self-energy for the occurrence of a SC state for U > 0. This leads to a pinning theorem, whereby the modified spectral function is pinned at the Fermi level for any U where a SC state exists; it generalizes to arbitrary r the familiar pinning condition for the normal r = 0 Anderson model. Finally, we consider explicitly spectral functions at the simplest level: second order perturbation theory in U, which we conclude is applicable for r < 1/2 and r > 1 but not for 1/2 < r < 1.
Characteristic spectral features observed in numerical renormalization group calculations are thereby recovered, for both SC and LM phases; and for the SC state the modified spectral functions are found to contain a generalized Abrikosov-Suhl resonance exhibiting a characteristic low-energy Kondo scale with increasing interaction strength. ",Magnetic impurities in gapless Fermi systems: perturbation theory " Let $\Omega=\Omega_0\setminus \overline{\Theta}\subset \mathbb{R}^n$, $n\geq 2$, where $\Omega_0$ and $\Theta$ are two open, bounded and convex sets such that $\overline{\Theta}\subset \Omega_0$, and let $\beta<0$ be a given parameter. We consider the eigenvalue problem for the Laplace operator associated to $\Omega$, with Robin boundary condition on $\partial \Omega_0$ and Neumann boundary condition on $\partial \Theta$. In [Paoli-Piscitelli-Trani, ESAIM-COCV '20] it is proved that the spherical shell is the only maximizer for the first Robin-Neumann eigenvalue in the class of domains $\Omega$ with fixed outer perimeter and volume. We establish a quantitative version of the aforementioned isoperimetric inequality; the main novelty consists in the introduction of a new type of hybrid asymmetry, which turns out to be the suitable one to treat the different conditions on the outer and internal boundary. To our knowledge, in this context, this is the first stability result in which both the outer and the inner boundary are perturbed. ",A stability result for the first Robin-Neumann eigenvalue: A double perturbation approach " Brain graphs (i.e., connectomes) constructed from medical scans such as magnetic resonance imaging (MRI) have become increasingly important tools to characterize abnormal changes in the human brain.
Due to the high acquisition cost and processing time of multimodal MRI, existing deep learning frameworks based on the Generative Adversarial Network (GAN) have focused on predicting the missing multimodal medical images from a few existing modalities. While brain graphs help better understand how a particular disorder can change the connectional facets of the brain, synthesizing a target brain multigraph (i.e., multiple brain graphs) from a single source brain graph is strikingly lacking. Additionally, existing graph generation works mainly learn one model for each target domain, which limits their scalability in jointly predicting multiple target domains. Moreover, while they consider the global topological scale of a graph (i.e., graph connectivity structure), they overlook the local topology at the node scale (e.g., how central a node is in the graph). To address these limitations, we introduce a topology-aware graph GAN architecture (topoGAN), which jointly predicts multiple brain graphs from a single brain graph while preserving the topological structure of each target graph. Its three key innovations are: (i) designing a novel graph adversarial auto-encoder for predicting multiple brain graphs from a single one, (ii) clustering the encoded source graphs in order to handle the mode collapse issue of GAN and proposing a cluster-specific decoder, and (iii) introducing a topological loss to force the prediction of topologically sound target brain graphs. The experimental results using five target domains demonstrated that our method outperforms baseline approaches in brain multigraph prediction from a single graph. ",Brain Multigraph Prediction using Topology-Aware Adversarial Graph Neural Network " Recent advances in understanding of the basic properties of compressible Magnetohydrodynamic (MHD) turbulence call for revisions of some of the generally accepted concepts. First, MHD turbulence is not as messy as is usually believed.
In fact, the notion of strong non-linear coupling of compressible and incompressible motions is not tenable. Alfven, slow and fast modes of MHD turbulence follow their own cascades and exhibit degrees of anisotropy consistent with theoretical expectations. Second, the fast decay of turbulence is not related to the compressibility of the fluid. Rates of decay of compressible and incompressible motions are very similar. Third, viscosity by neutrals does not suppress MHD turbulence in a partially ionized gas. Instead, MHD turbulence develops a magnetic cascade at scales below the scale at which neutrals damp ordinary hydrodynamic motions. The implications of these changes of the MHD turbulence paradigm for molecular clouds require further studies. Those studies can benefit from testing of theoretical predictions using new statistical techniques that utilize spectroscopic data. We briefly discuss advances in the development of tools with which the statistics of turbulent velocity can be recovered from observations. ",Basic Properties of Compressible MHD Turbulence: Implications for Molecular Clouds " A search for the rare two-body charmless baryonic decay $B^+ \to p \bar\Lambda$ is performed with $pp$ collision data, corresponding to an integrated luminosity of $3\,\mbox{fb}^{-1}$, collected by the LHCb experiment at centre-of-mass energies of 7 and 8 TeV. An excess of $B^+ \to p \bar\Lambda$ candidates with respect to background expectations is seen with a statistical significance of 4.1 standard deviations, and constitutes the first evidence for this decay. The branching fraction, measured using the $B^+ \to K^0_{\mathrm S} \pi^+$ decay for normalisation, is \begin{eqnarray} \mathcal{B}(B^+ \to p \bar\Lambda) & = & ( 2.4 \,^{+1.0}_{-0.8} \pm 0.3 ) \times 10^{-7} \,, \nonumber \end{eqnarray} where the first uncertainty is statistical and the second systematic.
",Evidence for the two-body charmless baryonic decay $B^+ \to p \bar\Lambda$ " ATCA HI and radio continuum observations of the peculiar southern galaxy IC2554 and its surroundings reveal typical signatures of an interacting galaxy group. We detected a large HI cloud between IC2554 and the elliptical galaxy NGC3136B. The gas dynamics in IC2554 itself, which is sometimes described as a colliding pair, are surprisingly regular, whereas NGC3136B was not detected. The HI cloud, which emerges from IC2554 as a large arc-shaped plume, has a size of about 30 kpc, larger than that of IC2554. The total HI mass of the IC2554 system is about 2 x 10^9 Msun, a third of which resides in the HI cloud. It is possible that tidal interaction between IC2554 and NGC3136B caused this spectacular HI cloud, but the possibility of IC2554 being a merger remnant is also discussed. We also detected HI gas in the nearby galaxies ESO092-G009 and RKK1959 as well as an associated HI cloud, ATCA J1006-6710. Together they have an HI mass of about 4.6 x 10^8 Msun. Another new HI source, ATCA J1007-6659, with an HI mass of only about 2.2 x 10^7 Msun was detected roughly between IC2554 and ESO092-G009 and corresponds to a face-on low surface brightness dwarf galaxy. Star formation is evident only in the galaxy IC2554 with a rate of about 4 Msun/yr. ",ATCA HI Observations of the Peculiar Galaxy IC2554 " C\^{a}mpeanu and Ho (2004) determined the maximum finite state complexity of finite languages, building on work of Champarnaud and Pin (1989). They stated that it is very difficult to determine the number of maximum-complexity languages. Here we give a formula for this number. We also generalize their work from languages to functions on finite sets. 
",The number of languages with maximum state complexity " We propose a method to generate genuine multipartite entanglement of electron spin qubits in a chain of quantum dots using the naturally available single-qubit rotations and two-qubit Heisenberg exchange interaction in the system. We show that the minimum number of operations required to generate entangled states of the GHZ-, cluster and W-type scales linearly with the number of qubits, and we estimate the fidelities of the generated entangled cluster states. As the required single- and two-qubit operations have recently been realized, our proposed scheme opens the way for experimental investigation of multipartite entanglement with electron spin qubits. ",Production of multipartite entanglement for electron spins in quantum dots " This paper presents results obtained in the spectral theory of operators of fractional differentiation. We prove a number of propositions of independent interest in the theory of fractional calculus. We introduce a construction of the multidimensional fractional integral in a direction, and we formulate sufficient conditions for the representability of functions by the fractional integral in a direction; in particular, we prove the embedding of a Sobolev space in classes of functions representable by the fractional integral in a direction. The technique of proof, borrowed from the one-dimensional case, is of particular interest. We also construct an extension of the Kipriyanov operator and find its conjugate operator. All of this creates a complete picture reflecting the qualitative properties of fractional differential operators. ",Spectral Properties of Fractional Differentiation Operators " We report the results of a survey for fluorescent Ly-alpha emission carried out in the field surrounding the z=3.1 quasar QSO0420-388 using the FORS2 instrument on the VLT.
We first review the properties expected for fluorescent Ly-alpha emitters, compared with those of other non-fluorescent Ly-alpha emitters. Our observational search detected 13 Ly-alpha sources sparsely sampling a volume of ~14000 comoving Mpc^3 around the quasar. The properties of these in terms of i) the line equivalent width, ii) the line profile and iii) the value of the surface brightness related to the distance from the quasar all suggest that several of these may be plausibly fluorescent. Moreover, their number is in good agreement with the expectation from theoretical models. One of the best candidates for fluorescence is sufficiently far behind QSO0420-388 that it would imply that the quasar has been active for (at least) ~60 Myr. Further studies of such objects will give information about proto-galactic clouds and on the radiative history (and beaming) of high-redshift quasars. ",Plausible fluorescent Ly-alpha emitters around the z=3.1 QSO0420-388 " The law of the iterated logarithm for partial sums of weakly dependent processes was intensively studied by Walter Philipp in the late 1960s and 1970s. In this paper, we aim to extend these results to nondegenerate U-statistics of data that are strongly mixing or functionals of an absolutely regular process. ",Law of the Iterated Logarithm for U-Statistics of Weakly Dependent Observations " The target space of minimal $(2,2m-1)$ strings is embedded into the phase space of an integrable mechanical model. Quantum effects on the target space correspond to quantum corrections on the mechanical model. In particular, double scaling is equivalent to standard uniform approximation at the classical turning points of the mechanical model. After adding ZZ brane perturbations the quantum target remains smooth and topologically trivial. Around the ZZ brane singularities the Baker-Akhiezer wave function is given in terms of the parabolic cylinder function.
",Minimal strings and Semiclassical Expansion " Power and channel allocation in interference-limited systems is a key enabler for beyond 5G (B5G) technologies, such as multi-carrier full duplex non-orthogonal multiple access (FD-NOMA). In FD-NOMA systems, power allocation is a very computationally intensive non-convex problem due to the presence of strong interference and the integrality condition on channel allocation. In this paper, we propose an iterative power allocation algorithm based on the minimization of the weighted mean square error, which converges to a feasible allocation of the original problem. Experimental results show that the proposed algorithm has by far the lowest complexity among state-of-the-art solutions. Moreover, they assess the validity of our approach, showing performance close to the theoretical optimum. ",WMMSE resource allocation for NOMA-FD " People today typically use multiple online social networks (Facebook, Twitter, Google+, LinkedIn, etc.). Each online network represents a subset of their ""real"" ego-networks. An interesting and challenging problem is to reconcile these online networks, that is, to identify all the accounts belonging to the same individual. Besides providing a richer understanding of social dynamics, the problem has a number of practical applications. At first sight, this problem appears algorithmically challenging. Fortunately, a small fraction of individuals explicitly link their accounts across multiple networks; our work leverages these connections to identify a very large fraction of the network. Our main contributions are to mathematically formalize the problem for the first time, and to design a simple, local, and efficient parallel algorithm to solve it. We are able to prove strong theoretical guarantees on the algorithm's performance on well-established network models (Random Graphs, Preferential Attachment).
We also experimentally confirm the effectiveness of the algorithm on synthetic and real social network data sets. ",An efficient reconciliation algorithm for social networks " We present an effective field theory for the Kondo lattice, which can exhibit, in a certain range of parameters, a non-Fermi-liquid paramagnetic phase at the brink of a zero-temperature antiferromagnetic (AF) transition. The model is derived in a natural way from the bosonic Kondo-Heisenberg model, in which the Kondo resonances are treated as true (but damped) Grassmann fields. One-loop Renormalization Group (RG) treatment of this model gives a phase diagram for the Kondo lattice as a function of $ J_K $ where, for $ J_K< J_c$ the system shows AF order, for $J_K> J_1$ one has the heavy electron phase, and for $ J_c < J_K < J_1$ the formation of the Kondo singlets is incomplete, leading to the breakdown of Landau Fermi liquid theory. ",The fate of Kondo resonances in certain Kondo lattices: a ``poor woman's'' scaling analysis " Is the output softmax layer, which is adopted by most language models (LMs), always the best way to compute the next word probability? Given so many attention layers in a modern transformer-based LM, are the pointer networks redundant nowadays? In this study, we discover that the answers to both questions are no. This is because the softmax bottleneck sometimes prevents the LMs from predicting the desired distribution and the pointer networks can be used to break the bottleneck efficiently. Based on this finding, we propose several softmax alternatives by simplifying the pointer networks and accelerating the word-by-word rerankers. In GPT-2, our proposals are significantly better and more efficient than mixture of softmax, a state-of-the-art softmax alternative.
In summarization experiments, without significantly decreasing its training/testing speed, our best method based on T5-Small improves the factCC score by 2 points on the CNN/DM and XSUM datasets, and improves MAUVE scores by 30% on the paragraph-level BookSum dataset. ","Revisiting the Architectures like Pointer Networks to Efficiently Improve the Next Word Distribution, Summarization Factuality, and Beyond" " Birch reduction is a time-proven way to hydrogenate aromatic hydrocarbons (such as benzene), which relies on the reducing power of electrons released from alkali metals into liquid ammonia. We have succeeded in characterizing the key intermediates of the Birch reduction process - the solvated electron and dielectron and the benzene radical anion - using cyclic voltammetry and photoelectron spectroscopy, aided by electronic structure calculations. In this way, we not only quantify the electron binding energies of these species, which are decisive for the mechanism of the reaction, but also use Birch reduction as a case study to directly connect the two seemingly unrelated experimental techniques. ","Bridging Electrochemistry and Photoelectron Spectroscopy in the Context of Birch Reduction: Detachment Energies and Redox Potentials of Electron, Dielectron, and Benzene Radical Anion in Liquid Ammonia" " The Kerker preconditioner, based on the dielectric function of the homogeneous electron gas, is designed to accelerate the self-consistent field (SCF) iteration in density functional theory (DFT) calculations. However, questions remain regarding its applicability to inhomogeneous systems. In this paper, we develop a modified Kerker preconditioning scheme that captures the long-range screening behavior of inhomogeneous systems and thus improves SCF convergence. Its effectiveness and efficiency are shown by tests on long-z slabs of metals, insulators and metal-insulator contacts.
For situations without a priori knowledge of the system, we design an a posteriori indicator to monitor whether the preconditioner has suppressed charge sloshing during the iterations. Based on this a posteriori indicator, we demonstrate two self-adaptive configuration schemes for the SCF iteration. ",Applicability of Kerker preconditioning scheme to the self-consistent density functional theory calculations of inhomogeneous systems " We consider a many-body fermionic system with an incommensurate external potential and a short-range interaction in one dimension. We prove that, for certain densities and weak interactions, the zero-temperature thermodynamical correlations are exponentially decaying for large distances, a property indicating persistence of localization in the interacting ground state. The analysis is based on the Renormalization Group, and convergence of the renormalized expansion is achieved using fermionic cancellations and controlling the small divisor problem assuming a Diophantine condition for the frequency. ",Localization in an interacting quasi-periodic fermionic chain " Most low-resource languages do not have the necessary resources to create even a substantial monolingual corpus. These languages may often be found in government proceedings, but mainly in Portable Document Format (PDF) files that contain legacy fonts. Extracting text from these documents to create a monolingual corpus is challenging due to legacy font usage and printer-friendly encoding, which are not optimized for text extraction. Therefore, we propose a simple, automatic, and novel idea that can scale to the Tamil, Sinhala, and English languages and to many documents, along with parallel corpora. Since Tamil and Sinhala are low-resource languages, we improved the performance of Tesseract by employing LSTM-based training on more than 20 legacy fonts to recognize printed characters in these languages.
In particular, our model detects code-mixed text, numbers, and special characters from the printed document. It is shown that this approach can reduce the character-level error rate of Tesseract from 6.03 to 2.61 for Tamil (a reduction of 3.42 percentage points) and from 7.61 to 4.74 for Sinhala (2.87 points), as well as the word-level error rate from 39.68 to 20.61 for Tamil (19.07 points) and from 35.04 to 26.58 for Sinhala (8.46 points) on the test set. Also, our newly created parallel corpus consists of 185.4k, 168.9k, and 181.04k sentences and 2.11M, 2.22M, and 2.33M words in Tamil, Sinhala, and English, respectively. This study shows that fine-tuning Tesseract models on multiple new fonts helps the engine understand the texts and enhances the performance of the OCR. We make the newly trained models and the source code for fine-tuning Tesseract freely available. ",Adapting the Tesseract Open-Source OCR Engine for Tamil and Sinhala Legacy Fonts and Creating a Parallel Corpus for Tamil-Sinhala-English " Data visualization captions help readers understand the purpose of a visualization and are crucial for individuals with visual impairments. The prevalence of poor figure captions and the successful application of deep learning approaches to image captioning motivate the use of similar techniques for automated figure captioning. However, research in this field has been stunted by the lack of suitable datasets. We introduce LineCap, a novel figure captioning dataset of 3,528 figures, and we provide insights from curating this dataset and using end-to-end deep learning models for automated figure captioning. ",LineCap: Line Charts for Data Visualization Captioning Models " We present an innovative two-headed attention layer that combines geometric and latent features to segment a 3D scene into semantically meaningful subsets.
Each head combines local and global information of a neighborhood of points, using either the geometric or the latent features, and uses this information to learn better local relationships. This Geometric-Latent attention layer (Ge-Latto) is combined with a sub-sampling strategy to capture global features. Our method is invariant to permutation thanks to the use of shared-MLP layers, and it can also be used with point clouds with varying densities because the local attention layer does not depend on the neighbor order. Our proposal is simple yet robust, which allows it to achieve competitive results on the ShapeNetPart and ModelNet40 datasets, and the state of the art when segmenting the complex S3DIS dataset, with 69.2% IoU on Area 5 and 89.7% overall accuracy using K-fold cross-validation on the 6 areas. ",Two Heads are Better than One: Geometric-Latent Attention for Point Cloud Classification and Segmentation " We propose a new method for the production of ultracold molecular ions. This method utilizes sympathetic cooling due to the strong collisions between appropriately chosen molecular ions and laser-cooled neutral atoms to realize ultracold, internal ground-state molecular ions. In contrast to other experiments producing cold molecular ions, our proposed method efficiently cools both the internal and external molecular ion degrees of freedom. The availability of an ultracold, absolute ground-state sample of molecular ions would have broad impact on fields as diverse as quantum chemistry, astrophysics, and fundamental physics; and may lead to the development of a robust, scalable quantum computer. ",A new method for producing ultracold molecular ions Scholarly communication is today immersed in a publish-or-perish culture that propels noncooperative behaviour in the sense of strategic games played by researchers. Here we introduce and describe a blockchain-based platform for decentralized scholarly communication.
The design of the platform rests on community-driven publishing and reviewing processes and implements incentives that promote cooperative user behaviour. Key to achieving cooperation in blockchain-based scholarly communication is to transform a static research paper into a modifiable research paper under a continuous peer-review process. We describe and discuss the implementation of a modifiable research paper as a smart contract on the blockchain. ,Publish-and-Flourish: decentralized co-creation and curation of scholarly content " We perform a suite of smoothed particle hydrodynamics simulations to investigate in detail the results of a giant impact on the young Uranus. We study the internal structure, rotation rate, and atmospheric retention of the post-impact planet, as well as the composition of material ejected into orbit. Most of the material from the impactor's rocky core falls into the core of the target. However, for higher angular momentum impacts, significant amounts become embedded anisotropically as lumps in the ice layer. Furthermore, most of the impactor's ice and energy is deposited in a hot, high-entropy shell at a radius of ~3 Earth radii. This could explain Uranus' observed lack of heat flow from the interior and be relevant for understanding its asymmetric magnetic field. We verify the results from the single previous study of lower resolution simulations that an impactor with a mass of at least 2 Earth masses can produce sufficiently rapid rotation in the post-impact Uranus for a range of angular momenta. At least 90% of the atmosphere remains bound to the final planet after the collision, but over half can be ejected beyond the Roche radius by a 2 or 3 Earth mass impactor. This atmospheric erosion peaks for intermediate impactor angular momenta (~3*10^36 kg m^2 s^-1).
Rock is more efficiently placed into orbit and made available for satellite formation by 2 Earth mass impactors than 3 Earth mass ones, because it requires tidal disruption that is suppressed by the more massive impactors. ","Consequences of Giant Impacts on Early Uranus for Rotation, Internal Structure, Debris, and Atmospheric Erosion" " Given $E \subseteq \mathbb{F}_q^d \times \mathbb{F}_q^d$, with the finite field $\mathbb{F}_q$ of order $q$ and the integer $d \ge 2$, we define the two-parameter distance set as $\Delta_{d, d}(E)=\left\{\left(\|x_1-y_1\|, \|x_2-y_2\|\right) : (x_1,x_2), (y_1,y_2) \in E \right\}$. Birklbauer and Iosevich (2017) proved that if $|E| \gg q^{\frac{3d+1}{2}}$, then $ |\Delta_{d, d}(E)| = q^2$. For the case of $d=2$, they showed that if $|E| \gg q^{\frac{10}{3}}$, then $ |\Delta_{2, 2}(E)| \gg q^2$. In this paper, we present extensions and improvements of these results. ",On the two-parameter Erd\H{o}s-Falconer distance problem over finite fields " One of the main problems for extracting the Cosmic Microwave Background (CMB) from submm/mm observations is to correct for the Galactic components, mainly synchrotron, free - free and thermal dust emission with the required accuracy. Through a series of papers, it has been demonstrated that this task can be fulfilled by means of simple neural networks with high confidence. The main purpose of this paper is to demonstrate that the CMB BB power spectrum detected in the Planck 2015 polarization maps is present in the improved Planck 2017 maps with higher signal-to-noise ratio. Two features have been detected in the EB power spectrum in the new data set, both with S/N $\sim$4 . The origin of these features is most likely leakage from E to B with a level of about 1 per cent. This leakage gives no significant contribution to the detected BB power spectrum. The TB power spectrum is consistent with a zero signal. 
Altogether, the BB power spectrum is not consistent with the 'canonical' tensor-to-scalar models combined with gravitational lensing spectra. These results give additional strong arguments in support of the proposed polarization satellite projects that would follow up on the Planck mission. ",Confirmation of the detection of B-modes in the Planck polarization maps " Modern astronomical data processing requires complex software pipelines to process ever-growing datasets. For radio astronomy, these pipelines have become so large that they need to be distributed across a computational cluster. This makes it difficult to monitor the performance of each pipeline step. To gain insight into the performance of each step, a performance monitoring utility needs to be integrated with the pipeline execution. In this work we have developed such a utility and integrated it with the calibration pipeline of the Low Frequency Array, LOFAR, a leading radio telescope. We tested the tool by running the pipeline on several different compute platforms and collected the performance data. Based on these data, we make well-informed recommendations on future hardware and software upgrades. The aim of these upgrades is to accelerate the slowest processing steps for this LOFAR pipeline. The pipeline collector suite is open source and will be incorporated in future LOFAR pipelines to create a performance database for all LOFAR processing. ",Pipeline Collector: gathering performance data for distributed astronomical pipelines " We explore thermodynamic contributions to the three-dimensional de Sitter horizon originating from metric and Chern-Simons gauge field fluctuations. In Euclidean signature these are computed by the partition function of gravity coupled to matter semi-classically expanded about the round three-sphere saddle.
We investigate a corresponding Lorentzian picture - drawing inspiration from the topological entanglement entropy literature - in the form of an edge-mode theory residing at the de Sitter horizon. We extend the discussion to three-dimensional gravity with positive cosmological constant, viewed (semi-classically) as a complexified Chern-Simons theory. The putative gravitational edge-mode theory is a complexified version of the chiral Wess-Zumino-Witten model associated to the edge-modes of ordinary Chern-Simons theory. We introduce and solve a family of complexified Abelian Chern-Simons theories as a way to elucidate some of the more salient features of the gravitational edge-mode theories. We comment on the relation to the AdS$_4$/CFT$_3$ correspondence. ",Three-dimensional de Sitter horizon thermodynamics " Information is physical but information is also processed in finite time. Where computing protocols are concerned, finite-time processing in the quantum regime can dynamically generate coherence. Here we show that this can have significant thermodynamic implications. We demonstrate that quantum coherence generated in the energy eigenbasis of a system undergoing a finite-time information erasure protocol yields rare events with extreme dissipation. These fluctuations are of purely quantum origin. By studying the full statistics of the dissipated heat in the slow driving limit, we prove that coherence provides a non-negative contribution to all statistical cumulants. Using the simple and paradigmatic example of single bit erasure, we show that these extreme dissipation events yield distinct, experimentally distinguishable signatures. ",Quantum fluctuations hinder finite-time information erasure near the Landauer limit " We study strategic information transmission in a hierarchical setting where information gets transmitted through a chain of agents up to a decision maker whose action is of importance to every agent. 
This situation could arise whenever an agent can communicate to the decision maker only through a chain of intermediaries, for example, an entry-level worker and the CEO in a firm, or an official at the bottom of the chain of command and the president in a government. Each agent can decide to conceal part or all of the information she receives. Proving that we can focus on simple equilibria, where the only player who conceals information is the first one, we provide a tractable recursive characterization of the equilibrium outcome, and show that it could be inefficient. Interestingly, in the binary-action case, regardless of the number of intermediaries, there are a few pivotal ones who determine the amount of information communicated to the decision maker. In this case, our results underscore the importance of choosing a pivotal vice president for maximizing the payoff of the CEO or president. ",Hierarchical Bayesian Persuasion: Importance of Vice Presidents " We propose a development of the Analytic Hierarchy Process (AHP) that permits using the methodology also in decision problems with a very large number of alternatives evaluated with respect to several criteria. While the application of the original AHP method involves many pairwise comparisons between alternatives and criteria, our proposal is composed of four steps: (i) direct evaluation of the alternatives at hand on the considered criteria; (ii) selection of some reference evaluations; (iii) application of the original AHP method to the reference evaluations; (iv) revision of the direct evaluations on the basis of the prioritization supplied by AHP on the reference evaluations. The new proposal has been tested and validated in an experiment conducted on a sample of university students. The new methodology has therefore been applied to a real-world problem involving the evaluation of 21 Social Housing initiatives sited in the Piedmont region (Italy). 
To take into account interactions between criteria, the Choquet integral preference model has been considered within a Non-Additive Robust Ordinal Regression approach. ",Using a new parsimonious AHP methodology combined with the Choquet integral: An application for evaluating social housing initiatives " In this paper we look for standing waves for nonlinear Schr\""odinger equations $$ i\frac{\partial \psi}{\partial t}+\Delta \psi - g(|y|) \psi -W^{\prime}(| \psi |)\frac{\psi}{| \psi |}=0 $$ with cylindrically symmetric potentials $g$ vanishing at infinity and non-increasing, and a $C^1$ nonlinear term satisfying weak assumptions. In particular we show the existence of standing waves with non-vanishing angular momentum and prescribed $L^2$ norm. The solutions are obtained via a minimization argument, and the proof is given for an abstract functional which presents a lack of compactness. As a particular case we prove the existence of standing waves with non-vanishing angular momentum for the nonlinear hydrogen atom equation. ","Nonlinear Schr\""odinger equations with strongly singular potentials" " The processes of hole localization in the Y3Al5O12 and Lu3Al5O12 single crystals were investigated by electron paramagnetic resonance (EPR) and thermally stimulated luminescence (TSL). It was found that holes created by x-ray irradiation at 77 K are predominantly self-trapped at regular oxygen ions, forming an O- hole center. This self-trapped hole (STH) center is thermally stable up to about 100 K in both YAG and LuAG crystals. At higher temperatures, thermally liberated holes are retrapped at oxygen ions in the vicinity of an acceptor ion, such as Mg2+ or an Al_{Y} or Al_{Lu} antisite ion, which leads to an increase of the thermal stability of the trapped hole to approximately 150 K. TSL measurements show two composite glow peaks in the temperature range of 77 - 280 K, the temperature positions of which correlate well with the thermal stability of the O- centers. 
The hole thermal ionization energy was determined from a numerical fit of the TSL peaks within the model of second-order kinetics. It is in the range of 0.25 - 0.26 eV for the O- STH center, and increases to 0.41 - 0.45 eV for the O- center stabilized by the acceptor. The revealed O- centers can be attributed to O- small polarons formed mainly due to the hole stabilization by short-range interaction with the surrounding lattice. ",Hole self-trapping in the Y3Al5O12 and Lu3Al5O12 garnet crystals " Data concerning the users and usage of Online Social Networks (OSNs) has become available externally, from public resources (e.g., user profiles), participation in OSNs (e.g., establishing relationships and recording transactions such as user updates) and APIs of the OSN provider (such as the Twitter API). APIs let OSN providers monetize the release of data while helping control measurement load, e.g. by providing samples with different cost-granularity tradeoffs. To date, this approach has been more suited to releasing transactional data, with graphical data still being obtained by resource-intensive methods such as graph crawling. In this paper, we propose a method for OSNs to provide samples of the user graph of tunable size, in non-intersecting increments, with sample selection that can be weighted to enhance accuracy when estimating different features of the graph. ",Efficient Sampling for Better OSN Data Provisioning " We consider the non-resonant mixing between photons and scalar ALPs with masses much less than the plasma frequency along the path, with specific reference to the chameleon scalar field model. The mixing would alter the intensity and polarization state of the cosmic microwave background (CMB) radiation. We find that the average modification to the CMB polarization modes is negligible. 
However, the average modification to the CMB intensity spectrum is more significant, and we compare this to high-precision measurements of the CMB monopole made by the Far Infrared Absolute Spectrophotometer (FIRAS) on board the COBE satellite. The resulting 95\% confidence limit on the scalar-photon conversion probability in the primordial field (at 100 GHz) is P < 2.6x10^{-2}. This corresponds to a degenerate constraint on the photon-scalar coupling strength, g, and the magnitude of the primordial magnetic field. Taking the upper bound on the strength of the primordial magnetic field derived from the CMB power spectra, B < 5.0x10^{-9}G, this would imply an upper bound on the photon-scalar coupling strength in the range g < 7.14x10^{-13}GeV^{-1} to g < 9.20x10^{-14}GeV^{-1}, depending on the power spectrum of the primordial magnetic field. ",Chameleon-Photon Mixing in a Primordial Magnetic Field " This paper concerns the long-time asymptotics of diffusions with degenerate coefficients at the domain's boundary. Degenerate diffusion operators with mixed linear and quadratic degeneracies find applications in the analysis of asymmetric transport at edges separating topological insulators. In one space dimension, we characterize all possible invariant measures for such a class of operators and in all cases show exponential convergence of the Green's kernel to such invariant measures. We generalize the results to a class of two-dimensional operators including those used in the analysis of topological insulators. Several numerical simulations illustrate our theoretical findings. ",Long time asymptotics of mixed-type Kimura diffusions " The meteorology of hot Jupiters has been characterized primarily with thermal measurements, but recent observations suggest the possibility of directly detecting the winds by observing the Doppler shift of spectral lines seen during transit. 
Motivated by these observations, we show how Doppler measurements can place powerful constraints on the meteorology. We show that the atmospheric circulation--and Doppler signature--of hot Jupiters splits into two regimes. Under weak stellar insolation, the day-night thermal forcing generates fast zonal jet streams from the interaction of atmospheric waves with the mean flow. In this regime, air along the terminator (as seen during transit) flows toward Earth in some regions and away from Earth in others, leading to a Doppler signature exhibiting superposed blueshifted and redshifted components. Under intense stellar insolation, however, the strong thermal forcing damps these planetary-scale waves, inhibiting their ability to generate jets. Strong frictional drag likewise damps these waves and inhibits jet formation. As a result, this second regime exhibits a circulation dominated by high-altitude, day-to-night airflow, leading to a predominantly blueshifted Doppler signature during transit. We present state-of-the-art circulation models including nongray radiative transfer to quantify this regime shift and the resulting Doppler signatures; these models suggest that cool planets like GJ 436b lie in the first regime, HD 189733b is transitional, while planets hotter than HD 209458b lie in the second regime. Moreover, we show how the amplitude of the Doppler shifts constrains the strength of frictional drag in the upper atmospheres of hot Jupiters. If due to winds, the ~2-km/sec blueshift inferred on HD 209458b may require drag time constants as short as 10^4-10^6 seconds, possibly the result of Lorentz-force braking on this planet's hot dayside. ",Doppler Signatures of the Atmospheric Circulation on Hot Jupiters " Real-world experiments are expensive, and thus it is important to reach a target in a minimum number of experiments. Experimental processes often involve control variables that change over time. 
Such problems can be formulated as functional optimisation problems. We develop a novel Bayesian optimisation framework for such functional optimisation of expensive black-box processes. We represent the control function using a Bernstein polynomial basis and optimise in the coefficient space. We derive the theory and practice required to dynamically adjust the degree of the polynomial, and show how prior information about shape can be integrated. We demonstrate the effectiveness of our approach for short polymer fibre design and for optimising learning rate schedules for deep networks. ",Bayesian functional optimisation with shape prior " Recently, the topic of Casimir repulsion has received a great deal of attention, largely because of the possibility of technological application. The general subject has a long history, going back to the self-repulsion of a conducting spherical shell and the repulsion between a perfect electric conductor and a perfect magnetic conductor. Recently it has been observed that repulsion can be achieved between ordinary conducting bodies, provided sufficient anisotropy is present. For example, an anisotropic polarizable atom can be repelled near an aperture in a conducting plate. Here we provide new examples of this effect, including the repulsion on such an atom moving on a trajectory nonintersecting a conducting cylinder; in contrast, such repulsion does not occur outside a sphere. Classically, repulsion does occur between a conducting ellipsoid placed in a uniform electric field and an electric dipole. The Casimir-Polder force between an anisotropic atom and an anisotropic dielectric semispace does not exhibit repulsion. The general systematics of repulsion are becoming clear. 
","Casimir-Polder repulsion: Polarizable atoms, cylinders, spheres, and ellipsoids" " We generalize the theory of Lorentz-covariant distributions to broader classes of functionals including ultradistributions, hyperfunctions, and analytic functionals with tempered growth. We prove that Lorentz-covariant functionals with essential singularities can be decomposed into polynomial covariants and establish the possibility of the invariant decomposition of their carrier cones. We describe the properties of odd highly singular generalized functions. These results are used to investigate the vacuum expectation values of nonlocal quantum fields with arbitrary high-energy behavior and to extend the spin--statistics theorem to nonlocal field theory. ","Lorentz-covariant ultradistributions, hyperfunctions, and analytic functionals" " The online weighted matching problem is a fundamental problem in machine learning due to its numerous applications. Despite many efforts in this area, existing algorithms are either too slow or don't take $\mathrm{deadline}$ (the longest time a node can be matched) into account. In this paper, we first introduce a market model with $\mathrm{deadline}$. Next, we present our two optimized algorithms (\textsc{FastGreedy} and \textsc{FastPostponedGreedy}) and offer theoretical proofs of the time complexity and correctness of our algorithms. In the \textsc{FastGreedy} algorithm, we already know whether a node is a buyer or a seller. But in the \textsc{FastPostponedGreedy} algorithm, the status of each node is unknown at first. Then, we generalize a sketching matrix to run the original and our algorithms on both real data sets and synthetic data sets. Let $\epsilon \in (0,0.1)$ denote the relative error of the real weight of each edge. The competitive ratios of the original \textsc{Greedy} and \textsc{PostponedGreedy} are $\frac{1}{2}$ and $\frac{1}{4}$, respectively.
Based on these two original algorithms, we propose the \textsc{FastGreedy} and \textsc{FastPostponedGreedy} algorithms, whose competitive ratios are $\frac{1 - \epsilon}{2}$ and $\frac{1 - \epsilon}{4}$, respectively. At the same time, our algorithms run faster than the original two algorithms. Given $n$ nodes in $\mathbb{R} ^ d$, we decrease the time complexity from $O(nd)$ to $\widetilde{O}(\epsilon^{-2} \cdot (n + d))$. ",Fast and Efficient Matching Algorithm with Deadline Instances " Tautological bundles of realizations of matroids were introduced in [BEST23] as a unifying geometric model for studying matroids. We compute the cohomologies of exterior and symmetric powers of these vector bundles, and show that they depend only on the matroid of the realization. As an application, we show that the log canonical bundle of a wonderful compactification of a hyperplane arrangement complement, in particular the moduli space of pointed rational curves, has vanishing higher cohomologies. ",Cohomologies of tautological bundles of matroids " We prove that the diameter of any unweighted connected graph G is O(k log n/lambda_k), for any k >= 2. Here, lambda_k is the k-th smallest eigenvalue of the normalized Laplacian of G. This solves a problem posed by Gil Kalai. ",A Universal upper bound on Graph Diameter based on Laplacian Eigenvalues " Quantum key distribution is a cryptographic primitive for the distribution of symmetric encryption keys between two parties that possess a pre-shared secret. Since the pre-shared secret is a requirement, quantum key distribution may be viewed as a key growing protocol. We note that the use of pre-shared secrets coupled with access to randomness beacons may enable key growing which, though not secure from an information-theoretic standpoint, remains quantum safe. ",A note on quantum safe symmetric key growing " In this paper, we propose a low-error-rate and real-time stereo vision system on GPU. 
Many stereo vision systems on GPU have been proposed to date. In those systems, the error rates and the processing speed are in a trade-off relationship. We propose a real-time stereo vision system on GPU for high-resolution images. This system also maintains a low error rate compared to other fast systems. In our approach, we have implemented the cost aggregation (CA), cross-checking and median filter on GPU in order to realize real-time processing. Its processing speed is 40 fps for 1436x992-pixel images when the maximum disparity is 145, and its error rate is the lowest among the GPU systems which are faster than 30 fps. ",Real-Time High-Quality Stereo Matching System on a GPU " It is essential to explore two-dimensional (2D) materials with magnetic ordering for new-generation spintronic devices. In particular, the search for room-temperature 2D ferromagnetic (FM) materials is a hot topic of current research. Here, we study the magnetism of the Mn-doped and electron-doped SiC monolayer using first-principles calculations. For the Mn-doped SiC monolayer, we find that either electrons or holes could mediate the ferromagnetism in the system and the Curie temperature ($T_C$) can be improved by appropriate carrier doping. The codoping strategy is also discussed for improving $T_C$. The transition between the antiferromagnetic and FM phases can be found by strain engineering. The $T_C$ is improved above room temperature (RT) under strains larger than $0.06$. Moreover, the Mn-doped SiC monolayer develops half-metallicity in the strain range of $0.05-0.1$. On the other hand, direct electron doping can induce ferromagnetism due to the van Hove singularity in the density of states of the conduction band edge of the SiC monolayer. The $T_C$ is found to be around RT. These fascinating controllable electronic and magnetic properties are desired for spintronic applications. 
",Tunable room-temperature ferromagnetism in the SiC monolayer " We propose a quadratic unconstrained binary optimization (QUBO) formulation of rectified linear unit (ReLU) type functions. Different from the q-loss function proposed by Denchev et al. (2012), a simple discussion based on the Legendre duality is not sufficient to obtain the QUBO formulation of the ReLU-type functions. In addition to the Legendre duality, we employ the Wolfe duality, and the QUBO formulation of the ReLU-type functions is derived. The QUBO formulation is available in Ising-type annealing methods, including quantum annealing machines. ",Quadratic unconstrained binary optimization formulation for rectified-linear-unit-type functions " We resolve the local semistable reduction problem for overconvergent F-isocrystals at monomial valuations (Abhyankar valuations of height 1 and residue transcendence degree 0). We first introduce a higher-dimensional analogue of the generic radius of convergence for a p-adic differential module, which obeys a convexity property. We then combine this convexity property with a form of the p-adic local monodromy theorem for so-called fake annuli. ","Semistable reduction for overconvergent F-isocrystals, III: Local semistable reduction at monomial valuations" " Multi-armed bandit (MAB) algorithms are efficient approaches to reduce the opportunity cost of online experimentation and are used by companies to find the best product from periodically refreshed product catalogs. However, these algorithms face the so-called cold-start problem at the onset of the experiment, due to a lack of knowledge of customer preferences for new products, requiring an initial data collection phase known as the burn-in period. During this period, MAB algorithms operate like randomized experiments, incurring large burn-in costs which scale with the large number of products. 
We attempt to reduce the burn-in by observing that many products can be cast as two-sided products, and then naturally model the rewards of the products with a matrix, whose rows and columns represent the two sides, respectively. Next, we design two-phase bandit algorithms that first use subsampling and low-rank matrix estimation to obtain a substantially smaller targeted set of products and then apply a UCB procedure on the target products to find the best one. We theoretically show that the proposed algorithms lower costs and expedite the experiment in cases where there is limited experimentation time along with a large product set. Our analysis also reveals three regimes of long, short, and ultra-short horizon experiments, depending on the dimensions of the matrix. Empirical evidence from both synthetic data and a real-world dataset on music streaming services validates this superior performance. ",Speed Up the Cold-Start Learning in Two-Sided Bandits with Many Arms " The Quadratic Assignment Problem (QAP) is a well-known permutation-based combinatorial optimization problem with real applications in industrial and logistics environments. Motivated by the challenge that this NP-hard problem represents, it has captured the attention of the optimization community for decades. As a result, a large number of algorithms have been proposed to tackle this problem. Among these, exact methods are only able to solve instances of size $n<40$. To overcome this limitation, many metaheuristic methods have been applied to the QAP. In this work, we follow this direction by approaching the QAP through Estimation of Distribution Algorithms (EDAs). Particularly, a non-parametric distance-based exponential probabilistic model is used. Based on the analysis of the characteristics of the QAP, and previous work in the area, we introduce Kernels of Mallows Models under the Hamming distance to the context of EDAs. 
The conducted experiments show that the performance of the proposed algorithm on the QAP is superior to (i) the classical EDAs adapted to deal with the QAP, and (ii) the specific EDAs proposed in the literature to deal with permutation problems. ",Kernels of Mallows Models under the Hamming Distance for solving the Quadratic Assignment Problem " Originally motivated by a stability problem in Fluid Mechanics, we study the spectral and pseudospectral properties of the differential operator $H_\epsilon = -\partial_x^2 + x^2 + i\epsilon^{-1}f(x)$ on $L^2(R)$, where $f$ is a real-valued function and $\epsilon > 0$ a small parameter. We define $\Sigma(\epsilon)$ as the infimum of the real part of the spectrum of $H_\epsilon$, and $\Psi(\epsilon)^{-1}$ as the supremum of the norm of the resolvent of $H_\epsilon$ along the imaginary axis. Under appropriate conditions on $f$, we show that both quantities $\Sigma(\epsilon)$, $\Psi(\epsilon)$ go to infinity as $\epsilon \to 0$, and we give precise estimates of the growth rate of $\Psi(\epsilon)$. We also provide an example where $\Sigma(\epsilon)$ is much larger than $\Psi(\epsilon)$ if $\epsilon$ is small. Our main results are established using variational ""hypocoercive"" methods, localization techniques and semiclassical subelliptic estimates. ",Spectral asymptotics for large skew-symmetric perturbations of the harmonic oscillator " We solve the parametric generalized effective Schr\""odinger equation with a specific choice of position-dependent mass function and Morse oscillator potential by means of the Nikiforov-Uvarov (NU) method combined with the Pekeris approximation scheme. All bound-state energies are found explicitly and all corresponding radial wave functions are built analytically. 
We choose the Weyl or Li and Kuhn ordering for the ambiguity parameters in our numerical work to calculate the energy spectrum for a few diatomic molecules with arbitrary vibration and rotation quantum numbers and different position-dependent mass functions. Two special cases, including the constant mass and the vibrational s-wave (l = 0), are also investigated. ",Effective Schroedinger equation with general ordering ambiguity position-dependent mass Morse potential " We present low-temperature anelastic and dielectric spectroscopy measurements on the perovskite ionic conductor BaCe(1-x)Y(x)O(3-x/2) in the protonated, deuterated and outgassed states. Three main relaxation processes are ascribed to proton migration, reorientation about a Y dopant and tunneling around the same O atom. An additional relaxation maximum appears only in the dielectric spectrum around 60 K, and does not involve H motion, but may be of electronic origin, e.g. small polaron hopping. The peak at the lowest temperature, assigned to H tunneling, has been fitted with a relaxation rate presenting crossovers from one-phonon transitions, nearly independent of temperature, to two-phonon processes, varying as T^7, to Arrhenius-like. Substituting H with D lowers the overall rate by a factor of 8. The corresponding peak in the dielectric loss has an intensity nearly 40 times smaller than expected from the classical reorientation of the electric dipole associated with the OH complex. This fact is discussed in terms of coherent tunneling states of H in a cubic and orthorhombically distorted lattice, possibly indicating that only H in the symmetric regions of twin boundaries exhibits tunneling, and in terms of reduction of the effective dipole due to lattice polarization. ",Hydrogen tunneling in the perovskite ionic conductor BaCe(1-x)Y(x)O(3-d) " We present a numerical classification of the spherically symmetric, static solutions to the Einstein--Yang--Mills equations with cosmological constant $\Lambda$. 
We find three qualitatively different classes of configurations, where the solutions in each class are characterized by the value of $\Lambda$ and the number of nodes, $n$, of the Yang--Mills amplitude. For sufficiently small, positive values of the cosmological constant, $\Lambda < \Lambda_{\rm crit}(n)$, the solutions generalize the Bartnik--McKinnon solitons, which are now surrounded by a cosmological horizon and approach the de Sitter geometry in the asymptotic region. For a discrete set of values $\Lambda_{\rm reg}(n) > \Lambda_{\rm crit}(n)$, the solutions are topologically $3$--spheres, the ground state $(n=1)$ being the Einstein Universe. In the intermediate region, that is for $\Lambda_{\rm crit}(n) < \Lambda < \Lambda_{\rm reg}(n)$, there exists a discrete family of global solutions with horizon and ``finite size''. ",Cosmological Analogues of the Bartnik--McKinnon Solutions " The compressibility of boron subphosphide B12P2 has been studied under quasi-hydrostatic conditions up to 26 GPa and 2600 K using a laser-heated diamond anvil cell and angle-dispersive synchrotron X-ray diffraction. A fit to the 300-K data yields the values of the bulk modulus B0 = 192(11) GPa and its first pressure derivative B0' = 5.5(12). At ambient pressure the thermal expansion is quasi-linear up to 1300 K, with an average volume expansion coefficient {\alpha} = 17.4(1)x10^{-6} K^{-1}. The whole set of experimental p-V-T data is well described by the Anderson-Gr\""uneisen model with {\delta}T = 6. ",Thermoelastic equation of state of boron subphosphide B12P2 " Recently, Hirsch (2019a) proposed a new variant of the h index called the $h_\alpha$ index. He formulated it as follows: ""we define the $h_\alpha$ index of a scientist as the number of papers in the h-core of the scientist (i.e. the set of papers that contribute to the h-index of the scientist) where this scientist is the $\alpha$-author"" (p. 673). The $h_\alpha$ index was criticized by Leydesdorff, Bornmann, and Opthof (2019). 
One of their most important points is that the index reinforces the Matthew effect in science. We address this point in the current study using a recently developed Stata command (h_index) and R package (hindex), which can be used to simulate h index and $h_\alpha$ index applications in research evaluation. The user can investigate under which conditions $h_\alpha$ reinforces the Matthew effect. The results of our study confirm what Leydesdorff et al. (2019) expected: the $h_\alpha$ index reinforces the Matthew effect. This effect can be intensified if strategic behavior of the publishing scientists and cumulative advantage effects are additionally considered in the simulation. ",Does the $h_\alpha$ index reinforce the Matthew effect in science? Agent-based simulations using Stata and R " Based on the framework of the Isospin-Dependent Quantum Molecular Dynamics (IQMD) model, in which the initial neutron and proton densities are sampled according to the droplet model, the correlation between the triton-to-$^{3}$He yield ratio (R(t/$^{3}$He)$=$Yield(t)/Yield($^{3}$He)) and the neutron skin thickness (${\delta}_{np}$) in neutron-rich projectile-induced reactions is investigated. By changing the diffuseness parameter of the neutron density distribution in the droplet model for the projectile to obtain different ${\delta}_{np}$, the relationship between ${\delta}_{np}$ and the corresponding R(t/$^{3}$He) in semi-peripheral collisions is obtained. The calculated results show that R(t/$^{3}$He) has a strong linear correlation with ${\delta}_{np}$ for the neutron-rich $^{50}$Ca and $^{68}$Ni nuclei. It is suggested that R(t/$^{3}$He) could be regarded as a good experimental observable to extract ${\delta}_{np}$ for neutron-rich nuclei because the yields of the charged particles triton and $^{3}$He can be measured quite precisely. 
",Triton/$^{3}$He ratio as an observable for neutron skin thickness A numerical external solution of the exact system of the RTG equations with natural boundary conditions was obtained in the static spherically symmetric case. The properties of the solution are discussed. ,Numerical Spherically Symmetric Static Solution of the RTG Equations Outside the Matter " I compare several network-level measures of centrality to common measures of author reputation and influence (e.g. hindex, i10index), all taken over the data set of papers published in 2017 at major computer systems conferences and some controls. I hypothesize that centrality measures will correlate strongly with the reputation and influence measures. My results confirm several expected correlations and exhibit a few surprising absences of correlation. In particular, there was an absence of statistically significant correlation between degree centrality and hindex. ",A Network-Level View of Author Influence " We describe and evaluate a novel white-box fuzzer for C programs named FuSeBMC, which combines fuzzing and symbolic execution, and applies Bounded Model Checking (BMC) to find security vulnerabilities in C programs. FuSeBMC explores and analyzes C programs (1) to find execution paths that lead to property violations and (2) to incrementally inject labels to guide the fuzzer and the BMC engine to produce test-cases for code coverage. FuSeBMC successfully participated in Test-Comp'21, achieving first place in the Cover-Error category and second place in the Overall category. ",FuSeBMC: A White-Box Fuzzer for Finding Security Vulnerabilities in C Programs " In Domain Adaptation (DA), where the feature distributions of the source and target domains are different, various distance-based methods have been proposed to minimize the discrepancy between the source and target domains to handle the domain shift. 
In this paper, we propose a new similarity function, which is called Population Correlation (PC), to measure the domain discrepancy for DA. Based on the PC function, we propose a new method called Domain Adaptation by Maximizing Population Correlation (DAMPC) to learn a domain-invariant feature representation for DA. Moreover, most existing DA methods use hand-crafted bottleneck networks, which may limit the capacity and flexibility of the corresponding model. Therefore, we further propose a method called DAMPC with Neural Architecture Search (DAMPC-NAS) to search for the optimal network architecture for DAMPC. Experiments on several benchmark datasets, including Office-31, Office-Home, and VisDA-2017, show that the proposed DAMPC-NAS method achieves better results than state-of-the-art DA methods. ",Domain Adaptation by Maximizing Population Correlation with Neural Architecture Search " The Cauchy problem for the Fokker-Planck-Boltzmann equation under Grad's angular cut-off assumption is investigated. When the initial data is a small perturbation of an equilibrium state, global existence and optimal temporal decay estimates of classical solutions are established. Our analysis is based on the coercivity of the Fokker-Planck operator and an elementary weighted energy method. ",Global Existence and Decay of Solutions to the Fokker-Planck-Boltzmann Equation " We numerically solve the effective loop quantum cosmology dynamics for the vacuum Bianchi type II and type IX spacetimes, in particular studying how the Kasner exponents evolve across the loop quantum cosmology bounce. We find that when the spatial curvature is negligible at the bounce then the Kasner exponents transform according to the same simple equation as for a Bianchi type I spacetime in effective loop quantum cosmology, while there are departures from this transformation rule in cases where the spatial curvature is significant during the bounce. 
We also use high-precision numerics to compute the evolution of a Bianchi type IX spacetime through multiple bounces and recollapses, and find indications of chaotic behaviour. Interestingly, the numerics indicate that it is during the classical recollapse, and not the loop quantum cosmology bounce, that nearby solutions diverge most strongly. ",Numerics of Bianchi type II and type IX spacetimes in effective loop quantum cosmology " In a previous paper with Schmid (math.NT/0402382) we considered the regularity of automorphic distributions for GL(2,R), and its connections to other topics in number theory and analysis. In this paper we turn to the higher rank setting, establishing the nontrivial bound $\sum_{n < T} a_n \exp(2 \pi i n \alpha) = O_\epsilon (T^{3/4+\epsilon})$ uniformly for $\alpha$ real, for $a_n$ the coefficients of the L-function of a cusp form on GL(3,Z)\GL(3,R). We also derive an equivalence (Theorem 7.1) between analogous cancellation statements for cusp forms on GL(n,R), and the sizes of certain period integrals. These in turn imply estimates for the second moment of cusp form L-functions. ",Cancellation in additively twisted sums on GL(n) " The production of all identified hadrons at the CERN Large Hadron Collider (LHC) is studied with emphasis on the $p_T$ distributions up to 20 GeV/c in central collisions at $\sqrt{s_{NN}}=2.76$ TeV. The parton recombination model is used to determine the hadronic $p_T$ spectra from the quark distributions. From the heavy hyperon spectra it is known from earlier studies that the $u, d, s$ thermal distributions in $p_T$ are exponential with large inverse slopes that cannot be identified with any temperature in conventional fluid models. They are used as inputs in our model together with shower partons determined from our treatment of momentum degradation that uses high-$p_T$ pion data as input. 
Those thermal and shower partons are used to calculate the $p_T$ distributions of all observed hadrons ($\pi, K, p, \Lambda$, $\Xi$, $\Omega$ and $\phi$) over wide ranges of $p_T$, so the system is highly constrained. We show how well the LHC data can be reproduced with only a few parameters to adjust. Centrality dependence has not been studied. What is learned is that minijets are important, not only in giving rise to abundant shower partons, but also in the conversion of semihard partons in the medium to soft partons that enhance the thermal partons. Since the conversion process can occur throughout the expansion phase of the high-density medium, this work provides the basis for questioning the validity of the assumption of rapid equilibration. ",A unified study of the production of all identified hadrons over wide ranges of transverse momenta at LHC " Considering a conversation thread, stance classification aims to identify the opinion (e.g. agree or disagree) of replies towards a given target. The target of the stance is expected to be an essential component in this task, being one of the main factors that make it different from sentiment analysis. However, a recent study shows that a target-oblivious model outperforms target-aware models, suggesting that targets are not useful when predicting stance. This paper re-examines this phenomenon for rumour stance classification (RSC) on social media, where a target is a rumour story implied by the source tweet in the conversation. We propose adversarial attacks in the test data, aiming to assess the models' robustness and evaluate the role of the data in the models' performance. Results show that state-of-the-art models, including approaches that use the entire conversation thread, rely overly on superficial signals. Our hypothesis is that the naturally high occurrence of target-independent direct replies in RSC (e.g. 
""this is fake"" or just ""fake"") results in the impressive performance of target-oblivious models, highlighting the risk of target instances being treated as noise during training. ",Evaluating the Role of Target Arguments in Rumour Stance Classification " We present the results of a search for pair production of scalar top quarks in an R-parity violating supersymmetry scenario in 106 pb-1 of ppbar collisions at $\sqrt{s} = 1.8$ TeV collected by the Collider Detector at Fermilab. In this mode each scalar top quark decays into a tau lepton and a b quark. We search for events with two tau's, one decaying leptonically (e or mu) and one decaying hadronically, and two jets. No candidate events pass our final selection criteria. We set a 95% confidence level lower limit on the scalar top quark mass at 122 GeV/c2 for Br (stop-> tau + b) = 1. ",Search For Pair Production of Scalar Top Quarks in R-parity Violating Decay Modes in ppbar Collisions at sqrt{s} = 1.8 TeV " We present a method to derive an upper bound for the entropy density of coupled map lattices with local interactions from local observations. To do this, we use an embedding technique being a combination of time delay and spatial embedding. This embedding allows us to identify the local character of the equations of motion. Based on this method we present an approximate estimate of the entropy density by the correlation integral. ",Local estimates for entropy densities in coupled map lattices " The L-arginine mixed potassium dihydrogenphosphate (KDP) crystal reported by Saravanan et al Indian J Pure App Phys 51 (2013) 254, is a dubious crystal. The reported unit cell parameters contradict the definition of the tetragonal crystal system. 
","Comments on the paper: Structural, optical properties and effect of amino acid on growth of KDP crystals" " We investigate the various distributions explaining multi-dimensional structure of kaon at the level of its constituents ($u$ and $\bar{s}$) using the light-cone quark model. The overlap form of wavefunctions associated with the light-cone quark model is adopted for the calculations. The generalized parton distributions(GPDs)of $u$ and $\bar{s}$ quarks are presented for the case when the momentum transfer in the longitudinal direction is non-zero. The dependence of kaon GPDs is studied in terms of variation of quark longitudinal momentum fraction, momentum transfer in longitudinal direction and total momentum transfer to the final state of hadron. The transverse impact-parameter dependent GPDs are also studied by taking the Fourier transformation of general GPDs. Further, the quantum phase-space distributions; Wigner distributions are studied for the case of unpolarized, longitudinally-polarized and transversely-polarized parton in an unpolarized kaon. The Wigner distributions are analysed in the transverse impact-parameter plane, the transverse momentum plane and the mixed plane. Further, to get a complete picture of kaon in terms of its valence quarks, the variation of longitudinal momentum fraction carried by quark and antiquark in the generalized transverse momentum-dependent parton distributions (GTMDs) is studied for different values of transverse quark and antiquark momentum $({\bf k}_\perp)$ as well as for different values of momentum transferred to the kaon in transverse direction $({\bf \Delta}_\perp)$. This has been done for zero as well as non-zero skewedness representing respectively the absence and presence of momentum transfer to the final state of kaon in longitudinal direction. Furthermore, the possible spin-orbit correlation for $u$ and $\bar{s}$ in kaon is elaborated in context of Wigner distributions and GTMDs. 
",Study of kaon structure using the light-cone quark model " We consider the leading order result for polarized leptoproduction, putting emphasis on transverse momentum dependent effects appearing in azimuthal asymmetries. Measurements of weighted cross sections enable extraction of the distribution of transversely polarized quarks. We focus on the distribution in a longitudinally polarized hadron and estimate the expected asymmetries in leptoproduction. ",Probing transverse quark polarization via azimuthal asymmetries in leptoproduction " We use appropriately defined short ranged reference models of liquid water to clarify the different roles local hydrogen bonding, van der Waals attractions, and long ranged electrostatic interactions play in the solvation and association of apolar solutes in water. While local hydrogen bonding in- teractions dominate hydrophobic effects involving small solutes, longer ranged electrostatic and dis- persion interactions are found to be increasingly important in the description of interfacial structure around large solutes. The hydrogen bond network sets the solute length scale at which a crossover in solvation behavior between these small and large length scale regimes is observed. Unbalanced long ranged forces acting on interfacial water molecules are also important in hydrophobic association, illustrated here by analysis of the association of model methane and buckminsterfullerene solutes. ",Dissecting Hydrophobic Hydration and Association " We investigate dynamic reconfigurable component-based systems whose architectures are described by formulas of Propositional Configuration Logics. We present several examples of reconfigurable systems based on well-known architectures, and state preliminary decidability results. 
",Dynamic reconfiguration of component-based systems described by propositional configuration logic " In this paper we compare two finite words $u$ and $v$ by the lexicographical order of the infinite words $u^\omega$ and $v^\omega$. Informally, we say that we compare $u$ and $v$ by the infinite order. We show several properties of Lyndon words expressed using this infinite order. The innovative aspect of this approach is that it allows to take into account also non trivial conditions on the prefixes of a word, instead that only on the suffixes. In particular, we derive a result of Ufnarovskij [V. Ufnarovskij, ""Combinatorial and asymptotic methods in algebra"", 1995] that characterizes a Lyndon word as a word which is greater, with respect to the infinite order, than all its prefixes. Motivated by this result, we introduce the prefix standard permutation of a Lyndon word and the corresponding (left) Cartesian tree. We prove that the left Cartesian tree is equal to the left Lyndon tree, defined by the left standard factorization of Viennot [G. Viennot, ""Alg\`ebres de Lie libres et mono\""ides libres"", 1978]. This result is dual with respect to a theorem of Hohlweg and Reutenauer [C. Hohlweg and C. Reutenauer, ""Lyndon words, permutations and trees"", 2003]. ",Some variations on Lyndon words In this paper we review recent advances in Stable Isotope Mixing Models (SIMMs) and place them into an over-arching Bayesian statistical framework which allows for several useful extensions. SIMMs are used to quantify the proportional contributions of various sources to a mixture. The most widely used application is quantifying the diet of organisms based on the food sources they have been observed to consume. At the centre of the multivariate statistical model we propose is a compositional mixture of the food sources corrected for various metabolic factors. The compositional component of our model is based on the isometric log ratio (ilr) transform of Egozcue (2003). 
Through this transform we can apply a range of time series and non-parametric smoothing relationships. We illustrate our models with 3 case studies based on real animal dietary behaviour. ,Bayesian Stable Isotope Mixing Models " All supersymmetric gauge theories based on simple groups which have an affine quantum moduli space, i.e. one generated by gauge invariants with no relations, W=0, and anomaly matching at the origin, are classified. It is shown that the only theories with no gauge invariants (and moduli space equal to a single point) are the two known examples, SU(5) with 5-bar + 10 and SO(10) with a spinor. The index of the matter representation must be at least as big as the index of the adjoint in theories which have a non-trivial relation among the gauge invariants. ",Supersymmetric Gauge Theories with an Affine Quantum Moduli Space " The crossed channels of the generalized reaction \gamma N - \gamma N have been considered. The transformation coefficients from the independent helicity amplitudes to the invariant functions are calculated. The explicit expressions for the invariant functions have been obtained taking into account the contribution of Born diagrams in the s-, u-, and t-channels and of six resonances in the s- and u-channels. It has been shown that the obtained invariant functions meet the requirements of crossing symmetry. ",General Formulae of Invariant Functions of the Generalized Reaction \gamma N - \gamma N in the Effective Lagrangians Method " We compute dual-conformally invariant ladder integrals that are capped off by pentagons at each end of the ladder. Such integrals appear in six-point amplitudes in planar N=4 super-Yang-Mills theory. We provide exact, finite-coupling formulas for the basic double pentaladder integrals as a single Mellin integral over hypergeometric functions. For particular choices of the dual conformal cross ratios, we can evaluate the integral at weak coupling to high loop orders in terms of multiple polylogarithms. 
We argue that the integrals are exponentially suppressed at strong coupling. We describe the space of functions that contains all such double pentaladder integrals and their derivatives, or coproducts. This space, a prototype for the space of Steinmann hexagon functions, has a simple algebraic structure, which we elucidate by considering a particular discontinuity of the functions that localizes the Mellin integral and collapses the relevant symbol alphabet. This function space is endowed with a coaction, both perturbatively and at finite coupling, which mixes the independent solutions of the hypergeometric differential equation and constructively realizes a coaction principle of the type believed to hold in the full Steinmann hexagon function space. ",The Double Pentaladder Integral to All Orders " By means of $q$-series, we prove that any natural number is a sum of an even square and two triangular numbers, and that each positive integer is a sum of a triangular number plus $x^2+y^2$ for some integers $x$ and $y$ with $x\not\equiv y (mod 2)$ or $x=y>0$. The paper also contains some other results and open conjectures on mixed sums of squares and triangular numbers. ",Mixed sums of squares and triangular numbers " Complex oxide systems have attracted considerable attention because of their fascinating properties, including the magnetic ordering at the conducting interface between two band insulators, such as LaAlO3 (LAO) and SrTiO3 (STO). However, the manipulation of the spin degree of freedom at the LAO/STO heterointerface has remained elusive. Here, we have fabricated hybrid magnetic tunnel junctions consisting of Co and LAO/STO ferromagnets with the insertion of a Ti layer in between, which clearly exhibit magnetic switching and the tunnelling magnetoresistance (TMR) effect below 10 K. 
The magnitude and the polarity of the TMR are strongly dependent on the direction of the rotational magnetic field parallel to the LAO/STO plane, which is attributed to a strong Rashba-type spin orbit coupling in the LAO/STO heterostructure. Our study provides further support for the existence of macroscopic ferromagnetism at LAO/STO heterointerfaces and opens a novel route to realize interfacial spintronics devices. ",Polarity-tunable magnetic tunnel junctions based on ferromagnetism at oxide heterointerfaces " Causal discovery from interventional data is an important problem, where the task is to design an interventional strategy that learns the hidden ground truth causal graph $G(V,E)$ on $|V| = n$ nodes while minimizing the number of performed interventions. Most prior interventional strategies broadly fall into two categories: non-adaptive and adaptive. Non-adaptive strategies decide on a single fixed set of interventions to be performed while adaptive strategies can decide on which nodes to intervene on sequentially based on past interventions. While adaptive algorithms may use exponentially fewer interventions than their non-adaptive counterparts, there are practical concerns that constrain the amount of adaptivity allowed. Motivated by this trade-off, we study the problem of $r$-adaptivity, where the algorithm designer recovers the causal graph under a total of $r$ sequential rounds whilst trying to minimize the total number of interventions. For this problem, we provide an $r$-adaptive algorithm that achieves $O(\min\{r,\log n\} \cdot n^{1/\min\{r,\log n\}})$ approximation with respect to the verification number, a well-known lower bound for adaptive algorithms. Furthermore, for every $r$, we show that our approximation is tight. 
Our definition of $r$-adaptivity interpolates nicely between the non-adaptive ($r=1$) and fully adaptive ($r=n$) settings where our approximation simplifies to $O(n)$ and $O(\log n)$ respectively, matching the best-known approximation guarantees for both extremes. Our results also extend naturally to bounded-size interventions. ",Adaptivity Complexity for Causal Graph Discovery " We study the singular series associated to a cubic form with integer coefficients. If the number of variables is at least $10$, we prove the absolute convergence (and hence positivity) under the assumption of Davenport's Geometric Condition, improving on a result of Heath-Brown. For the case of $9$ variables, we give a conditional treatment. We also provide a new short and elementary proof of Davenport's Shrinking Lemma which has been a crucial tool in previous literature on this and related problems. ",The singular series of a cubic form in many variables and a new proof of Davenport's Shrinking Lemma " We present multi-wavelength observations of the afterglow of the short GRB111117A, and follow-up observations of its host galaxy. From rapid optical and radio observations we place limits of r \gtrsim 25.5 mag at \deltat \approx 0.55 d and F_nu(5.8 GHz) < 18 \muJy at \deltat \approx 0.50 d, respectively. However, using a Chandra observation at t~3.0 d we locate the absolute position of the X-ray afterglow to an accuracy of 0.22"" (1 sigma), about 6 times better than the Swift-XRT position. This allows us to robustly identify the host galaxy and to locate the burst at a projected offset of 1.25 +/- 0.20"" from the host centroid. Using optical and near-IR observations of the host galaxy we determine a photometric redshift of z=1.3 (+0.3,-0.2), one of the highest for any short GRB, leading to a projected physical offset for the burst of 10.5 +/- 1.7 kpc, typical of previous short GRBs. 
At this redshift, the isotropic gamma-ray energy is E_{gamma,iso} \approx 3\times10^51 erg (rest-frame 23-2300 keV) with a peak energy of E_{pk} \approx 850-2300 keV (rest-frame). In conjunction with the isotropic X-ray energy, GRB111117A appears to follow our recently-reported E_x,iso-E_gamma,iso-E_pk universal scaling. Using the X-ray data along with the optical and radio non-detections we find that for a blastwave kinetic energy of E_{K,iso} \approx E_{gamma,iso}, the circumburst density is n_0 \sim 3x10^(-4)-1 cm^-3 (for a range of epsilon_B=0.001-0.1). Similarly, from the non-detection of a break in the X-ray light curve at t<3 d, we infer a minimum opening angle for the outflow of theta_j> 3-10 degrees (depending on the circumburst density). We conclude that Chandra observations of short GRBs are effective at determining precise positions and robust host galaxy associations in the absence of optical and radio detections. ",The Afterglow and Environment of the Short GRB111117A " The Interface Region Imaging Spectrograph (IRIS) reveals small-scale rapid brightenings in the form of bright grains all over coronal holes and the quiet sun. These bright grains are seen with the IRIS 1330 \AA, 1400 \AA\ and 2796 \AA\ slit-jaw filters. We combine coordinated observations with IRIS and from the ground with the Swedish 1-m Solar Telescope (SST) which allows us to have chromospheric (Ca II 8542 \AA, Ca II H 3968 \AA, H\alpha, and Mg II k 2796 \AA), and transition region (C II 1334 \AA, Si IV 1402) spectral imaging, and single-wavelength Stokes maps in Fe I 6302 \AA at high spatial (0.33""), temporal and spectral resolution. We conclude that the IRIS slit-jaw grains are the counterpart of so-called acoustic grains, i.e., resulting from chromospheric acoustic waves in a non-magnetic environment. We compare slit-jaw images with spectra from the IRIS spectrograph. 
We conclude that the grain intensity in the 2796 \AA\ slit-jaw filter comes from both the Mg II k core and wings. The signal in the C II and Si IV lines is too weak to explain the presence of grains in the 1300 and 1400 \AA\ slit-jaw images and we conclude that the grain signal in these passbands comes mostly from the continuum. Even though weak, the characteristic shock signatures of acoustic grains can often be detected in IRIS C II spectra. For some grains, a spectral signature can be found in IRIS Si IV. This suggests that upward propagating acoustic waves sometimes reach all the way up to the transition region. ",Internetwork chromospheric bright grains observed with IRIS " Knowledge distillation has proven to be an effective technique in improving the performance of a student model using predictions from a teacher model. However, recent work has shown that gains in average efficiency are not uniform across subgroups in the data, and in particular can often come at the cost of accuracy on rare subgroups and classes. To preserve strong performance across classes that may follow a long-tailed distribution, we develop distillation techniques that are tailored to improve the student's worst-class performance. Specifically, we introduce robust optimization objectives in different combinations for the teacher and student, and further allow for training with any tradeoff between the overall accuracy and the robust worst-class objective. We show empirically that our robust distillation techniques not only achieve better worst-class performance, but also lead to Pareto improvement in the tradeoff between overall performance and worst-class performance compared to other baseline methods. Theoretically, we provide insights into what makes a good teacher when the goal is to train a robust student. ",Robust Distillation for Worst-class Performance 
In a future world without such secrecy, what decision support tools would one want to use for justified lending decisions? This question is timely, since the economy has dramatically shifted due to a pandemic, and a massive number of new loans will be necessary in the short term. We propose a framework for such decisions, including a globally interpretable machine learning model, an interactive visualization of it, and several types of summaries and explanations for any given decision. The machine learning model is a two-layer additive risk model, which resembles a two-layer neural network, but is decomposable into subscales. In this model, each node in the first (hidden) layer represents a meaningful subscale model, and all of the nonlinearities are transparent. Our online visualization tool allows exploration of this model, showing precisely how it came to its conclusion. We provide three types of explanations that are simpler than, but consistent with, the global model: case-based reasoning explanations that use neighboring past cases, a set of features that were the most important for the model's prediction, and summary-explanations that provide a customized sparse explanation for any particular lending decision made by the model. Our framework earned the FICO recognition award for the Explainable Machine Learning Challenge, which was the first public challenge in the domain of explainable machine learning. ","A Holistic Approach to Interpretability in Financial Lending: Models, Visualizations, and Summary-Explanations" " In this paper, we present a simple and robust numerical method able to predict, with high accuracy, the photo-thermal effects occurring for a gold nanoparticles arrangement under externally applied strain. The physical system is numerically implemented in the COMSOL Multiphysics simulation platform. The gold nanoparticles distributions are excited by linearly polarized light. 
By considering the system at rest and under the action of a mechanical stress, we analyze the extinction cross section, and we observe the production of heat at the nanoscale. The purpose of this work is to describe how sensitive the local temperature of the gold nanoparticles arrangement is to the formation of localized photo-thermal hot spots. ",Numerical modeling of active thermo-plasmonics experiments " It is proved that the family of equivalence classes of Lip-normed C*-algebras introduced by M. Rieffel, up to isomorphisms preserving the Lip-seminorm, is not complete w.r.t. the matricial quantum Gromov-Hausdorff distance introduced by D. Kerr. This is shown by exhibiting a Cauchy sequence whose limit, which always exists as an operator system, is not completely order isomorphic to any C*-algebra. Conditions ensuring the existence of a C*-structure on the limit are considered, making use of the notion of ultraproduct. More precisely, a necessary and sufficient condition is given for the existence, on the limiting operator system, of a C*-product structure inherited from the approximating C*-algebra. Such condition can be considered as a generalisation of the f-Leibniz conditions introduced by Kerr and Li. Furthermore, it is shown that our condition is not necessary for the existence of a C*-structure tout court, namely there are cases in which the limit is a C*-algebra, but the C*-structure is not inherited. ",The problem of completeness for Gromov-Hausdorff metrics on C*-algebras " We address the theory of magnon-phonon interactions and compute the corresponding quasi-particle and transport lifetimes in magnetic insulators with focus on yttrium iron garnet at intermediate temperatures from anisotropy- and exchange-mediated magnon-phonon interactions, the latter being derived from the volume dependence of the Curie temperature. We find in general weak effects of phonon scattering on magnon transport and the Gilbert damping of the macrospin Kittel mode. 
The magnon transport lifetime differs from the quasi-particle lifetime at shorter wavelengths. ",Magnon-phonon interactions in magnetic insulators " The quantum anomalous Hall (QAH) effect has been demonstrated in two-dimensional topological insulator systems incorporated with ferromagnetism. However, a comprehensive understanding of mesoscopic transport in sub-micron QAH devices has yet to be established. Here we fabricated miniaturized QAH devices with channel widths down to 600 nm, where the QAH features are still preserved. A back-scattering channel is formed in narrow QAH devices through percolative hopping between 2D compressible puddles. Large resistance fluctuations are observed in narrow devices near the coercive field, which is associated with collective interference between intersecting paths along domain walls when the device geometry is smaller than the phase coherence length $L_\phi$. Through measurement of size-dependent breakdown current, we confirmed that the chiral edge states are confined at the physical boundary with a width on the order of the Fermi wavelength. ",Mesoscopic Transport of Quantum Anomalous Hall Effect in Sub-Micron Size Regime " In this paper we provide a more general class of non-associative products using the exterior and Clifford bundles on the 7-sphere. Some additional properties encompass previous formalisms in the Clifford algebra context, and wider classes of non-associative structures on the 7-sphere are investigated, evinced by the directional non-associative products and the mixed composition of generalized non-associative products between Clifford algebra multivectors. These non-associative products are further generalized by considering the non-associative shear of arbitrary Clifford bundle Cl(0,7) elements into octonions. 
We assert new properties inherited from the non-associative structure introduced in the whole Clifford bundle on S7, which naturally induce involutions on the Clifford bundle and provide immediate generalizations concerning well-established formal results and potential applications in physics. ",Generalized non-associative structures on the 7-sphere " Extreme Multi-label Text Classification (XMTC) has been a tough challenge in machine learning research and applications due to the sheer sizes of the label spaces and the severe data scarcity problem associated with the long tail of rare labels in highly skewed distributions. This paper addresses the challenge of tail label prediction by proposing a novel approach, which combines the effectiveness of a trained bag-of-words (BoW) classifier in generating informative label descriptions under severe data-scarce conditions, and the power of neural embedding based retrieval models in mapping input documents (as queries) to relevant label descriptions. The proposed approach achieves state-of-the-art performance on XMTC benchmark datasets and significantly outperforms the best methods so far in tail label prediction. We also provide a theoretical analysis for relating the BoW and neural models w.r.t. a performance lower bound. ",Long-tailed Extreme Multi-label Text Classification with Generated Pseudo Label Descriptions 
We calculate the enhancement factors assuming a singular isothermal sphere model for clusters of galaxies and a pointlike model for background sources selected in three different wavelengths, $B$, $K$ and radio. Our results show that $K-$ selected galaxies might constitute the best sample to test the ``negative"" associations while it is unlikely that one can actually observe any association for blue galaxies. We also point out that bright radio sources ($S>1$ Jy) can provide strong positive associations, which may already have been detected in the 3CR sample. ",Galaxy-cluster associations from gravitational lensing " We compute the Green ring of the Taft algebra $H_n(q)$, where $n$ is a positive integer greater than 1, and $q$ is an $n$-th root of unity. It turns out that the Green ring $r(H_n(q))$ of the Taft algebra $H_n(q)$ is a commutative ring generated by two elements subject to certain relations defined recursively. Concrete examples for $n=2,3,..., 8$ are given. ",The Green Rings of Taft algebras " Destiny is a simple, direct, low-cost mission to determine the properties of dark energy by obtaining a cosmologically deep supernova (SN) type Ia Hubble diagram. Operated at L2, its science instrument is a 1.65m space telescope, featuring a grism-fed near-infrared (NIR) (0.85-1.7 micron) survey camera/spectrometer with a 0.12 square degree field of view. During its two-year primary mission, Destiny will detect, observe, and characterize ~3000 SN Ia events over the redshift interval 0.4 0$, with word complexity $p$ satisfying $\limsup \frac{p(q)}{q} < 1.5 + \epsilon$. For arbitrary $f(q) \to \infty$, said subshifts can be made to satisfy $p(q) < q + f(q)$ infinitely often. We establish that every subshift associated to a rank-one transformation (on a probability space) which is not an odometer satisfies $\limsup p(q) - 1.5q = \infty$ and that this is optimal for rank-ones. 
",Word Complexity of (Measure-Theoretically) Weakly Mixing Rank-One Subshifts " The Vela supernova remnant (SNR) shows several ejecta fragments protruding beyond the forward shock (shrapnel). Recent studies have revealed high Si abundance in two shrapnel (A and G), located in opposite directions with respect to the SNR center. This suggests the possible existence of a Si-rich jet-counterjet structure. We analyzed an XMM-Newton observation of a bright clump, behind shrapnel G, which lies along the direction connecting A and G. The aim is to study the physical and chemical properties of this clump to ascertain whether it is part of this putative jet-like structure. We produced background-corrected and adaptively-smoothed count-rate images and median photon energy maps, and performed a spatially resolved spectral analysis. We identified two structures with different physical properties. The first one is remarkably elongated along the direction connecting A and G. Its X-ray spectrum is much softer than that of the other two shrapnel, to the point of hindering the determination of the Si abundance; however, its physical and chemical properties are consistent with those of shrapnel A and G. The second structure, running along the southeast-northwest direction, has a higher temperature and appears like a thin filament. By analyzing the ROSAT data, we have found that this filament is part of a very large and coherent structure that we identified in the western rim of the shell. We obtained a thorough description of the tail of Shrapnel G. In addition, we discovered a coherent and very extended feature that we interpret as a signature of an earlier interaction of the remnant with the stellar wind of its progenitor star. The peculiar Ne/O ratio we found in the wind residual may be suggestive of a Wolf-Rayet progenitor for the Vela SNR, though further analysis is required to address this point. 
",X-ray emitting structures in the Vela SNR: ejecta anisotropies and progenitor stellar wind residuals " Functional uncertainty quantification (FunUQ) was recently proposed to quantify uncertainties in models and simulations that originate from input functions, as opposed to parameters. This paper extends FunUQ to quantify uncertainties originating from interatomic potentials in isothermal-isobaric molecular dynamics (MD) simulations and to the calculation of defect formation energies. We derive and verify a computationally inexpensive expression to compute functional derivatives in MD based on perturbation theory. We show that this functional derivative of the quantities of interest (average internal energy, volume, and defect energies in our case) with respect to the interatomic potential can be used to predict those quantities for a different interatomic potential, without re-running the simulation. The codes and scripts to perform FunUQ in MD are freely available for download. In addition, to facilitate reproducibility and to enable use of best practices for the approach, we created Jupyter notebooks to perform FunUQ analysis on MD simulations and made them available for online simulation in nanoHUB. The tool uses cloud computing resources, and users can view, edit, and run end-to-end workflows from a standard web-browser without the need to download or install any software. ",Functional uncertainty quantification for isobaric molecular dynamics simulations and defect formation energies " We give new applications of graded Lie algebras to: identities of standard polynomials, deformation theory of quadratic Lie algebras, cyclic cohomology of quadratic Lie algebras, $2k$-Lie algebras, generalized Poisson brackets and so on. 
","New applications of graded Lie algebras to Lie algebras, generalized Lie algebras and cohomology" " In low-resource natural language processing (NLP), the key problems are a lack of target language training data, and a lack of native speakers to create it. Cross-lingual methods have had notable success in addressing these concerns, but in certain common circumstances, such as insufficient pre-training corpora or languages far from the source language, their performance suffers. In this work we propose a complementary approach to building low-resource Named Entity Recognition (NER) models using ``non-speaker'' (NS) annotations, provided by annotators with no prior experience in the target language. We recruit 30 participants in a carefully controlled annotation experiment with Indonesian, Russian, and Hindi. We show that use of NS annotators produces results that are consistently on par with or better than cross-lingual methods built on modern contextual representations, and have the potential to outperform with additional effort. We conclude with observations of common annotation patterns and recommended implementation practices, and motivate how NS annotations can be used in addition to prior methods for improved performance. For more details, see http://cogcomp.org/page/publication_view/941 ",Building Low-Resource NER Models Using Non-Speaker Annotation " Speaker identification (SID) in the household scenario (e.g., for smart speakers) is an important but challenging problem due to the limited number of labeled (enrollment) utterances, confusable voices, and demographic imbalances. Conventional speaker recognition systems generalize from a large random sample of speakers, causing the recognition to underperform for households drawn from specific cohorts or otherwise exhibiting high confusability. 
In this work, we propose a graph-based semi-supervised learning approach to improve household-level SID accuracy and robustness with locally adapted graph normalization and multi-signal fusion with multi-view graphs. Unlike other work on household SID, fairness, and signal fusion, this work focuses on speaker label inference (scoring) and provides a simple solution to realize household-specific adaptation and multi-signal fusion without tuning the embeddings or training a fusion network. Experiments on the VoxCeleb dataset demonstrate that our approach consistently improves the performance across households with different customer cohorts and degrees of confusability. ",Graph-based Multi-View Fusion and Local Adaptation: Mitigating Within-Household Confusability for Speaker Identification " The branching factor of a game is the average number of new states reachable from a given state. It is a widely used metric in AI research on board games, but less often computed or discussed for videogames. This paper provides estimates for the branching factors of 103 Atari 2600 games, as implemented in the Arcade Learning Environment (ALE). Depending on the game, ALE exposes between 3 and 18 available actions per frame of gameplay, which is an upper bound on branching factor. This paper shows, based on an enumeration of the first 1 million distinct states reachable in each game, that the average branching factor is usually much lower, in many games barely above 1. In addition to reporting the branching factors, this paper aims to clarify what constitutes a distinct state in ALE. ",Estimates for the Branching Factors of Atari Games " In this article, we study the ideals of mid $p$-summing operators. We obtain representation of these operator ideals by tensor norms. These tensor norms are defined by using a particular kind of sequential dual of the class of mid $p$-summable sequences. 
As a consequence, we prove a characterization of the adjoints of weakly and absolutely mid $p$-summing operators in terms of the operators that are defined by the transformation of dual spaces of certain vector-valued sequence spaces. ",Ideals of mid p-summing operators: a tensor product approach " We study the max-plus or tropical analogue of the notion of polar: the polar of a cone represents the set of linear inequalities satisfied by its elements. We establish an analogue of the bipolar theorem, which characterizes all the inequalities satisfied by the elements of a tropical convex cone. We derive this characterization from a new separation theorem. We also establish variants of these results concerning systems of linear equalities. ",The tropical analogue of polar cones " Machine learning tasks may admit multiple competing models that achieve similar performance yet produce conflicting outputs for individual samples -- a phenomenon known as predictive multiplicity. We demonstrate that fairness interventions in machine learning optimized solely for group fairness and accuracy can exacerbate predictive multiplicity. Consequently, state-of-the-art fairness interventions can mask high predictive multiplicity behind favorable group fairness and accuracy metrics. We argue that a third axis of ``arbitrariness'' should be considered when deploying models to aid decision-making in applications of individual-level impact. To address this challenge, we propose an ensemble algorithm applicable to any fairness intervention that provably ensures more consistent predictions. 
",Arbitrariness Lies Beyond the Fairness-Accuracy Frontier " Motivated by the improved results from the HPQCD lattice collaboration on the hadronic matrix elements entering $\Delta M_{s,d}$ in $B_{s,d}^0-\bar B_{s,d}^0$ mixings and the increase of the experimental branching ratio for $B_s\to\mu^+\mu^-$, we update our 2016 analysis of various flavour observables in four 331 models, M1, M3, M13 and M16, based on the gauge group $SU(3)_C\times SU(3)_L\times U(1)_X$. These four models, which are distinguished by the quantum numbers, are selected among twenty-four 331 models through their consistency with the electroweak precision tests and simultaneously by the relation $C_9^\text{NP}=-b\, C_{10}^\text{NP}$ with $b\ge 2$, which, after the new result on $B_s\to\mu^+\mu^-$ from CMS, is favoured over the popular relation $C_9^\text{NP}=- C_{10}^\text{NP}$ predicted by several leptoquark models. In this context we investigate in particular the dependence of various observables on $|V_{cb}|$, varying it in the broad range $[0.0386,\,0.043]$, which encompasses both its inclusive and exclusive determinations. Imposing the experimental constraints from $\varepsilon_K$, $\Delta M_s$, $\Delta M_d$ and the mixing induced CP asymmetries $S_{\psi K_S}$ and $S_{\psi\phi}$, we investigate for which values of $|V_{cb}|$ the four models can be made compatible with these data and what is the impact on $B$ and $K$ branching ratios. In particular we analyse NP contributions to the Wilson coefficients $C_9$ and $C_{10}$ and the decays $B_{s,d}\to\mu^+\mu^-$, $K^+\rightarrow\pi^+\nu\bar\nu$ and $K_L\rightarrow\pi^0\nu\bar\nu$. This allows us to illustrate how the value of $|V_{cb}|$ determined together with other parameters of these models is infected by NP contributions and compare it with the one obtained recently under the assumption of the absence of NP in $\varepsilon_K$, $\Delta M_s$, $\Delta M_d$ and $ S_{\psi K_S}$. 
","331 Model Predictions for Rare $B$ and $K$ Decays, and $\Delta F=2$ Processes: an Update" " The Stack Overflow (SO) platform has a huge dataset of questions and answers driven by interactions between users. But the count of unanswered questions is continuously rising. This issue is common across various community Question & Answer (Q&A) platforms such as Yahoo, Quora and so on. Clustering is one of the approaches used by these communities to address this challenge. Specifically, intent-based clustering could be leveraged to answer unanswered questions using other answered questions in the same cluster and can also improve the response time for new questions. It is here that we propose SOCluster, an approach and a tool to cluster SO questions based on intent using a graph-based clustering approach. We selected four datasets of 10k, 20k, 30k & 40k SO questions without code-snippets or images involved, and performed intent-based clustering on them. We have done a preliminary evaluation of our tool by analyzing the resultant clusters using the commonly used metrics of Silhouette coefficient, Calinski-Harabasz Index, & Davies-Bouldin Index. We performed clustering for 8 different threshold similarity values and analyzed the intriguing trends reflected by the output clusters through the three evaluation metrics. At 90% threshold similarity, it shows the best value for the three evaluation metrics on all four datasets. The source code and tool are available for download on Github at: https://github.com/Liveitabhi/SOCluster, and the demo can be found here: https://youtu.be/uyn8ie4h3NY. ",SOCluster- Towards Intent-based Clustering of Stack Overflow Questions using Graph-Based Approach One-loop mass shifts to the classical masses of stable kinks arising in a massive non-linear ${\mathbb S}^2$-sigma model are computed. Ultraviolet divergences are controlled using the heat kernel/zeta function regularization method. 
A comparison between the results achieved from exact and high-temperature asymptotic heat traces is analyzed in depth. ,On the semiclassical mass of ${\mathbb S}^2$-kinks " Massive starburst galaxies in the early Universe are estimated to have depletion times of $\sim 100$ Myr and thus be able to convert their gas very quickly into stars, possibly leading to a rapid quenching of their star formation. For these reasons, they are considered progenitors of massive early-type galaxies (ETGs). In this paper, we study two high-$z$ starbursts, AzTEC/C159 ($z\simeq 4.57$) and J1000+0234 ($z\simeq 4.54$), observed with ALMA in the [CII] 158-$\mu$m emission line. These observations reveal two massive and regularly rotating gaseous discs. A 3D modelling of these discs returns rotation velocities of about $500$ km/s and gas velocity dispersions as low as $\approx 20$ km/s, leading to very high ratios between regular and random motion ($V/\sigma \gtrsim 20$), at least in AzTEC/C159. The mass decompositions of the rotation curves show that both galaxies are highly baryon-dominated with gas masses of $\approx 10^{11}M_{\odot}$, which, for J1000+0234, is significantly higher than previous estimates. We show that these high-$z$ galaxies overlap with $z=0$ massive ETGs in the ETG analogue of the stellar-mass Tully-Fisher relation once their gas is converted into stars. This provides dynamical evidence of the connection between massive high-$z$ starbursts and ETGs, although the transformation mechanism from fast-rotating to nearly pressure-supported systems remains unclear. 
",Fast rotating and low-turbulence discs at $z\simeq 4.5$: Dynamical evidence of their evolution into local early-type galaxies " We extend our previously developed general approach (1) to study a phenomenological model in which the simulated packing of hard, attractive spheres on a prolate spheroid surface with convexity constraints produces structures identical to those of prolate virus capsid structures. Our simulation approach combines the traditional Monte Carlo method with the method of random sampling on an ellipsoidal surface and a convex hull searching algorithm. Using this approach we study the assembly and structural origin of non-icosahedral, elongated virus capsids, such as two aberrant flock house virus (FHV) particles and the prolate prohead of bacteriophage phi29, and discuss the implication of our simulation results in the context of recent experimental findings. ",Simulation Studies of A Phenomenological Model for the Assembly of Elongated Virus Capsids " Bell's inequality fundamentally changed our understanding of quantum mechanics. Bell's insight that non-local correlations between quantum systems cannot be explained classically can be verified experimentally, and has numerous applications in modern quantum information. Today, the CHSH inequality is probably the most well-known Bell inequality and it has given us a wealth of understanding in what differentiates the classical from the quantum world. Yet, there are certainly other means of quantifying ""Bell non-locality without inequalities"" such as the famous Hardy's paradox. As such, one may wonder whether these are entirely different approaches to non-locality. For this anniversary issue, we unify the perspective of the CHSH inequality and Hardy Paradox into one family of non-local games which include both as special cases. 
",A unified view on Hardy's paradox and the CHSH inequality " The validation and verification of automated driving functions (ADFs) is a challenging task on the journey of making those functions available to the public beyond the current research context. Simulation is a valuable building block for scenario-based testing that can help to model traffic situations that are relevant for ADFs. In addition to the surrounding traffic and environment of the ADF under test, the logical description and automated generation of concrete road networks play an important role. We aim to reduce efforts for manual map generation and to improve the automated testing process during development. Hence, this paper proposes a method to analyze real road networks and extract relevant parameters for the variation of synthetic simulation maps that correspond to real-world properties. Consequently, characteristics for inner-city junctions are selected from the HERE HD map. Then, parameter distributions are determined, analyzed and used to generate variations of road networks in the OpenDRIVE standard. The presented methodology enables efficient road network modeling which can be used for large-scale simulations. The developed road network generation tool is publicly available on GitHub. ",Road Network Variation Based on HD Map Analysis for the Simulative Safety Assurance of Automated Vehicles " We analyze the time-dependent response of strongly scattering media (SSM) to ultra-short pulses of light. A random walk technique is used to model the optical scattering of ultra-short pulses of light propagating through media with random shapes and various packing densities. The pulse spreading was found to be strongly dependent on the average particle size, particle size distribution, and the packing fraction. 
We also show that the intensity as a function of time-delay can be used to analyze the particle size distribution and packing fraction of an optically thick sample independently of the presence of absorption features. Finally, we propose an entirely new way to measure the shape of ultra-short pulses that have propagated through an SSM. ",Using ultra-short pulses to determine particle size and density distributions " Metabolic networks, formed by a series of metabolic pathways, are made of intracellular and extracellular reactions that determine the biochemical properties of a cell, and by a set of interactions that guide and regulate the activity of these reactions. Most of these pathways are formed by an intricate and complex network of chain reactions, and can be represented in a human-readable form using graphs which describe the cell cycle checkpoint pathways. This paper proposes a method to represent Molecular Interaction Maps (graphical representations of complex metabolic networks) in Linear Temporal Logic. The logical representation of such networks allows one to reason about them, in order to check, for instance, whether a graph satisfies a given property $\phi$, as well as to find out which initial conditions would guarantee $\phi$, or else how the graph can be updated in order to satisfy $\phi$. Both the translation and resolution methods have been implemented in a tool capable of addressing such questions thanks to a reduction to propositional logic which allows exploiting classical SAT solvers. ",A framework for modelling Molecular Interaction Maps " In this paper, we highlight the importance of the boundary effects on the construction of quasi-periodic vortex patch solutions close to Rankine vortices and whose existence is not known in the whole space due to the resonances of the linear frequencies. 
Exploiting the lack of invariance under radial dilation of the Euler equations in the unit disc and using a Nash-Moser implicit function iterative scheme, we show the existence of such structures when the radius of the Rankine vortex belongs to a suitable massive Cantor-like set with almost full Lebesgue measure. ",Boundary effects on the emergence of quasi-periodic solutions for Euler equations " We present results of spectral analysis of Ginga data obtained during the decline phase after the 1989 outburst of GS 2023+338 (V404 Cyg). Our analysis includes detailed modelling of the effects of X-ray reflection/reprocessing. We have found that (1) the contribution of the reprocessed component (both continuum and line) corresponds to the solid angle of the reprocessor as seen from the X-ray source of Omega \approx (0.4-0.5) 2\pi, (2) the reprocessed component (both line and continuum) is broadened (``smeared'') by kinematic and relativistic effects, as expected from the accretion disk reflection. We discuss the constraints these results give on various possible system geometries. ",Relativistically smeared X-ray reprocessed components in the GINGA spectra of GS 2023+338 " An overview on the prospects for Higgs Boson searches with the CMS detector is presented. Projections have been made to estimate the potential for a possible discovery or exclusion of the Higgs Boson during the run at a center-of-mass energy of 7 TeV at the LHC, with a recorded integrated luminosity of approximately 1 fb-1, conditions expected by the end of 2011 ",Prospects for the Higgs Boson Searches with CMS " Feature sizes in integrated circuits have decreased substantially over time, and it has become increasingly difficult to three-dimensionally image these complex circuits after fabrication. This can be important for process development, defect analysis, and detection of unexpected structures in externally sourced chips, among other applications. 
Here, we report on a non-destructive, tabletop approach that addresses this imaging problem through x-ray tomography, which we uniquely realize with an instrument that combines a scanning electron microscope (SEM) with a transition-edge sensor (TES) x-ray spectrometer. Our approach uses the highly focused SEM electron beam to generate a small x-ray generation region in a carefully designed target layer that is placed over the sample being tested. With the high collection efficiency and resolving power of a TES spectrometer, we can isolate x-rays generated in the target from background and trace their paths through regions of interest in the sample layers, providing information about the various materials along the x-ray paths through their attenuation functions. We have recently demonstrated our approach using a 240-pixel Mo/Cu bilayer TES prototype instrument on a simplified test sample containing features with sizes of $\sim$1 $\mu$m. Currently, we are designing and building a 3000-pixel Mo/Au bilayer TES spectrometer upgrade, which is expected to improve the imaging speed by a factor of up to 60 through a combination of increased detector number and detector speed. ",Design of a 3000-pixel transition-edge sensor x-ray spectrometer for microcircuit tomography " We investigate the spin alignment of the dark matter halos by considering a mechanism somewhat similar to tidal locking. We dubbed it Tidal Locking Theory (TLT). While Tidal Torque Theory is responsible for the initial angular momentum of the dark matter halos, the Tidal locking Theory explains the angular momentum evolution during non-linear ages. Our previous work showed that close encounters between haloes could drastically change their angular momentum. The current manuscript argues that the tidal locking theory predicts partial alignment between speed and the spin direction for the large high-speed halos. 
To examine this prediction, we use the IllustrisTNG simulation and look for the alignment of the halos' rotation axis. We find that the excess probability of alignment between spin and speed is about 10 percent at $z=0$ for fast haloes; with velocities larger than twice the median. We show that tidal torque theory predicts that the spin of a halo tends to be aligned with the middle eigendirection of the tidal tensor. Moreover, we find that the halos at $z=10$ are preferentially aligned with the middle eigendirection of the tidal tensor with an excess probability of 15 percent. We show that tidal torque theory fails to predict correct alignment at $z=0$ while it works almost flawlessly at $z=10$. ",Spin Alignment of Dark Matter Halos: Mad Halos " Difference schemes for the time-fractional diffusion equation with variable coefficients and nonlocal boundary conditions containing real parameters $\alpha$ and $\beta$ are considered. By the method of energy inequalities, for the solution of the difference problem, we obtain a priori estimates, which imply the stability and convergence of these difference schemes. ",Stability and convergence of difference schemes approximating a two-parameter nonlocal boundary value problem for time-fractional diffusion equation " We examine N=1 supersymmetric gauge theories which confine in the presence of a tree-level superpotential. We show the confining spectra which satisfy the 't Hooft anomaly matching conditions and give a simple method to find the confining superpotential. Using this method we fix the confining superpotentials in the simplest cases, and show how these superpotentials are generated by multi-instanton effects in the dual theory. These new type of confining theories may be useful for model building, since the size of the matter content is not restricted by an index constraint. Therefore, one expects that a large variety of new confining spectra can be obtained using such models. 
",New Confining N=1 Supersymmetric Gauge Theories " Within the functional calculi of Bochner-Phillips and Hirsch, we describe the operators of distributed order differentiation and integration as functions of the classical differentiation and integration operators respectively. ",Distributed Order Calculus: an Operator-Theoretic Interpretation We present a measurement of the W boson mass in proton-antiproton collisions at \sqrt{s} = 1.8 TeV based on a data sample of 82 pb^-1 integrated luminosity collected by the D0 detector at the Fermilab Tevatron. We utilize e \nu events in which the electron shower is close to the phi edge of one of the 32 modules in the D0 central calorimeter. The electromagnetic calorimeter response and resolution in this region differ from those in the rest of the module, and electrons in this region were not previously utilized. We determine the calorimeter response and resolution in this region using Z -> ee events. We extract the W boson mass by fitting to the transverse mass and to the electron and neutrino transverse momentum distributions. The result is combined with previous D0 results to obtain an improved measurement of the W boson mass: m_W = 80.483 +- 0.084 GeV. ,Improved D0 W Boson Mass Determination
We present a mathematical model to simulate the polarized intensity images through the Roman CGI, including the instrumental polarization and other uncertainties. We use the disk modeling software MCFOST to model $q$, $u$, and the polarization intensity of the debris disk Epsilon Eridani. The polarization intensities are convolved with the coronagraph throughput incorporating the PSF morphology. We include model uncertainties, detector noise, speckle noise, and jitter. The final polarization fraction of 0.4$\pm$0.0251 is obtained after post-processing. ",Simulations of polarimetric observations of debris disks through the Roman Coronagraph Instrument " It is well known that for some tasks, labeled data sets may be hard to gather. Therefore, we tackle here the problem of insufficient training data. We examined learning methods from unlabeled data after an initial training on a limited labeled data set. The suggested approach can be used as an online learning method on the unlabeled test set. In the general classification task, whenever we predict a label with high enough confidence, we treat it as a true label and train on it accordingly. For the semantic segmentation task, a classic example of an expensive data labeling process, we do so pixel-wise. Our suggested approaches were applied to the MNIST data-set as a proof of concept for a vision classification task and to the ADE20K data-set in order to tackle the semi-supervised semantic segmentation problem. ",Improved Training for Self-Training by Confidence Assessments
",The crucial role of CLEO-c in the measurement of gamma " In the Coulomb gauge of QCD, the Hamiltonian contains a non-linear Christ-Lee term, which may alternatively be derived from a careful treatment of ambiguous Feynman integrals at 2-loop order. We investigate how and if UV divergences from higher order graphs can be consistently absorbed by renormalization of the Christ-Lee term. We find that they cannot. ",Renormalization in Coulomb gauge QCD " The tangential G-band in the Raman spectra of a metallic single-wall carbon nanotube shows two peaks: a higher frequency component having the Lorentzian shape and a lower-frequency component of lower intensity with a Breit-Wigner-Fano (BWF)-type lineshape. This interesting feature has been analyzed on the basis of phonon-plasmon coupling in a nanotube. It is shown that while the gapless semi-acoustic plasmon cannot account for the observed spectra as claimed by other investigators, the low-lying optical plasmon corresponding to the tangential motion of the electrons on the nanotube surface can explain the observed features. In particular, this theory can explain occurrence of both the Lorentzian and BWF lineshapes in the G-band Raman spectra of metallic single-wall carbon nanotubes. Furthermore, the theory shows that the BWF peak moves to higher frequency, has a lower intensity and a lower half width at higher diameters of the nanotube. All these features are in agreement with experimental observations. ",Theory of the tangential G-band feature in the Raman spectra of metallic carbon nanotubes " Molecular hydrogen is the most abundant molecular species in the Universe. While no doubts exist that it is mainly formed on the interstellar dust grain surfaces, many details of this process remain poorly known. 
In this work, we focus on the fate of the energy released by the H$_2$ formation on the dust icy mantles and how it is partitioned between the substrate and the newly formed H$_2$, a process that has a profound impact on the interstellar medium. We carried out state-of-the-art \textit{ab-initio} molecular dynamics simulations of H$_2$ formation on periodic crystalline and amorphous ice surface models. Our calculations show that up to two thirds of the energy liberated in the reaction ($\sim$300 kJ/mol, $\sim$3.1 eV) is absorbed by the ice in less than 1 ps. The remaining energy ($\sim$140 kJ/mol, $\sim$1.5 eV) is kept by the newly born H$_2$. Since it is ten times larger than the H$_2$ binding energy on the ice, the new H$_2$ molecule will eventually be released into the gas phase. The ice water molecules within $\sim$4 \AA ~from the reaction site acquire enough energy, between 3 and 14 kJ/mol (360--1560 K), to potentially liberate other frozen H$_2$ and, perhaps, frozen CO molecules. If confirmed, the latter process would solve the long-standing conundrum of the presence of gaseous CO in molecular clouds. Finally, the vibrational state of the newly formed H$_2$ drops from highly excited states ($\nu = 6$) to low ($\nu \leq 2$) vibrational levels on a timescale of the order of ps. ",H2 formation on interstellar grains and the fate of reaction energy " The discovery of superconducting nickelates reignited hope for elucidating the high-$T_{\textrm{c}}$ superconductivity mechanism in the isostructural cuprates. While in the cuprates the superconducting gap opens up on a single band of the quasi-2D Fermi surface, the nickelates are known to have a 3D, multi-band electronic structure. This raises a serious question about the role of the 2D nature for high-$T_{\textrm{c}}$ superconductivity.
Here, employing dynamical mean field theory combined with the GW method, we found the Kondo effect driven by the strong correlation of Nd-4$f$ and Ni-3$d$ electrons emerging at low temperature. The Kondo effect modifies the topology of the Fermi surface, leading to a 3D multi-band nature. Remarkably, the Kondo effect is easily destroyed by lattice modulation, leading to a quasi-2D nature. Our findings clearly explain the inconsistent occurrence of superconductivity and the distinct electrical resistivity behavior between NdNiO$_{2}$ bulk and films. ",Impacts of f-d Kondo cloud on superconductivity of nickelates " We propose and analyze a structure-preserving parametric finite element method (SP-PFEM) to simulate the motion of closed curves governed by area-conserved generalized mean curvature flow in two dimensions (2D). We first present a variational formulation and rigorously prove that it preserves two fundamental geometric structures of the flows, i.e., (a) the conservation of the area enclosed by the closed curve; (b) the decrease of the perimeter of the curve. Then the variational formulation is approximated by using piecewise linear parametric finite elements in space to develop the semi-discrete scheme. With the help of the discrete Cauchy inequality and the discrete power mean inequality, the area conservation and perimeter decrease properties of the semi-discrete scheme are shown. On this basis, by combining the backward Euler method in time with a proper approximation of the unit normal vector, a structure-preserving fully discrete scheme is constructed, which preserves the two essential geometric structures simultaneously at the discrete level. Finally, numerical experiments test the convergence rate, area conservation, perimeter decrease and mesh quality, and depict the evolution of curves.
Numerical results indicate that the proposed SP-PFEM provides a reliable and powerful tool for the simulation of area-conserved generalized mean curvature flow in 2D. ",A structure-preserving parametric finite element method for area-conserved generalized mean curvature flow " Whether Quantum Chromodynamics (QCD) exhibits a phase transition at finite temperature and density is an open question. It is important for hydrodynamic modeling of heavy ion collisions and neutron star mergers. Lattice QCD simulations have definitively shown that the transition from hadrons to quarks and gluons is a crossover when the baryon chemical potential is zero or small. We combine the parametric scaling equation of state, usually associated with the 3D Ising model, with a background equation of state based on a smooth crossover from hadrons to quarks and gluons. Comparison to experimental data from the Beam Energy Scan II at the Relativistic Heavy Ion Collider or in heavy ion experiments at other accelerators may allow the critical exponents and amplitudes in the scaling equation of state to be determined for QCD if a critical point exists. ",Extending a Scaling Equation of State to QCD " We obtained constraints on a 12 parameter extended cosmological scenario including non-phantom dynamical dark energy (NPDDE) with CPL parametrization. We also include the six $\Lambda$CDM parameters, number of relativistic neutrino species ($N_{\textrm{eff}}$) and sum over active neutrino masses ($\sum m_{\nu}$), tensor-to-scalar ratio ($r_{0.05}$), and running of the spectral index ($n_{run}$). We use CMB Data from Planck 2015; BAO Measurements from SDSS BOSS DR12, MGS, and 6dFS; SNe Ia Luminosity Distance measurements from the Pantheon Sample; CMB B-mode polarization data from BICEP2/Keck collaboration (BK14); Planck lensing data; and a prior on Hubble constant ($73.24\pm1.74$ km/sec/Mpc) from local measurements (HST). We have found strong bounds on the sum of the active neutrino masses. 
For instance, a strong bound of $\sum m_{\nu} <$ 0.123 eV (95\% C.L.) comes from Planck+BK14+BAO. Even in such an extended parameter space, this bound is stronger than the bound of $\sum m_{\nu} <$ 0.158 eV (95\% C.L.) obtained in $\Lambda \textrm{CDM}+\sum m_{\nu}$ with Planck+BAO. Varying $A_{\textrm{lens}}$ instead of $r_{0.05}$, however, leads to weaker bounds on $\sum m_{\nu}$. Inclusion of the HST prior leads to the standard value of $N_{\textrm{eff}} = 3.045$ being discarded at more than 68\% C.L., which increases to 95\% C.L. when we vary $A_{\textrm{lens}}$ instead of $r_{0.05}$, implying a small preference for dark radiation, driven by the $H_0$ tension. ",Strong Bounds on Sum of Neutrino Masses in a 12 Parameter Extended Scenario with Non-Phantom Dynamical Dark Energy ($w(z)\geq -1$) " Physics-informed neural networks (PINNs) hold the potential for supplementing the existing set of techniques for solving differential equations that emerge in the study of black hole quasinormal modes. The present research investigated them by studying black hole perturbation equations with known analytical solutions, which could thus be framed as inverse problems for PINNs. Our main goal was to test the accuracy of PINNs in computing unknown quasinormal frequencies (QNFs) within the differential equations. The black hole perturbation scenarios that we considered included near extremal Schwarzschild-de Sitter and Reissner-Nordstr\""{o}m-de Sitter black holes, and a toy problem resembling them. For these cases, it was shown that PINNs could compute the QNFs to four decimal places of accuracy for the lowest multipole number, $l$, and lowest mode number, $n$. ",Investigating a New Approach to Quasinormal Modes: Physics-Informed Neural Networks " The vast majority of techniques to train fair models require access to the protected attribute (e.g., race, gender), either at train time or in production.
However, in many important applications this protected attribute is largely unavailable. In this paper, we develop methods for measuring and reducing fairness violations in a setting with limited access to protected attribute labels. Specifically, we assume access to protected attribute labels on a small subset of the dataset of interest, but only probabilistic estimates of protected attribute labels (e.g., via Bayesian Improved Surname Geocoding) for the rest of the dataset. With this setting in mind, we propose a method to estimate bounds on common fairness metrics for an existing model, as well as a method for training a model to limit fairness violations by solving a constrained non-convex optimization problem. Unlike similar existing approaches, our methods take advantage of contextual information -- specifically, the relationships between a model's predictions and the probabilistic prediction of protected attributes, given the true protected attribute, and vice versa -- to provide tighter bounds on the true disparity. We provide an empirical illustration of our methods using voting data. First, we show our measurement method can bound the true disparity up to 5.5x more tightly than previous methods in these applications. Then, we demonstrate that our training technique effectively reduces disparity while incurring smaller fairness-accuracy trade-offs than other fair optimization methods with limited access to protected attributes. ",Estimating and Implementing Conventional Fairness Metrics With Probabilistic Protected Features " Born-Infeld determinantal gravity formulated in Weitzenbock spacetime is discussed in the context of Friedmann-Robertson-Walker (FRW) cosmologies. It is shown how the standard model big bang singularity is absent in certain spatially flat FRW spacetimes, where the high energy regime is characterized by a de Sitter inflationary stage of geometrical character, i.e., without the presence of the inflaton field.
This taming of the initial singularity is also achieved for some spatially curved FRW manifolds, where the singularity is replaced by a de Sitter stage or a big bounce of the scale factor depending on certain combinations of free parameters appearing in the action. Unlike other Born-Infeld-like theories in vogue, the one presented here is also capable of deforming vacuum general relativistic solutions. ",Nonsingular Promises from Born-Infeld Gravity " In this paper, we show that the G-normality of X and Y can be characterized according to the form of f such that the distribution of {\lambda}+f({\lambda})Y does not depend on {\lambda}, where Y is an independent copy of X and {\lambda} is in the domain of f. Without the condition that Y is identically distributed with X, we still have a similar argument. ",A note on characterizations of G-normal distribution %auto-ignore The paper has been withdrawn by the authors. The completely revised version can be found under the following preprint-no: hep-ph/9910282. ,Semileptonic decays and axial-vector constants of the octet baryons " In a cosmological framework, the Noether symmetry technique has proven to be a useful tool for finding exact solutions. In this work, we first introduce the Jordan-frame Lagrangian and apply the conformal transformation in order to obtain the equivalent Einstein-frame Lagrangian. We then analyse the dynamics of the field in the cosmological alpha-attractors using the Noether symmetry approach, focusing on the single field scenario in the Einstein frame. We show that with a Noether symmetry the corresponding dynamical system can be completely integrated and the potential exhibited by the symmetry can be exactly obtained. With the proper choice of parameters, the scale factor displays an exponential (de Sitter) behavior at the present epoch.
Moreover, we discover that the Hubble parameter strongly depends on the initial values of the parameters exhibited by the Noether symmetry. Interestingly, it evolves slowly and approaches a constant at the present epoch in all cases. ",Noether symmetry approach in the cosmological alpha-attractors " In this paper we consider the general fractional equation \sum_{j=1}^m \lambda_j \frac{\partial^{\nu_j}}{\partial t^{\nu_j}} w(x_1,..., x_n ; t) = -c^2 (-\Delta)^\beta w(x_1,..., x_n ; t), for \nu_j \in (0,1], \beta \in (0,1], with initial condition w(x_1,..., x_n ; 0)= \prod_{j=1}^n \delta (x_j). The solution of the Cauchy problem above coincides with the distribution of the n-dimensional process \bm{S}_n^{2\beta} \left( c^2 \mathcal{L}^{\nu_1,..., \nu_m} (t) \right), t>0, where \bm{S}_n^{2\beta} is an isotropic stable process independent from \mathcal{L}^{\nu_1,..., \nu_m}(t), which is the inverse of \mathcal{H}^{\nu_1,..., \nu_m} (t) = \sum_{j=1}^m \lambda_j^{1/\nu_j} H^{\nu_j} (t), t>0, with H^{\nu_j}(t) independent, positively-skewed stable r.v.'s of order \nu_j. The problem considered includes the fractional telegraph equation as a special case as well as the governing equation of stable processes. The composition \bm{S}_n^{2\beta} (c^2 \mathcal{L}^{\nu_1,..., \nu_m} (t)), t>0, supplies a probabilistic representation for the solutions of the fractional equations above and coincides for \beta = 1 with the n-dimensional Brownian motion at the time \mathcal{L}^{\nu_1,..., \nu_m} (t), t>0. The iterated process \mathcal{L}^{\nu_1,..., \nu_m}_r (t), t>0, inverse to \mathcal{H}^{\nu_1,..., \nu_m}_r (t) = \sum_{j=1}^m \lambda_j^{1/\nu_j} {}_1H^{\nu_j} ({}_{2}H^{\nu_j} ({}_3H^{\nu_j} (... {}_{r}H^{\nu_j} (t)...))), t>0, permits us to construct the process \bm{S}_n^{2\beta} (c^2 \mathcal{L}^{\nu_1,..., \nu_m}_r (t)), t>0, the distribution of which solves a space-fractional generalized telegraph equation.
For r \to \infty and \beta = 1 we obtain a distribution which represents the n-dimensional generalisation of the Gauss-Laplace law and solves the equation \sum_{j=1}^m \lambda_j w(x_1,..., x_n) = c^2 \sum_{j=1}^n \frac{\partial^2}{\partial x_j^2} w(x_1,..., x_n). ",Space-time fractional equations and the related stable processes at random time " Zeckendorf proved that every positive integer has a unique partition as a sum of non-consecutive Fibonacci numbers. Similarly, every natural number can be partitioned into a sum of non-consecutive terms of the Lucas sequence, although such partitions need not be unique. In this paper, we prove that a natural number can have at most two distinct non-consecutive partitions in the Lucas sequence, find all positive integers with a fixed term in their partition, and calculate the limiting value of the proportion of natural numbers that are not uniquely partitioned into the sum of non-consecutive terms in the Lucas sequence. ",On Zeckendorf Related Partitions Using the Lucas Sequence " We report results of inelastic-neutron-scattering measurements of low-energy spin-wave excitations in two structurally distinct families of iron-pnictide parent compounds: Na(1-{\delta})FeAs and BaFe2As2. Despite their very different values of the ordered magnetic moment and N\'eel temperatures, T_N, in the antiferromagnetic state both compounds exhibit similar spin gaps of the order of 10 meV at the magnetic Brillouin-zone center. The gap opens sharply below T_N, with no signatures of a precursor gap at temperatures between the orthorhombic and magnetic phase transitions in Na(1-{\delta})FeAs. We also find a relatively weak dispersion of the spin-wave gap in BaFe2As2 along the out-of-plane momentum component, q_z. 
At the magnetic zone boundary (q_z = 0), spin excitations in the ordered state persist down to 20 meV, which implies a much smaller value of the effective out-of-plane exchange interaction, J_c, as compared to previous estimates based on fitting the high-energy spin-wave dispersion to a Heisenberg-type model. ",Similar zone-center gaps in the low-energy spin-wave spectra of NaFeAs and BaFe2As2 " We estimate the accuracy with which the coefficients of the CP-even dimension-six operators involving the Higgs boson and two vector bosons (HVV) can be measured at linear $e^+ e^-$ colliders. Using the optimal observables method for the kinematic distributions, our analysis is based on five different processes. First is the WW fusion process in the t-channel ($e^+e^- \to \bar{\nu}_e \nu_e H$), where we use the rapidity y and the transverse momentum $\pT$ of the Higgs boson as observables. Second is the ZH pair production process in the s-channel, where we use the scattering angle of the Z and the Z decay angular distributions, reproducing the results of the previous studies. Third is the t-channel ZZ fusion process ($e^+e^- \to e^+e^- H$), where we use the energy and angular distributions of the tagged $e^+$ and $e^-$. Fourth, we consider the rapidity distribution of the untagged $e^+e^-H$ events, which can be approximated well as the $\gamma \gamma$ fusion of the bremsstrahlung photons from the $e^+$ and $e^-$ beams. As the last process, we consider the single tagged $e^+e^- H$ events, which probe the $\gamma e^{\pm} \to H e^{\pm}$ process. All the results are presented in such a way that the statistical errors of the constraints on the effective couplings and their correlations can be read off when all of them are allowed to vary simultaneously, for each of the above processes, for $m_H=120$ GeV, at $\sqrt{s}=250\GEV$, $350\GEV$, $500\GEV$ and $1\TEV$, with and without $e^-$ beam polarization of 80%.
",Measuring the Higgs-Vector boson Couplings at Linear $e^{+} e^{-}$ Collider " We construct an efficient zero-temperature semi-local density functional to dynamically simulate an electron bubble passing through superfluid 4He under various pressures and electric fields up to nanosecond timescales. Our simulated drift velocity can be quantitatively compared to experiments, particularly when the pressure approaches zero. We find that the high-speed bubble experiences remarkable expansion and deformation before vortex nucleation occurs. Accompanied by vortex-ring shedding, drastic surface vibration is generated, leading to intense phonon radiation into the liquid. The amount of energy dissipated by these phonons is found to be greater than the amount carried away solely by the vortex rings. These results may enrich our understanding of vortex-nucleation-induced energy dissipation in this fascinating system. ",Vortex Nucleation Induced Phonon Radiation from a Moving Electron Bubble in Superfluid 4He " The Resource Description Framework (RDF) is a Semantic Web standard that provides a data language, simply called RDF, as well as a lightweight ontology language, called RDF Schema. We investigate embeddings of RDF in logic and show how standard logic programming and description logic technology can be used for reasoning with RDF. We subsequently consider extensions of RDF with datatype support, considering D entailment, defined in the RDF semantics specification, and D* entailment, a semantic weakening of D entailment, introduced by ter Horst. We use the embeddings and properties of the logics to establish novel upper bounds for the complexity of deciding entailment. We subsequently establish two novel lower bounds, establishing that RDFS entailment is PTime-complete and that simple-D entailment is coNP-hard, when considering arbitrary datatypes, both in the size of the entailing graph. The results indicate that RDFS may not be as lightweight as one may expect.
",Logical Foundations of RDF(S) with Datatypes " Brown, Preston, and Singleton (BPS) produced an analytic calculation of energy exchange processes for a weakly to moderately coupled plasma: the electron-ion temperature equilibration rate and the charged particle stopping power. These precise calculations are accurate to leading and next-to-leading order in the plasma coupling parameter, and to all orders for two-body quantum scattering within the plasma. Classical molecular dynamics provides another approach that can be rigorously implemented. It is therefore useful to compare the predictions from these two methods, particularly since the former is theoretically based and the latter numerical. Agreement would provide confidence both in our theoretical machinery and in the reliability of the computer simulations. The comparisons can be made cleanly in the purely classical regime, thereby avoiding the arbitrariness associated with constructing effective potentials to mock up quantum effects. We present here the classical limit of the general result for the temperature equilibration rate presented in BPS. We examine the validity of the m_electron/m_ion --> 0 limit used in BPS to obtain a very simple analytic evaluation of the long-distance, collective effects in the background plasma. ",Temperature equilibration in a fully ionized plasma: electron-ion mass ratio effects " As the world is transitioning towards highly renewable energy systems, advanced tools are needed to analyze such complex networks. Energy system design is, however, challenged by real-world objective functions consisting of a blurry mix of technical and socioeconomic agendas, with limitations that cannot always be clearly stated. As a result, it is highly likely that solutions which are techno-economically suboptimal will be preferable.
Here, we present a method capable of determining the continuum containing all techno-economically near-optimal solutions, moving the field of energy system modeling from discrete solutions to a new era where continuous solution ranges are available. The presented method is applied to study a range of technical and socioeconomic metrics on a model of the European electricity system. The near-optimal region is found to be relatively flat allowing for solutions that are slightly more expensive than the optimum but better in terms of equality, land use, and implementation time. ",Modeling all alternative solutions for highly renewable energy systems " Sequence analysis is an increasingly popular approach for analysing life courses represented by ordered collections of activities experienced by subjects over time. Here, we analyse a survey data set containing information on the career trajectories of a cohort of Northern Irish youths tracked between the ages of 16 and 22. We propose a novel, model-based clustering approach suited to the analysis of such data from a holistic perspective, with the aims of estimating the number of typical career trajectories, identifying the relevant features of these patterns, and assessing the extent to which such patterns are shaped by background characteristics. Several criteria exist for measuring pairwise dissimilarities among categorical sequences. Typically, dissimilarity matrices are employed as input to heuristic clustering algorithms. The family of methods we develop instead clusters sequences directly using mixtures of exponential-distance models. Basing the models on weighted variants of the Hamming distance metric permits closed-form expressions for parameter estimation. Simultaneously allowing the component membership probabilities to depend on fixed covariates and accommodating sampling weights in the clustering process yields new insights on the Northern Irish data. 
In particular, we find that school examination performance is the single most important predictor of cluster membership. ",Clustering Longitudinal Life-Course Sequences Using Mixtures of Exponential-Distance Models " The ""SP theory of intelligence"", with its realisation in the ""SP computer model"", aims to simplify and integrate observations and concepts across AI-related fields, with information compression as a unifying theme. This paper describes how abstract structures and processes in the theory may be realised in terms of neurons, their interconnections, and the transmission of signals between neurons. This part of the SP theory -- ""SP-neural"" -- is a tentative and partial model for the representation and processing of knowledge in the brain. In the SP theory (apart from SP-neural), all kinds of knowledge are represented with ""patterns"", where a pattern is an array of atomic symbols in one or two dimensions. In SP-neural, the concept of a ""pattern"" is realised as an array of neurons called a ""pattern assembly"", similar to Hebb's concept of a ""cell assembly"" but with important differences. Central to the processing of information in the SP system is the powerful concept of ""multiple alignment"", borrowed and adapted from bioinformatics. Processes such as pattern recognition, reasoning and problem solving are achieved via the building of multiple alignments, while unsupervised learning -- significantly different from the ""Hebbian"" kinds of learning -- is achieved by creating patterns from sensory information and also by creating patterns from multiple alignments in which there is a partial match between one pattern and another. Short-lived neural structures equivalent to multiple alignments will be created via an inter-play of excitatory and inhibitory neural signals. The paper discusses several associated issues, with relevant empirical evidence. 
",The SP theory of intelligence and the representation and processing of knowledge in the brain " Social networks have the surprising property of being ""searchable"": Ordinary people are capable of directing messages through their network of acquaintances to reach a specific but distant target person in only a few steps. We present a model that offers an explanation of social network searchability in terms of recognizable personal identities: sets of characteristics measured along a number of social dimensions. Our model defines a class of searchable networks and a method for searching them that may be applicable to many network search problems, including the location of data files in peer-to-peer networks, pages on the World Wide Web, and information in distributed databases. ",Identity and Search in Social Networks " Nekrashevych algebras of self-similar group actions are natural generalizations of the classical Leavitt algebras. They are discrete analogues of the corresponding Nekrashevych $C^\ast$-algebras. In particular, Nekrashevych, Clark, Exel, Pardo, Sims and Starling have studied the question of simplicity of Nekrashevych algebras, in part because non-simplicity of the complex algebra implies non-simplicity of the $C^\ast$-algebra. In this paper we give necessary and sufficient conditions for the Nekrashevych algebra of a contracting group to be simple. Nekrashevych algebras of contracting groups are finitely presented. We give an algorithm which, given the nucleus of the contracting group as input, outputs all characteristics of fields over which the corresponding Nekrashevych algebra is simple.
Using our methods, we determine the fields over which the Nekrashevych algebras of a number of well-known contracting groups are simple, including the Basilica group, Gupta-Sidki groups, GGS-groups, multi-edge spinal groups, \v{S}uni\'{c} groups associated to polynomials (this latter family includes the Grigorchuk group, Grigorchuk-Erschler group and Fabrykowski-Gupta group) and self-replicating spinal automaton groups. ",On the simplicity of Nekrashevych algebras of contracting self-similar groups " The general form of the solutions of the Kac--Bernstein functional equation $$ f(x+y)g(x-y)=f(x)f(y)g(x)g(-y), \ x, y\in X, $$ on an arbitrary Abelian group $X$ in the class of positive functions is obtained. We also study the solutions of this equation in the class of complex-valued functions that do not vanish and satisfy the Hermitian condition. ",Solution of the Kac--Bernstein functional equation on Abelian groups in the class of positive functions " I argue that we have good reason for being realist about quantum states. Though a research programme of attempting to construct a plausible theory that accounts for quantum phenomena without ontic quantum states is well-motivated, that research programme is confronted by considerable obstacles. Two theorems are considered that place restrictions on a theory of that sort: a theorem due to Barrett, Cavalcanti, Lal, and Maroney, and an extension, by the author, of the Pusey-Barrett-Rudolph theorem, that employs an assumption weaker than their Cartesian Product Assumption. These theorems have assumptions, of course. If there were powerful evidence against the conclusion that quantum states correspond to something in physical reality, it might be reasonable to reject these assumptions. But the situation we find ourselves in is the opposite: there is no evidence at all supporting irrealism about quantum states.
",On the Status of Quantum State Realism " In this paper, we consider how to define non-commutative spaces in terms of ordinary commutative ones and vice versa. A novel parameter, which has not been considered so far, is introduced; this parameter describes equivalent spaces. We also explore the meaning of these new parameters through one example problem. Noncommutativity of the total space is important here because it can capture additional structure. Since the Seiberg-Witten (SW) map is the standard description of noncommutativity, we show that it is not suitable under some conditions. Finally, we consider the Hamiltonian of a free particle in the new noncommutative setting and examine the meaning of the new parameters. ",New parameters of Non-commutativity in Quantum Mechanics " Multifield models of inflation with nonminimal couplings are in excellent agreement with the recent results from {\it Planck}. Across a broad range of couplings and initial conditions, such models evolve along an effectively single-field attractor solution and predict values of the primordial spectral index and its running, the tensor-to-scalar ratio, and non-Gaussianities squarely in the observationally most-favored region. Such models also can amplify isocurvature perturbations, which could account for the low power recently observed in the CMB power spectrum at low multipoles. Future measurements of primordial isocurvature perturbations and the tensor-to-scalar ratio may serve to distinguish between the currently viable possibilities. ",Multifield Inflation after Planck: The Case for Nonminimal Couplings " We analysed a population of bright-red (BR) stars in the dwarf irregular galaxy Leo A by using multicolour photometry data obtained with the Subaru/Suprime-Cam ($B$, $V$, $R$, $I$, $H\alpha$) and HST/ACS ($F475W$ & $F814W$) instruments.
In order to separate the Milky Way (MW) and Leo A populations of red stars, we developed a photometric method, which enabled us to study the spatial distribution of BR stars within the Leo A galaxy. We found a significant difference in the scale-length (S-L) of the radial distributions of the ""young"" and ""old"" red giant branch (RGB) stars -- $0.'82 \pm 0.'04$ and $1.'53 \pm 0.'03$, respectively. Also, we determined the S-L of the BR stars to be $0.'85 \pm 0.'05$, which closely matches that of the ""young"" RGB stars. Additionally, we found a sequence of peculiar RGB stars and 8 dust-enshrouded stars in the Leo A galaxy. ",Bright-red stars in the dwarf irregular galaxy Leo A " Open-text (or open-domain) semantic parsers are designed to interpret any statement in natural language by inferring a corresponding meaning representation (MR). Unfortunately, large scale systems cannot be easily machine-learned due to lack of directly supervised data. We propose here a method that learns to assign MRs to a wide range of text (using a dictionary of more than 70,000 words, which are mapped to more than 40,000 entities) thanks to a training scheme that combines learning from WordNet and ConceptNet with learning from raw text. The model learns structured embeddings of words, entities and MRs via a multi-task training process operating on these diverse sources of data that integrates all the learnt knowledge into a single system. This work combines methods for knowledge acquisition, semantic parsing, and word-sense disambiguation. Experiments on various tasks indicate that our approach is indeed successful and can form a basis for future more sophisticated systems.
",Towards Open-Text Semantic Parsing via Multi-Task Learning of Structured Embeddings " We describe the competitive motion of (N + 1) incompressible immiscible phases within a porous medium as the gradient flow of a singular energy in the space of non-negative measures with prescribed mass endowed with some tensorial Wasserstein distance. We show the convergence of the approximation obtained by a minimization scheme \`a la [R. Jordan, D. Kinderlehrer \& F. Otto, SIAM J. Math. Anal, 29(1):1--17, 1998]. This allows us to obtain a new existence result for a physically well-established system of PDEs consisting in the Darcy-Muskat law for each phase, N capillary pressure relations, and a constraint on the volume occupied by the fluid. Our study does not require the introduction of any global or complementary pressure. ",Incompressible immiscible multiphase flows in porous media: a variational approach " Despite the growing popularity of human mobility studies that collect GPS location data, the problem of determining the minimum required length of GPS monitoring has not been addressed in the current statistical literature. In this paper we tackle this problem by laying out a theoretical framework for assessing the temporal stability of human mobility based on GPS location data. We define several measures of the temporal dynamics of human spatiotemporal trajectories based on the average velocity process, and on activity distributions in a spatial observation window. We demonstrate the use of our methods with data that comprise the GPS locations of 185 individuals over the course of 18 months. Our empirical results suggest that GPS monitoring should be performed over periods of time that are significantly longer than what has been previously suggested. Furthermore, we argue that GPS study designs should take into account demographic groups.
KEYWORDS: Density estimation; global positioning systems (GPS); human mobility; spatiotemporal trajectories; temporal dynamics ",A statistical framework for measuring the temporal stability of human mobility patterns " In this letter, a low-complexity iterative detector with frequency domain equalization is proposed for generalized spatial modulation (GSM) aided single carrier (SC) transmissions operating in frequency selective channels. The detector comprises three main separate tasks, namely multiple-input multiple-output (MIMO) equalization, active antenna detection and symbol-wise demodulation. This approach makes the detector suitable for a broad range of MIMO configurations, which includes single-user and multiuser scenarios, as well as arbitrary signal constellations. Simulation results show that the receiver can cope with the intersymbol interference induced by severe time dispersive channels and operate in difficult underdetermined scenarios where the total number of transmitter antennas is substantially larger than the number of receiver antennas. ",Frequency Domain Equalization for Single and Multiuser Generalized Spatial Modulation Systems in Time Dispersive Channels " Weakly spin-orbit coupled electron and hole spins in organic light-emitting diodes (OLEDs) constitute near-perfect two-level systems to explore the interaction of light and matter in the ultrastrong-drive regime. Under such highly non-perturbative conditions, the frequency at which the spin oscillates between states, the Rabi frequency, becomes comparable to its natural resonance frequency, the Larmor frequency. For such conditions, we develop an intuitive understanding of the emergence of hybrid light-matter states, illustrating how dipole-forbidden multiple-quantum transitions at integer and fractional g-factors arise. 
A rigorous theoretical treatment of the phenomena comes from a Floquet-style solution to the time-dependent Hamiltonian of the electron-hole spin pair under resonant drive. Probing these phenomena experimentally requires both the development of a magnetic-resonance setup capable of supporting oscillating driving fields comparable in magnitude to the static field defining the Zeeman splitting; and an organic semiconductor characterized by minimal inhomogeneous broadening so as to allow the non-linear light-matter interactions to be resolved. The predicted exotic resonance features associated with the Floquet states are indeed found experimentally in measurements of spin-dependent steady-state OLED current under resonant drive, demonstrating that complex hybrid light-matter spin excitations can be formed and probed at room temperature. The spin-Dicke state arising under strong drive is insensitive to power broadening so that the Bloch-Siegert shift of the resonance becomes apparent, implying long coherence times of the dressed spin state with potential applicability for quantum sensing. ",Floquet spin states in OLEDs " The phenomenon of universality is one of the most striking in many-body physics. Despite having sometimes wildly different microscopic constituents, systems can nonetheless behave in precisely the same way, with only the variable names interchanged. The canonical examples are those of liquid boiling into vapor and quantum spins aligning into a ferromagnet; despite their obvious differences, they nonetheless both obey quantitatively the same scaling laws, and are thus in the same universality class. Remarkable though this is, universality is generally a phenomenon limited to thermodynamic equilibrium, most commonly present at transitions between different equilibrium phases. Once out of equilibrium, the fate of universality is much less clear. 
Can strongly non-equilibrium systems behave universally, and are their universality classes different from those familiar from equilibrium? How is quantum mechanics important? This dissertation attempts to address these questions, at least in a small way, by showing and analyzing universal phenomena in several classes of non-equilibrium quantum systems. ",Universality in Non-Equilibrium Quantum Systems " Let $t_{i,j}$ be the coefficient of $x^iy^j$ in the Tutte polynomial $T(G;x,y)$ of a connected bridgeless and loopless graph $G$ with order $n$ and size $m$. It is trivial that $t_{0,m-n+1}=1$ and $t_{n-1,0}=1$. In this paper, we obtain expressions for another eight extreme coefficients $t_{i,j}$ with $(i,j)=(0,m-n)$, $(0,m-n-1)$, $(n-2,0)$, $(n-3,0)$, $(1,m-n)$, $(1,m-n-1)$, $(n-2,1)$ and $(n-3,1)$ in terms of small substructures of $G$. Among them, the former four can be obtained by using coefficients of the highest, second highest and third highest terms of chromatic or flow polynomials, and vice versa. We also discuss their duality property and their specializations to extreme coefficients of the Jones polynomial. ",Several extreme coefficients of the Tutte polynomial of graphs " A radiative transfer code is used to model the spectral energy distributions of 57 mass-losing Asymptotic Giant Branch (AGB) stars and red supergiants (RSGs) in the Large Magellanic Cloud (LMC) for which ISO spectroscopic and photometric data are available. As a result we derive mass-loss rates and bolometric luminosities. A gap in the luminosity distribution around M_bol = -7.5 mag separates AGB stars from RSGs. The luminosity distributions of optically bright carbon stars, dust-enshrouded carbon stars and dust-enshrouded M-type stars have only little overlap, suggesting that the dust-enshrouded AGB stars are at the very tip of the AGB and will not evolve significantly in luminosity before mass loss ends their AGB evolution. 
Derived mass-loss rates span a range from Mdot ~ 10^-7 to 10^-3 M_sun/yr. More luminous and cooler stars are found to reach higher mass-loss rates. The highest mass-loss rates exceed the classical limit set by the momentum of the stellar radiation field, L/c, by a factor of a few due to multiple scattering of photons in the circumstellar dust envelope. Mass-loss rates are lower than the mass consumption rate by nuclear burning, Mdot_nuc, for most of the RSGs. Two RSGs have Mdot >> Mdot_nuc, however, suggesting that RSGs shed most of their stellar mantles in short phases of intense mass loss. Stars on the thermal pulsing AGB may also experience episodes of intensified mass loss, but their quiescent mass-loss rates are usually already higher than Mdot_nuc. ",Mass-loss rates and luminosity functions of dust-enshrouded AGB stars and red supergiants in the LMC The packing of hard-core particles in contact with their neighbors is considered as the simplest model of disordered particulate media. We formulate the statically determinate problem which allows analytical investigation of the statistical distribution of contact force magnitude. The toy-model of the Boltzmann type equation for the probability of contact force distribution is formulated and studied. An experimentally observed exponential distribution is derived. ,The toy-model of the Boltzmann type equation for the contact force distribution in disordered packings of particles " We discuss the fundamental (relative) 3-classes of knots (or hyperbolic links), and provide diagrammatic descriptions of the push-forwards with respect to every link-group representation. The point is an observation of a bridge between the relative group homology and quandle homology from the viewpoint of the Inoue--Kabaya map \cite{IK}. Furthermore, we give an algorithm to algebraically describe the fundamental 3-class of any hyperbolic knot. 
",On the fundamental 3-classes of knot group representations We prove a central limit theorem for the real and imaginary part and the absolute value of the Riemann zeta-function sampled along a vertical line in the critical strip with respect to an ergodic transformation similar to the Boolean transformation. This result complements a result by Steuding who has proven a strong law of large numbers for the same system. As a side result we state a general central limit theorem for a class of unbounded observables on the real line over the same ergodic transformation. The proof is based on the transfer operator method. ,A Central limit theorem for the Birkhoff sum of the Riemann zeta-function over a Boolean type transformation " We highlight a general theory to engineer arbitrary Hermitian tight-binding lattice models in electrical LC circuits, where the lattice sites are replaced by the electrical nodes, connected to its neighbors and to the ground by capacitors and inductors. In particular, by supplementing each node with $n$ subnodes, where the phases of the current and voltage are the $n$ distinct roots of \emph{unity}, one can in principle realize arbitrary hopping amplitude between the sites or nodes via the \emph{shift capacitor coupling} between them. This general principle is then implemented to construct a plethora of topological models in electrical circuits, \emph{topolectric circuits}, where the robust zero-energy topological boundary modes manifest through a large boundary impedance, when the circuit is tuned to the resonance frequency. The simplicity of our circuit constructions is based on the fact that the existence of the boundary modes relies only on the Clifford algebra of the corresponding Hermitian matrices entering the Hamiltonian and not on their particular representation. This in turn enables us to implement a wide class of topological models through rather simple topolectric circuits with nodes consisting of only two subnodes. 
We anchor these outcomes in the numerical computation of the on-resonance impedance in circuit realizations of first-order ($m=1$), such as Chern and quantum spin Hall insulators, and second- ($m=2$) and third- ($m=3$) order topological insulators in different dimensions, featuring sharp localization on boundaries of codimensionality $d_c=m$. Finally, we subscribe to the \emph{stacked topolectric circuit} construction to engineer three-dimensional Weyl, nodal-loop, quadrupolar Dirac and Weyl semimetals, respectively displaying surface and hinge localized impedance. ",Topolectric circuits: Theory and construction " A recently proposed model of the foam impact on the air-sea drag coefficient Cd has been employed for the estimation of the efficient foam-bubble radius Rb variation with wind speed U10 in hurricane conditions. The model relates Cd (U10) with the efficient roughness length Zeff (U10) represented as a sum of aerodynamic roughness lengths of the foam-free and foam-covered sea surfaces Zw (U10) and Zf (U10) weighted with the foam coverage coefficient. This relation is treated for known phenomenological distributions Cd (U10), Zw (U10) at strong wind speeds as an inverse problem for the efficient roughness parameter of the foam-covered sea surface Zf (U10). ",Correlation between Foam-Bubble Size and Drag Coefficient in Hurricane Conditions " This paper estimates free energy, average mutual information, and minimum mean square error (MMSE) of a linear model under two assumptions: (1) the source is generated by a Markov chain, (2) the source is generated via a hidden Markov model. Our estimates are based on the replica method in statistical physics. 
We show that under the posterior mean estimator, the linear model with Markov sources or hidden Markov sources is decoupled into single-input AWGN channels with state information available at both encoder and decoder, where the state distribution follows the left Perron-Frobenius eigenvector with unit Manhattan norm of the stochastic matrix of Markov chains. Numerical results show that the free energies and MSEs obtained via the replica method closely approximate their counterparts achieved by the Metropolis-Hastings algorithm or some well-known approximate message passing algorithms in the research literature. ",Replica Analysis of the Linear Model with Markov or Hidden Markov Signal Priors " We analyze HST spectra and Chandra observations of a sample of 21 LINERs, at least 18 of which are genuine AGN. We find a correlation between the X-ray and emission-line luminosities, extending over three orders of magnitude and with a dispersion of 0.36 dex; no differences emerge between LINERs with and without broad lines, or between radio-loud and radio-quiet sources. The presence of such a strong correlation is remarkable considering that for half of the sample the X-ray luminosity cannot be corrected for local absorption. This connection is readily understood since the X-ray light is associated with the same source producing the ionizing photons at the origin of the line emission. This implies that we have a direct view of the LINERs nuclei in the X-rays: the circumnuclear, high column density structure (the torus) is absent in these sources. Such a conclusion is also supported by mid-infrared data. We suggest that this is due to the general paucity of gas and dust in their nuclear regions, which also causes their low rate of accretion and low bolometric luminosity. 
",The naked nuclei of LINERs " Motivated by the developing mathematics of deep learning, we build universal functions approximators of continuous maps between arbitrary Polish metric spaces $\mathcal{X}$ and $\mathcal{Y}$ using elementary functions between Euclidean spaces as building blocks. Earlier results assume that the target space $\mathcal{Y}$ is a topological vector space. We overcome this limitation by ``randomization'': our approximators output discrete probability measures over $\mathcal{Y}$. When $\mathcal{X}$ and $\mathcal{Y}$ are Polish without additional structure, we prove very general qualitative guarantees; when they have suitable combinatorial structure, we prove quantitative guarantees for H\""{o}lder-like maps, including maps between finite graphs, solution operators to rough differential equations between certain Carnot groups, and continuous non-linear operators between Banach spaces arising in inverse problems. In particular, we show that the required number of Dirac measures is determined by the combinatorial structure of $\mathcal{X}$ and $\mathcal{Y}$. For barycentric $\mathcal{Y}$, including Banach spaces, $\mathbb{R}$-trees, Hadamard manifolds, or Wasserstein spaces on Polish metric spaces, our approximators reduce to $\mathcal{Y}$-valued functions. When the Euclidean approximators are neural networks, our constructions generalize transformer networks, providing a new probabilistic viewpoint of geometric deep learning. ",An Approximation Theory for Metric Space-Valued Functions With A View Towards Deep Learning " Parallel server systems in transportation, manufacturing, and computing heavily rely on dynamic routing using connected cyber components for computation and communication. Yet, these components remain vulnerable to random malfunctions and malicious attacks, motivating the need for fault-tolerant dynamic routing that are both traffic-stabilizing and cost-efficient. 
In this paper, we consider a parallel server system with dynamic routing subject to reliability and security failures. For the reliability setting, we consider an infinite-horizon Markov decision process where the system operator strategically activates a protection mechanism upon each job arrival based on traffic state observations. We prove that an optimal deterministic threshold protection policy exists, based on the dynamic programming recursion of the HJB equation. For the security setting, we extend the model to an infinite-horizon stochastic game where the attacker strategically manipulates the routing assignment. We show that both players follow a threshold strategy at every Markov perfect equilibrium. For both failure settings, we also analyze the stability of the traffic queues under control. Finally, we develop approximate dynamic programming algorithms to compute the optimal/equilibrium policies, supplemented with numerical examples and experiments for validation and illustration. ",Cost-aware Defense for Parallel Server Systems against Reliability and Security Failures " We measure phonon dispersion and linewidth in a single crystal of MgB_2 along the Gamma-A, Gamma-M and A-L directions using inelastic X-Ray scattering. We use Density Functional Theory to compute the effect of both electron-phonon coupling and anharmonicity on the linewidth, obtaining excellent agreement with experiment. Anomalous broadening of the E_2g phonon mode is found all along Gamma-A. The dominant contribution to the linewidth is always the electron-phonon coupling. ",Phonon dispersion and lifetimes in MgB2 " We fabricate and characterize superconducting through-silicon vias and electrodes suitable for superconducting quantum processors. We measure internal quality factors of a million for test resonators excited at single-photon levels, on chips with superconducting vias used to stitch ground planes on the front and back sides of the chips. 
This resonator performance is on par with the state of the art for silicon-based planar solutions, despite the presence of vias. Via stitching of ground planes is an important enabling technology for increasing the physical size of quantum processor chips, and is a first step toward more complex quantum devices with three-dimensional integration. ",Qubit-compatible substrates with superconducting through-silicon vias " We consider the high-dimensional heteroscedastic regression model, where the mean and the log variance are modeled as a linear combination of input variables. Existing literature on high-dimensional linear regression models has largely ignored non-constant error variances, even though they commonly occur in a variety of applications ranging from biostatistics to finance. In this paper we study a class of non-convex penalized pseudolikelihood estimators for both the mean and variance parameters. We show that the Heteroscedastic Iterative Penalized Pseudolikelihood Optimizer (HIPPO) achieves the oracle property, that is, we prove that the rates of convergence are the same as if the true model was known. We demonstrate numerical properties of the procedure on a simulation study and real-world data. ",Variance function estimation in high-dimensions The expansion of an initially confined Bose-Einstein condensate into either free space or a tilted optical lattice is investigated in a mean-field approach. The effect of the interactions is to enhance or suppress the transport depending on the sign and strength of the interactions. These effects are discussed in detail in view of recent experiments probing non-equilibrium transport of ultracold quantum gases. 
,Mean-field transport of a Bose-Einstein condensate " We used X-ray tomography to characterize the geometry of all bubbles in a liquid foam of average liquid fraction $\phi_l\approx 17\%$ and to follow their evolution, measuring the normalized growth rate $\mathcal{G}=V^{-1/3}\frac{dV}{dt}$ for 7000 bubbles. While $\mathcal{G}$ does not depend only on the number of faces of a bubble, its average over $f-$faced bubbles scales as $G_f\sim f-f_0$ for large $f$s at all times. We discuss the dispersion of $\mathcal{G}$ and the influence of $V$ on $\mathcal{G}$. ","Experimental growth law for bubbles in a ""wet"" 3D liquid foam" " By decomposing the random walk path, we construct a multitype branching process with immigration in random environment for the corresponding random walk with bounded jumps in random environment. Then we give two applications of the branching structure. Firstly, we specify the explicit invariant density by a method different from the one used in Br\'emont [3] and reprove the law of large numbers of the random walk by a method known as ""the environment viewed from particles"". Secondly, the branching structure enables us to prove a stable limit law, generalizing the result of Kesten-Kozlov-Spitzer [11] for the nearest-neighbor random walk in random environment. As a byproduct, we also prove that the total population of a multitype branching process in random environment with immigration before the first regeneration belongs to the domain of attraction of some \kappa-stable law. ",Branching structure for an (L-1) random walk in random environment and its applications " Thorium ions exhibit unique nuclear properties with high relevance for testing symmetries of nature, and Paul traps feature an ideal experimental platform for performing high precision quantum logic spectroscopy. Loading of stable or long-lived isotopes is well-established and relies on ionization from an atomic beam. 
A different approach allows trapping short-lived isotopes available as alpha-decay daughters, which recoil from a thin sample of the precursor nuclide. A prominent example is the short-lived $^{229\text{m}}$Th, populated in a decay of long-lived $^{233}$U. Here, ions are provided by an external source and are decelerated to be available for trapping. Such setups offer the option to trap various isotopes and charge states of thorium. Investigating this complex procedure, we demonstrate the observation of single $^{232}$Th$^+$ ions trapped, embedded into and sympathetically cooled via Coulomb interactions by co-trapped $^{40}$Ca$^+$ ions. Furthermore, we discuss different options for a non-destructive identification of the sympathetically cooled thorium ions in the trap, and describe in detail our chosen experimental method, identifying mass and charge of thorium ions from the positions of calcium ions, as their fluorescence is imaged on a CCD camera. These findings are verified by means of a time-of-flight signal when extracting ions of different mass-to-charge ratio from the Paul trap and steering them into a detector. ","Catching, trapping and in-situ-identification of thorium ions inside Coulomb crystals of $^{40}$Ca$^+$ ions" DNS and laboratory experiments show that the spatial distribution of straining stagnation points in homogeneous isotropic 3D turbulence has a fractal structure with dimension D_s = 2. In Kinematic Simulations the time exponent gamma in Richardson's law and the fractal dimension D_s are related by gamma = 6/D_s. The Richardson constant is found to be an increasing function of the number of straining stagnation points in agreement with pair diffusion occurring in bursts when pairs meet such points in the flow. 
,Richardson's pair diffusion and the stagnation point structure of turbulence We search for the charmless B^0 decay with final state particles p Lambdabar pi^- gamma using the full data sample that contains 772 * 10^6 B Bar pairs collected at the Upsilon(4S) resonance with the Belle detector at the KEKB asymmetric-energy e^+ e^- collider. This decay is predicted to proceed predominantly via the b to s gamma radiative penguin process with a high energy photon. No significant signal is found. We set an upper limit of 6.5 * 10^-7 for the branching fraction of B^0 to p Lambdabar pi^- gamma at the 90% confidence level. ",Search for B0 to p Lambdabar pi- gamma at Belle " In this paper we study current accumulations in 3D ""tilted"" nulls formed by a folding of the spine and fan. A non-zero component of current parallel to the fan is required such that the null's fan plane and spine are not perpendicular. Our aims are to provide valid magnetohydrostatic equilibria and to describe the current accumulations in various cases involving finite plasma pressure. To create our equilibrium current structures we use a full, non-resistive, magnetohydrodynamic (MHD) code so that no reconnection is allowed. A series of experiments are performed in which a perturbed 3D tilted null relaxes towards an equilibrium via real, viscous damping forces. Changes to the initial plasma pressure and to magnetic parameters are investigated systematically. An initially tilted fan is associated with a non-zero Lorentz force that drives the fan and spine to collapse towards each other, in a similar manner to the collapse of a 2D X-point. In the final equilibrium state for an initially radial null with only the current perpendicular to the spine, the current concentrates along the tilt axis of the fan and in a layer about the null point with a sharp peak at the null itself. 
The continued growth of this peak indicates that the system is in an asymptotic regime involving an infinite time singularity at the null. When the initial tilt disturbance (current perpendicular to the spine) is combined with a spiral-type disturbance (current parallel to the spine), the final current density concentrates in three regions: one on the fan along its tilt axis and two around the spine, above and below the fan. The increased area of current accumulation leads to a weakening of the singularity formed at the null. The 3D spine-fan collapse with generic current studied here provides the ideal setup for non-steady reconnection studies. ",Magnetohydrodynamics dynamical relaxation of coronal magnetic fields. IV. 3D tilted nulls " Recently it has been shown that quantum cryptography beyond pure entanglement distillation is possible and a paradigm for the associated protocols has been established. Here we systematically generalize the whole paradigm to the multipartite scenario. We provide constructions of new classes of multipartite bound entangled states, i.e., those with underlying twisted GHZ structure and nonzero distillable cryptographic key. We quantitatively estimate the key from below with the help of the privacy squeezing technique. ",Multipartite secret key distillation and bound entanglement The basic properties of locally finite triangulated categories are discussed. The focus is on Auslander--Reiten theory and the lattice of thick subcategories. ,Report on locally finite triangulated categories " We consider the standard site percolation model on the three dimensional cubic lattice. Starting solely with the hypothesis that $\theta(p)>0$, we prove that, for any $\alpha>0$, there exists $\kappa>0$ such that, with probability larger than $1-1/n^\alpha$, every pair of vertices inside the box $\Lambda(n)$ are joined by a path having at most $\kappa(\ln n)^2$ closed sites. 
",The travel time in a finite box in supercritical Bernoulli percolation " To a symmetric, relatively ample line bundle on an abelian scheme one can associate a linear combination of the determinant bundle and the relative canonical bundle, which is a torsion element in the Picard group of the base. We improve the bound on the order of this element found by Faltings and Chai. In particular, we obtain an optimal bound when the degree of the line bundle d is odd and the set of residue characteristics of the base does not intersect the set of primes p dividing d, such that $p\equiv -1\mod(4)$ and p<2g, where g is the relative dimension of the abelian scheme. Also, we show that in some cases these torsion elements generate the entire torsion subgroup in the Picard group of the corresponding moduli stack. ",Determinant bundles for abelian schemes " Writing generic data processing pipelines requires that the algorithmic code does not ever have to know about data formats of files, or the locations of those files. At LSST we have a software system known as ""the Data Butler,"" that abstracts these details from the software developer. Scientists can specify the dataset they want in terms they understand, such as filter, observation identifier, date of observation, and instrument name, and the Butler translates that to one or more files which are read and returned to them as a single Python object. Conversely, once they have created a new dataset they can give it back to the Butler, with a label describing its new status, and the Butler can write it in whatever format it has been configured to use. All configuration is in YAML and supports standard defaults whilst allowing overrides. ",Abstracting the storage and retrieval of image data at the LSST " Infrared spectrum (IR) of Herbig Ae young stars was reproduced and classified by hydrocarbon pentagon-hexagon combined molecules by the quantum chemical calculation. Observed IR list by B. Acke et al. 
was categorized into four classes. Among 53 Herbig Ae stars, 26 samples show a featured IR pattern named Type-D, which shows common IR peaks at 6.2, 8.3, 9.2, 10.0, 11.3, 12.1, and 14.0 micrometer. A typical star is HD144432. A calculation on the di-cation molecule (C12H8)2+, having one hydrocarbon pentagon and two hexagons, shows the best coincidence at 6.1, 8.2, 9.2, 9.9, 11.3, 12.2, and 14.1 micrometer. There is some variation in Type-D. The spectrum of HD37357 was explained by a mixture of the di-cation (C12H8)2+ and the tri-cation (C12H8)3+. The ubiquitously observed Type-B spectrum was found in 12 samples of Acke's list. In the case of HD85567, the 16 observed peaks were precisely reproduced by a single molecule (C23H12)2+. There is also a mixture case of Type-B and Type-D; a typical example is HD142527. In this study, we could identify the hidden carrier molecules for all types of IR in Herbig Ae stars. ",Herbig Ae Young Star's Infrared Spectrum Identified By Hydrocarbon Pentagon-Hexagon Combined Molecules A systematic description of the Wess-Zumino-Witten model is presented. The symplectic method plays the major role in this paper and also gives the relationship between the WZW model and the Chern-Simons model. The quantum theory is obtained to give the projective representation of the Loop group. The Gauss constraints for the connection whose curvature is only focused on several fixed points are solved. The Kohno connection and the Knizhnik-Zamolodchikov equation are derived. The holonomy representation and $\check R$-matrix representation of the braid group are discussed. ,Symplectic Approach of Wess-Zumino-Witten Model and Gauge Field Theories " We review the application of non-abelian discrete groups to the theory of neutrino masses and mixing, which is strongly suggested by the agreement of the Tri-Bimaximal mixing pattern with experiment. 
After summarizing the motivation and the formalism, we discuss specific models, based on A4, S4 and other finite groups, and their phenomenological implications, including lepton flavor violating processes, leptogenesis and the extension to quarks. As an alternative to Tri-Bimaximal mixing, the application of discrete flavor symmetries to quark-lepton complementarity and Bimaximal Mixing is also considered. ",Discrete Flavor Symmetries and Models of Neutrino Mixing " M87 is one of the nearest radio galaxies with a prominent jet extending from sub-pc to kpc-scales. Because of its proximity and the large mass of its central black hole, it is one of the best radio sources to study jet formation. We aim at studying the physical conditions near the jet base at projected separations from the BH of $\sim7-100$ Schwarzschild radii ($R_{\rm sch}$). Global mm-VLBI Array (GMVA) observations at 86 GHz ($\lambda=3.5\,$mm) provide an angular resolution of $\sim50\mu$as, which corresponds to a spatial resolution of only $7~R_{\rm sch}$, allowing us to reach these small spatial scales. We use five GMVA data sets of M87 obtained during 2004--2015 and present new high angular resolution VLBI maps at 86 GHz. In particular, we focus on the analysis of the brightness temperature, the jet ridge lines, and the jet to counter-jet ratio. The imaging reveals a parabolically expanding limb-brightened jet which emanates from a resolved VLBI core of $\sim(8-13) R_{\rm sch}$ size. The observed brightness temperature of the core at any epoch is $\sim(1-3)\times10^{10}\,$K, which is below the equipartition brightness temperature and suggests magnetic energy dominance at the jet base. We estimate the diameter of the jet at its base to be $\sim5 R_{\rm sch}$ assuming a self-similar jet structure. This suggests that the sheath of the jet may be anchored in the very inner portion of the accretion disk. The image stacking reveals faint emission at the center of the edge-brightened jet on sub-pc scales. 
We discuss its physical implications within the context of the spine-sheath structure of the jet. ",The limb-brightened jet of M87 down to 7 Schwarzschild radii scale " Two basic results on the S-rings over an abelian group are the Schur theorem on multipliers and the Wielandt theorem on primitive S-rings over groups with a cyclic Sylow subgroup. None of these theorems is directly generalized to the non-abelian case. Nevertheless, we prove that they are true for the central S-rings, i.e., for those which are contained in the center of the group ring of the underlying group (such S-rings naturally arise in the supercharacter theory). We also generalize the concept of a B-group introduced by Wielandt, and show that any Camina group is a generalized B-group whereas, with few exceptions, no simple group is of this type. ",The Schur-Wielandt theory for central S-rings " In this paper we consider the integral orthogonal group with respect to the quadratic form of signature $(2,3)$ given by $\left(\begin{smallmatrix} 0 & 1 \\ 1 & 0 \end{smallmatrix}\right) \perp \left(\begin{smallmatrix} 0 & 1 \\ 1 & 0 \end{smallmatrix}\right) \perp (-2N)$ for squarefree $N\in \mathbb{N}$. The associated Hecke algebra is commutative and the tensor product of its primary components, which turn out to be polynomial rings over $\mathbb{Z}$ in $2$ algebraically independent elements. The integral orthogonal group is isomorphic to the paramodular group of degree $2$ and level $N$, more precisely to its maximal discrete normal extension. The results can be reformulated in the paramodular setting by virtue of an explicit isomorphism. The Hecke algebra of the non-maximal paramodular group inside $\mathrm{Sp}(2;\mathbb{Q})$ fails to be commutative if $N> 1$. ",The Hecke algebras for the orthogonal group $SO(2,3)$ and the paramodular group of degree $2$ " Many academics use yearly publication numbers to quantify academic interest in their research topic. 
While such visualisations are ubiquitous in grant applications, manuscript introductions, and review articles, they fail to account for the rapid growth in scientific publications. As a result, any search term will likely show an increase in supposed ""academic interest"". One proposed solution is to normalise yearly publication rates by field size, but this is arduous and difficult. Here, we propose a simpler index that normalises keywords of interest by a ubiquitous and innocuous keyword, such as ""banana"". Alternatively, one could opt for field-specific keywords or hierarchical structures (e.g. PubMed's Medical Subject Headings, MeSH) to compute ""interest market share"". Using this approach, we uncovered plausible trends in academic interest in examples from the medical literature. In neuroimaging, we found that not the supplementary motor area (as was previously claimed), but the prefrontal cortex is the most interesting part of the brain. In cancer research, we found a contemporary preference for cancers with high prevalence and clinical severity, and notable declines in interest for more treatable or likely benign neoplasms. Finally, we found that interest in respiratory viral infections spiked when strains showed potential for pandemic involvement, with SARS-CoV-2 and the COVID-19 pandemic being the most extreme example. In sum, the time is ripe for a quick and easy method to quantify trends in academic interest for anecdotal purposes. We provide such a method, along with software for researchers looking to implement it in their own writing. 
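The normalisation described above reduces to a simple per-year ratio of counts. A minimal sketch, in which the keyword names and all counts are made-up illustrations rather than figures from the paper:

```python
# Toy sketch of the "banana index": normalise yearly publication counts for a
# keyword of interest by the counts for an innocuous baseline keyword.
# All keyword names and counts below are made up for illustration.

def interest_index(keyword_counts, baseline_counts):
    """Yearly publication counts normalised by a baseline keyword's counts."""
    return {year: keyword_counts[year] / baseline_counts[year]
            for year in keyword_counts if year in baseline_counts}

prefrontal = {2000: 120, 2010: 480, 2020: 960}   # hypothetical raw counts
banana     = {2000: 300, 2010: 600, 2020: 1200}  # hypothetical baseline counts

index = interest_index(prefrontal, banana)
# Raw counts grow eight-fold, but the normalised index merely doubles:
print(index)  # {2000: 0.4, 2010: 0.8, 2020: 0.8}
```

The same ratio with a field-specific baseline (or a MeSH-term total) gives the "interest market share" variant mentioned above.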
",Banana for scale: Gauging trends in academic interest by normalising publication rates to common and innocuous keywords " The purpose of this Comment is to show that the solutions to the zero energy Schr\""odinger equations for monomial central potentials discussed in a recently published Letter, may also be obtained from the corresponding free particle solutions in a straight forwardly way, using an algorithm previously devised by us. New solutions to the zero energy Schr\""odinger equation are also exhibited. ","Comment on ""Quantum bound states with zero binding energy""" " We compare the interaction parameters measured on LaMnO$_3$ to single site dynamical mean field estimates of the critical correlation strength needed to drive a Mott transition, finding that the total correlation strength (electron-electron plus electron-lattice) is very close to but slightly larger than the critical value, while if the electron lattice interaction is neglected the model is metallic. Our results emphasize the importance of additional physics including the buckling of the Mn-O-Mn bonds. ",LaMnO$_3$ is a Mott Insulator: an precise definition and an evaluation of the local interaction strength " The low energy losses in the superconducting magnetic levitation make it attractive for exciting applications in physics. Recently, superconducting magnetic levitation has been realized as novel mechanical transduction for the individual spin qubit in the nitrogen-vacancy center [1]. Furthermore, the Meissner has been proposed for the study of modified gravitational wave detection [2]. Meissner levitation within the microwave cavity could open avenues for the novel cavity optomechanical system, readout for quantum object such as the transmon, and magnon, gravitational wave detection, and magnetomechanics [3]. This work characterized magnetic levitation within a microwave. 
It also discusses the possibilities, challenges, and room-temperature and cryogenic experiments for the cavity-magnet system. ","Magnetic levitation within a microwave cavity: characterization, challenges, and possibilities" " A thin tube is an $n$-dimensional space which is very thin in $n-1$ directions, compared to the remaining direction, for example the $\epsilon$-neighborhood of a curve or an embedded graph in $\R^n$ for small $\epsilon$. The Laplacian on thin tubes and related operators have been studied in various contexts, with different goals but overlapping techniques. In this survey we explain some of these contexts, methods and results, hoping to encourage more interaction between the disciplines mentioned in the title. ","Thin tubes in mathematical physics, global analysis and spectral geometry" " Because of the infrared renormalons, it is difficult to achieve power accuracy in the traditional approach to Wilson's operator product expansion. Based on a new perturbative renormalization scheme for power-divergent operators, I propose a practical version of the OPE that allows one to calculate power corrections to the desired accuracy. The method is applied to the expansion of the vector-current correlation function in the QCD vacuum, in which the field-theoretical status of the gluon condensate is discussed. ",WILSON'S EXPANSION WITH POWER ACCURACY " We introduce a new approach for computing optimal equilibria via learning in games. It applies to extensive-form settings with any number of players, including mechanism design, information design, and solution concepts such as correlated, communication, and certification equilibria. We observe that optimal equilibria are minimax equilibrium strategies of a player in an extensive-form zero-sum game. This reformulation allows us to apply techniques for learning in zero-sum games, yielding the first learning dynamics that converge to optimal equilibria, not only in empirical averages, but also in iterates. 
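For intuition about the zero-sum reformulation above, here is a generic sketch of learning in a tiny zero-sum matrix game (matching pennies), not the authors' extensive-form algorithm: multiplicative-weights self-play, whose empirical averages approach the minimax strategy (the work above achieves the stronger convergence in iterates).

```python
import math

# Multiplicative-weights self-play on matching pennies, a 2x2 zero-sum game.
# Illustrative only: the paper treats extensive-form games and obtains
# convergence in iterates, not just in the empirical averages computed here.
A = [[1.0, -1.0], [-1.0, 1.0]]   # row player's payoff matrix
eta, T = 0.05, 5000
wx, wy = [1.0, 2.0], [1.0, 1.0]  # asymmetric start for the row player
avg_x = [0.0, 0.0]

for _ in range(T):
    x = [w / sum(wx) for w in wx]
    y = [w / sum(wy) for w in wy]
    avg_x = [a + xi / T for a, xi in zip(avg_x, x)]
    ux = [sum(A[i][j] * y[j] for j in range(2)) for i in range(2)]   # row payoffs
    uy = [-sum(A[i][j] * x[i] for i in range(2)) for j in range(2)]  # column payoffs
    wx = [w * math.exp(eta * u) for w, u in zip(wx, ux)]
    wy = [w * math.exp(eta * u) for w, u in zip(wy, uy)]

print(avg_x)  # close to the minimax strategy [0.5, 0.5]
```

The standard regret bound for multiplicative weights guarantees that the averaged strategies form an approximate minimax pair, even though the per-iteration strategies may cycle.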
We demonstrate the practical scalability and flexibility of our approach by attaining state-of-the-art performance in benchmark tabular games, and by computing an optimal mechanism for a sequential auction design problem using deep reinforcement learning. ",Computing Optimal Equilibria and Mechanisms via Learning in Zero-Sum Extensive-Form Games " We present high-accuracy relativistic coupled cluster calculations of the P-odd interaction coefficient $W_A$ describing the nuclear anapole moment effect on the molecular electronic structure. The molecule under study, BaF, is considered a promising candidate for the measurement of the nuclear anapole moment, and the preparation for the experiment is now underway [Altuntas et al., Phys. Rev. Lett. 120, 142501 (2018)]. The influence of various computational parameters (size of the basis set, treatment of relativistic effects, and treatment of electron correlation) on the calculated $W_A$ coefficient is investigated, and a recommended value of 147.7 Hz with an estimated uncertainty of 1.5% is proposed. ",The nuclear anapole moment interaction in BaF from relativistic coupled cluster theory " Experiments require human decisions in the design process, which in turn are reformulated and summarized as inputs into a system (computational or otherwise) to generate the experimental design. I leverage this system to promote a language of experimental designs by proposing a novel computational framework, called ""the grammar of experimental designs"", to specify experimental designs based on an object-oriented programming system that declaratively encapsulates the experimental structure. The framework aims to engage human cognition by building experimental designs with modular functions that modify a targeted singular element of the experimental design object. The syntax and semantics of the framework are built upon considerations from multiple perspectives. 
While the core framework is language-agnostic, the framework is implemented in the `edibble` R-package. A range of examples is shown to demonstrate the utility of the framework. ",Towards a unified language in experimental designs propagated by a software framework We successfully demonstrate coexistence of record-high 11.2 Tb/s (56x200Gb/s) classical channels with a discrete-variable-QKD channel over a multicore fibre. Continuous secret key generation is confirmed together with classical channel performance below the SDFEC limit and a minimum quantum channel spacing of 17nm in the C-band. ,Coexistence of 11.2Tb/s Carrier-Grade Classical Channels and a DV-QKD Channel over a 7-Core Multicore Fibre " The magnetic behavior of molecular Single-Chain Magnets is investigated in the framework of a one-dimensional Ising model with single spin-flip Glauber dynamics. Opportune modifications to the original theory are required in order to account for reciprocal non-collinearity of local anisotropy axes and the crystallographic (laboratory) frame. The extension of Glauber's theory to the case of a collinear Ising ferrimagnetic chain is also discussed. Within this formalism, both the dynamics of magnetization reversal in zero field and the response of the system to a weak magnetic field, oscillating in time, are studied. Depending on the geometry, selection rules are found for the occurrence of slow relaxation of the magnetization at low temperatures, as well as for resonant behavior of the a.c. susceptibility as a function of temperature at low frequencies. The present theory applies successfully to some real systems, namely Mn-, Dy-, and Co-based molecular magnetic chains, showing that Single-Chain-Magnet behavior is not only a feature of collinear ferro- and ferrimagnetic, but also of canted antiferromagnetic chains. 
",Selection rules for Single-Chain-Magnet behavior in non-collinear Ising systems " The geodesic properties of the extraordinary vacuum string solution in (4+1) dimensions are analyzed by using Hamilton-Jacobi method. The geodesic motions show distinct properties from those of the static one. Especially, any freely falling particle can not arrive at the horizon or singularity. There exist stable null circular orbits and bouncing timelike and null geodesics. To get into the horizon {or singularity}, a particle need to follow a non-geodesic trajectory. We also analyze the orbit precession to show that the precession angle has distinct features for each geometry such as naked singularity, black string, and wormhole. ",Geodesic motions in extraordinary string geometry " We study rational Lagrangian immersions in a cotangent bundle, based on the microlocal theory of sheaves. We construct a sheaf quantization of a rational Lagrangian immersion and investigate its properties in Tamarkin category. Using the sheaf quantization, we give an explicit bound for the displacement energy and a Betti/cup-length estimate for the number of the intersection points of the immersion and its Hamiltonian image by a purely sheaf-theoretic method. ",Sheaf quantization and intersection of rational Lagrangian immersions " We investigate the charge fluctuations of a single-electron box (metallic grain) coupled to a lead via a smaller quantum dot in the Kondo regime. The most interesting aspect of this problem resides in the interplay between spin Kondo physics stemming from the screening of the spin of the small dot and orbital Kondo physics emerging when charging states of the grain with (charge) Q=0 and Q=e are almost degenerate. Combining Wilson's numerical renormalization-group method with perturbative scaling approaches we push forward our previous work [K. Le Hur and P. Simon, Phys. Rev. B 67, 201308R (2003)]. 
We emphasize that for symmetric and slightly asymmetric barriers, the strong entanglement of charge and spin flip events in this setup inevitably results in a non-trivial stable SU(4) Kondo fixed point near the degeneracy points of the grain. By analogy with a small dot sandwiched between two leads, the ground state is Fermi-liquid-like, which considerably smears out the Coulomb staircase behavior and prevents the Matveev logarithmic singularity from arising. Most notably, the associated Kondo temperature $T_K^{SU(4)}$ might be raised compared to that in conductance experiments through a small quantum dot $(\sim 1K)$, which makes the observation of our predictions a priori accessible. We discuss the robustness of the SU(4) correlated state against the inclusion of an external magnetic field, a deviation from the degeneracy points, particle-hole symmetry in the small dot, and asymmetric tunnel junctions, and comment on the different crossovers. ",Maximized Orbital and Spin Kondo effects in a single-electron transistor " The split property of a pure state for a certain cut of a quantum spin system can be understood as the entanglement between the two subsystems being weak. From this point of view, we may say that if it is not possible to transform a state $\omega$ via sufficiently local automorphisms (in a sense that we will make precise) into a state satisfying the split property, then the state $\omega$ has long-range entanglement. It is well known that in 1D, gapped ground states have the split property with respect to cutting the system into left and right half-chains. In 2D, however, the split property fails to hold for interesting models such as Kitaev's toric code. In fact, we will show that this failure is the reason that anyons can exist in that model. There is a folklore saying that the existence of anyons, like in the toric code model, implies long-range entanglement of the state. In this paper, we prove this folklore in an infinite-dimensional setting. 
More precisely, we show that long-range entanglement, in a way that we will define precisely, is a necessary condition to have non-trivial superselection sectors. Anyons in particular give rise to such non-trivial sectors. States with the split property for cones, on the other hand, do not admit non-trivial sectors. A key technical ingredient of our proof is that under suitable assumptions on locality, the automorphisms generated by local interactions can be 'approximately factorized': they can be written as the tensor product of automorphisms localized in a cone and its complement respectively, followed by an automorphism acting near the 'boundary' of $\Lambda$, and conjugation with a unitary. This result may be of independent interest. This technique also allows us to prove that the approximate split property, a weaker version of the split property that is satisfied in e.g. the toric code, is stable under applying such automorphisms. ",The split and approximate split property in 2D systems: stability and absence of superselection sectors We present here an efficient method which systematically reduces the rank of the augmented space and thereby helps to implement augmented space recursion for any real calculation. Our method is based on the symmetry of the Hamiltonian in the augmented space and on keeping the recursion basis vectors in the irreducible subspace of the Hilbert space. ,Symmetry reduction in the augmented space recursion formalism for random binary alloys " Short-range audio channels have a few distinguishing characteristics: ease of use, low deployment costs, and easily tunable frequencies. Moreover, thanks to their seamless adaptability to the security context, many techniques and tools based on audio signals have been recently proposed. 
However, while the most promising solutions are turning into valuable commercial products, acoustic channels are also increasingly used to launch attacks against systems and devices, leading to security concerns that could thwart their adoption. To provide a rigorous, scientific, security-oriented review of the field, in this paper we survey and classify methods, applications, and use-cases rooted in short-range audio channels for the provisioning of security services---including Two-Factor Authentication techniques, pairing solutions, device authorization strategies, defense methodologies, and attack schemes. Moreover, we also point out the strengths and weaknesses deriving from the use of short-range audio channels. Finally, we provide open research issues in the context of short-range audio channels security, calling for contributions from both academia and industry. ","Short-Range Audio Channels Security: Survey of Mechanisms, Applications, and Research Challenges" " We present NLTE abundances and atmospheric parameters for three metal-poor stars ([Fe/H]<-2) in the Coma Berenices ultra-faint dwarf galaxy (UFD). The derived results are based on new photometric observations obtained with the 2.5-m telescope of the SAI MSU Caucasian observatory and spectra from the archive of the 10-m Keck telescope. Effective temperatures were determined from V-I, V-K, V-J colours. For each star, Teffs derived from different colours agree within 20K. Surface gravities (log g) were calculated using a relation between log g, MV, BC, distance, Teff, and mass, adopted as 0.8MSun. The NLTE abundances for Na, Mg, Ca, Ti, Fe, Ni, Sr, and Ba were determined. A revision of atmospheric parameters and abundances, based on new photometric observations and accurate modelling of spectral line formation, resulted in a reinterpretation of the star formation history in the Coma Berenices UFD. 
The derived chemical abundance patterns of the three stars differ from each other and from the pattern typical of MW halo stars of similar [Fe/H]. The S1 star shows solar [alpha/Fe] and an unprecedentedly low [Na/Mg] of -1.46, which is the lowest value among metal-poor stars known to date. The abundance pattern of the S1 star is well reproduced by a nucleosynthesis model of metal-free massive-star explosions. The stars S2 and S3, in contrast to S1, show high [alpha/Fe] ratios, for example, [Mg/Fe]=0.8 in S2, while stars with -3.5<[Fe/H]<-2 in the other dwarf galaxies and the MW show a typical ratio [Ca,Mg/Fe] of 0.3 dex. All three stars show low Sr and Ba abundances. These peculiarities in chemical composition argue for a small number of nucleosynthesis events having contributed to the chemical abundances of these stars. The wide range of metallicity, 0.65 dex, observed in Coma Berenices is likely produced by inhomogeneous mixing of the interstellar medium rather than by an increase in [Fe/H] during prolonged star formation. ",Chemical composition of metal-poor stars in Coma Berenices ultra-faint dwarf galaxy as a proxy to individual chemical enrichment events " A surrogate endpoint (SE) for overall survival in cancer patients is essential to improving the efficiency of oncology drug development. In practice, we may discover a new patient-level association with survival, based on one or more clinical or biological features, in a discovery cohort, and then measure the trial-level association across studies in a meta-analysis to validate the SE. To understand how well various patient-level metrics would indicate the eventual trial-level association, we considered causal biological trajectories based on bi-exponential functions, modeled the strength of their impact on survival hazards via a parameter $\alpha$, and simulated the trajectories and survival times in randomized trials simultaneously. 
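A toy version of the trajectory-plus-survival simulation just described, in which every functional form, constant, and the Gaussian measurement noise is an illustrative assumption rather than the authors' exact model: a bi-exponential trajectory whose early value scales an exponential survival hazard through $\alpha$, followed by a patient-level concordance (C-index) check between the early measurement and survival time.

```python
import math
import random

random.seed(0)

def trajectory(t, a=2.0, b=0.3, c=1.0, d=0.05):
    """Toy bi-exponential biomarker trajectory (illustrative constants)."""
    return a * math.exp(-b * t) + c * math.exp(-d * t)

def simulate_patient(alpha, t_early=1.0):
    """Early trajectory value (the SE) and a survival time whose hazard
    scales as exp(alpha * SE)."""
    z = trajectory(t_early) + random.gauss(0.0, 0.2)
    t_surv = random.expovariate(0.1 * math.exp(alpha * z))
    return z, t_surv

def c_index(pairs):
    """Harrell's C: fraction of patient pairs in which the higher biomarker
    value goes with the shorter survival time (no censoring here)."""
    conc = total = 0
    for i in range(len(pairs)):
        for j in range(i + 1, len(pairs)):
            (zi, ti), (zj, tj) = pairs[i], pairs[j]
            total += 1
            conc += (zi > zj) == (ti < tj)
    return conc / total

cohort = [simulate_patient(alpha=1.5) for _ in range(300)]
c = c_index(cohort)
print(round(c, 2))  # noticeably above 0.5 once alpha > 0
```

Repeating this over many simulated trials and regressing trial-level treatment effects gives the trial-level association studied above.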
We set an early time point in the trials at which the trajectory measurement became the SE value. From simulated discovery cohorts, we compared patient-level metrics, including the C-index, the integrated Brier score, and the log hazard ratio, between SE values and survival times. We assembled multiple simulated studies to enable meta-analyses to estimate the trial-level association. Across all the simulation scenarios considered here, we found tight correlations among the three patient-level metrics and similar correlations between any of them and the trial-level metric. Despite the continual increase in $\alpha$, both patient- and trial-level metrics often plateaued together; their association always decreased quickly as $\alpha$ increased. This suggests that incorporating additional biological factors into a composite SE is likely to have diminishing returns on improving both the patient-level and trial-level association. ",Metrics to find a surrogate endpoint of OS in metastatic oncology trials: a simulation study " Modern robotic systems are endowed with superior mobility and mechanical skills that make them suited to be employed in real-world scenarios, where interactions with heavy objects and precise manipulation capabilities are required. For instance, legged robots with high payload capacity can be used in disaster scenarios to remove dangerous material or carry injured people. It is thus essential to develop planning algorithms that can enable complex robots to perform motion and manipulation tasks accurately. In addition, online adaptation mechanisms with respect to new, unknown environments are needed. In this work, we impose that the optimal state-input trajectories generated by Model Predictive Control (MPC) satisfy the Lyapunov function criterion derived in adaptive control for robotic systems. 
As a result, we combine the stability guarantees provided by Control Lyapunov Functions (CLFs) and the optimality offered by MPC in a unified adaptive framework, yielding improved performance during the robot's interaction with unknown objects. We validate the proposed approach in simulation and hardware tests on a quadrupedal robot carrying unmodeled payloads and pulling heavy boxes. ",Adaptive CLF-MPC With Application To Quadrupedal Robots " Advanced diffractive films may afford advantages over passive reflective surfaces for a variety of space missions that use solar or laser in-space propulsion. Three cases are compared: Sun-facing diffractive sails, Littrow diffraction configurations, and conventional reflective sails. A simple Earth-to-Mars orbit transfer at a constant attitude with respect to the sun-line finds no penalty for transparent diffractive sails. Advantages of the latter approach include actively controlled metasails and the reuse of photons. ",Radiation Pressure on a Diffractive Sailcraft " The LIGO gravitational wave detectors have recently reached their design sensitivity and finished a two-year science run. During this period, one year of data with unprecedented sensitivity has been collected. I will briefly describe the status of the LIGO detectors and the overall quality of the most recent science run. I will also present results of a search for inspiral waveforms in gravitational wave data coincident with the short gamma ray burst detected on 1st February 2007, with its sky location error box overlapping a spiral arm of M31. No gravitational wave signals were detected and a binary merger in M31 can be excluded at the 99% confidence level. ",GRB-triggered searches for gravitational waves in LIGO data " Homogeneity tests and interval estimations of the risk difference between two groups are of general interest under paired Bernoulli settings with the presence of stratification effects. 
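As a simple baseline for the interval estimation just mentioned, the textbook Wald interval for the risk difference between two independent groups can be computed directly; the stratified score and profile-likelihood procedures developed above refine this construction, and the counts used here are hypothetical.

```python
import math

# Baseline illustration: Wald-type confidence interval for the risk
# difference p1 - p2 between two independent groups. This is NOT the
# stratified score / profile-likelihood method of the paper, just the
# elementary construction those methods improve upon.

def risk_difference_ci(x1, n1, x2, n2, z=1.96):
    p1, p2 = x1 / n1, x2 / n2
    diff = p1 - p2
    se = math.sqrt(p1 * (1 - p1) / n1 + p2 * (1 - p2) / n2)
    return diff - z * se, diff + z * se

lo, hi = risk_difference_ci(30, 100, 18, 100)  # hypothetical counts
print((round(lo, 3), round(hi, 3)))  # (0.003, 0.237)
```

Since the interval excludes zero, this toy example would reject homogeneity of the two risks at the 5% level.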
Dallal [1] proposed a model that parameterizes the probability of an occurrence at one site given an occurrence at the other site. Based on this model, we propose three test statistics and evaluate their performance in terms of type I error control and power. Confidence intervals of a common risk difference with satisfactory coverage probabilities and interval lengths are constructed. Our simulation results show that the score test is the most robust and that the profile likelihood confidence interval outperforms the other proposed methods. Data from a study of acute otitis media are used to illustrate our proposed procedures. ",Homogeneity Tests and Interval Estimations of Risk Differences for Stratified Bilateral and Unilateral Correlated Data " In this work we investigate how to extract alternating time bounds from 'focussed' proof systems. Our main result is the identification of fragments of MALLw (MALL with weakening) complete for each level of the polynomial hierarchy. In one direction we encode QBF satisfiability and in the other we encode focussed proof search, and we show that the composition of the two encodings preserves quantifier alternation, yielding the required result. By carefully composing with well-known embeddings of MALLw into MALL, we obtain a similar delineation of MALL formulas, again carving out fragments complete for each level of the polynomial hierarchy. This refines the well-known results that both MALLw and MALL are PSPACE-complete. A key insight is that we have to refine the usual presentation of focussing to account for deterministic computations in proof search, which correspond to invertible rules that do not branch. This is so that we may more faithfully associate phases of focussed proof search with their alternating time complexity. This presentation seems to uncover more dualities at the level of proof search than usual presentations do, and so could be of proof-theoretic interest in its own right. 
",From QBFs to MALL and back via focussing: fragments of multiplicative additive linear logic for each level of the polynomial hierarchy " We determine the expectation value of the gauge invariant operator Tr [F^2+... ] for N=4 SU(N) SYM, in the presence of an infinitely heavy static particle in the symmetric representation of SU(N). We carry out the computation in the context of the AdS/CFT correspondence, by considering the perturbation of the dilaton field caused by the presence of a D3 brane dual to such an external probe. We find that the effective chromo-electric charge of the probe has exactly the same expression as the one recently found in the computation of energy loss by radiation. ",Gluonic fields of a static particle to all orders in 1/N " A hyperplane arrangement is said to satisfy the ``Riemann hypothesis'' if all roots of its characteristic polynomial have the same real part. This property was conjectured by Postnikov and Stanley for certain families of arrangements which are defined for any irreducible root system and was proved for the root system $A_{n-1}$. The proof is based on an explicit formula for the characteristic polynomial, which is of independent combinatorial significance. Here our previous derivation of this formula is simplified and extended to similar formulae for all but the exceptional root systems. The conjecture follows in these cases. ",Extended Linial Hyperplane Arrangements for Root Systems and a Conjecture of Postnikov and Stanley " Coupled clocks are a classic example of a synchronization system leading to periodic collective oscillations. This phenomenon already attracted the attention of Christian Huygens back in 1665,who described it as a kind of ""sympathy"" among oscillators. In this work we describe the formation of two types of laser frequency combs as a system of oscillators coupled through the beating of the lasing modes. 
We experimentally show two completely different types of synchronization in a quantum dot laser: in-phase and splay states. Both states can be generated in the same device, just by varying the damping losses of the system. This effectively modifies the coupling among the oscillators. The temporal output of the laser is characterized using both linear and quadratic autocorrelation techniques. Our results show that both pulses and frequency-modulated states can be generated on demand. These findings allow us to connect laser frequency combs produced by amplitude-modulated and frequency-modulated lasers, and link these to pattern formation in coupled systems such as Josephson-junction arrays. ",In-phase and anti-phase synchronization in a laser frequency comb " We study the structure of generalized parton distributions in impact parameter space with the aim of determining the size and role of small transverse separation components in the quark wave function. We analyze the relation between transverse momentum components and transverse separations. Wave functions with large transverse momentum components can simultaneously reproduce the behavior of the Dirac form factor at large momentum transfer, and of the deep inelastic structure functions at Bjorken x -> 1. The presence of large momentum components does not ensure, however, the dominance of small transverse distances at large x. We suggest that experiments measuring the attenuation of hadrons in the nuclear medium, or the onset of color transparency, can provide an alternative source of information on generalized parton distributions, by mapping out the behavior of the transverse components of the wave function. ",Generalized Parton Distributions and Color Transparency " In Raman spectroscopy of graphite and graphene, the $D$ band at $\sim 1355$cm$^{-1}$ is used as an indication of the dirtiness of a sample. 
However, our analysis suggests that the physics behind the $D$ band is closely related to a very clear idea for describing a molecule, namely the bonding and antibonding orbitals in graphene. In this paper, we review our recent work on the mechanism for activating the $D$ band at a graphene edge. ",The origin of Raman D Band: Bonding and Antibonding Orbitals in Graphene " A meta-analysis is performed of the literature on evolution in cosmic star-formation rate density from redshift unity to the present day. The measurements are extremely diverse, including radio, infrared, and ultraviolet broad-band photometric indicators, and visible and near-ultraviolet line-emission indicators. Although there is large scatter among indicators at any given redshift, virtually all studies find a significant decrease from redshift unity to the present day. This is the most heterogeneously confirmed result in the study of galaxy evolution. When comoving star-formation rate density is treated as being proportional to $(1+z)^{\beta}$, the meta-analysis gives a best-fit exponent and conservative confidence interval of $\beta= 2.7\pm 0.7$ in a world model with $(\Omega_M,\Omega_{\Lambda})=(0.3,0.7)$ and $\beta= 3.3\pm 0.8$ in $(\Omega_M,\Omega_{\Lambda})=(1.0,0.0)$. In either case these evolutionary trends are strong enough that the bulk of the stellar mass at the present day ought to be in old ($>6 \mathrm{Gyr}$) populations. ",A meta-analysis of cosmic star-formation history " In a social network, agents are intelligent and have the capability to make decisions to maximize their utilities. They can either make wise decisions by taking advantage of other agents' experiences through learning, or make decisions earlier to avoid competition from huge crowds. Both of these effects, social learning and negative network externality, play important roles in the decision process of an agent. 
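The decision structure in question is modeled on the Chinese restaurant process; the standard non-strategic version is easy to simulate (the concentration parameter theta = 1.0 below is an arbitrary illustrative choice):

```python
import random

random.seed(42)

def chinese_restaurant_process(n_customers, theta=1.0):
    """Standard (non-strategic) CRP: customer i joins an occupied table with
    probability size/(i + theta) and opens a new table with probability
    theta/(i + theta)."""
    tables = []  # occupancy count per table
    for i in range(n_customers):
        r = random.uniform(0.0, i + theta)
        acc = 0.0
        for t, size in enumerate(tables):
            acc += size
            if r < acc:
                tables[t] += 1
                break
        else:  # no existing table chosen: open a new one
            tables.append(1)
    return tables

seating = chinese_restaurant_process(100)
print(seating)       # a "rich get richer" occupancy pattern
print(sum(seating))  # 100: every customer is seated
```

In the strategic variant studied above, agents no longer seat themselves by this fixed rule but choose tables to maximize utility under social learning and negative network externality.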
While there are existing works on either social learning or negative network externality, a general study considering both of these contradictory effects is still limited. We find that the Chinese restaurant process, a popular random process, provides a well-defined structure to model the decision process of an agent under these two effects. By introducing strategic behavior into the non-strategic Chinese restaurant process, in Part I of this two-part paper, we propose a new game, called the Chinese Restaurant Game, to formulate the social learning problem with negative network externality. Through analyzing the proposed Chinese Restaurant Game, we derive the optimal strategy of each agent and provide a recursive method to achieve the optimal strategy. How social learning and negative network externality influence each other under various settings is also studied through simulations. ",Chinese Restaurant Game - Part I: Theory of Learning with Negative Network Externality " We study the problem of large-scale network embedding, which aims to learn latent representations for network mining applications. Previous research shows that 1) popular network embedding benchmarks, such as DeepWalk, are in essence implicitly factorizing a matrix with a closed form, and 2) the explicit factorization of such a matrix generates more powerful embeddings than existing methods. However, directly constructing and factorizing this matrix---which is dense---is prohibitively expensive in terms of both time and space, making it not scalable for large networks. In this work, we present the algorithm of large-scale network embedding as sparse matrix factorization (NetSMF). NetSMF leverages theories from spectral sparsification to efficiently sparsify the aforementioned dense matrix, enabling significantly improved efficiency in embedding learning. 
The sparsified matrix is spectrally close to the original dense one with a theoretically bounded approximation error, which helps maintain the representation power of the learned embeddings. We conduct experiments on networks of various scales and types. Results show that among both popular benchmarks and factorization based methods, NetSMF is the only method that achieves both high efficiency and effectiveness. We show that NetSMF requires only 24 hours to generate effective embeddings for a large-scale academic collaboration network with tens of millions of nodes, while it would cost DeepWalk months and is computationally infeasible for the dense matrix factorization solution. The source code of NetSMF is publicly available (https://github.com/xptree/NetSMF). ",NetSMF: Large-Scale Network Embedding as Sparse Matrix Factorization " Assume that $X_{1}, \ldots, X_{N}$ is an $\varepsilon$-contaminated sample of $N$ independent Gaussian vectors in $\mathbb{R}^d$ with mean $\mu$ and covariance $\Sigma$. In the strong $\varepsilon$-contamination model we assume that the adversary replaced an $\varepsilon$ fraction of vectors in the original Gaussian sample by any other vectors. We show that there is an estimator $\widehat \mu$ of the mean satisfying, with probability at least $1 - \delta$, a bound of the form \[ \|\widehat{\mu} - \mu\|_2 \le c\left(\sqrt{\frac{\operatorname{Tr}(\Sigma)}{N}} + \sqrt{\frac{\|\Sigma\|\log(1/\delta)}{N}} + \varepsilon\sqrt{\|\Sigma\|}\right), \] where $c > 0$ is an absolute constant and $\|\Sigma\|$ denotes the operator norm of $\Sigma$. In the same contaminated Gaussian setup, we construct an estimator $\widehat \Sigma$ of the covariance matrix $\Sigma$ that satisfies, with probability at least $1 - \delta$, \[ \left\|\widehat{\Sigma} - \Sigma\right\| \le c\left(\sqrt{\frac{\|\Sigma\|\operatorname{Tr}(\Sigma)}{N}} + \|\Sigma\|\sqrt{\frac{\log(1/\delta)}{N}} + \varepsilon\|\Sigma\|\right). 
\] Both results are optimal up to multiplicative constant factors. Despite the recent significant interest in robust statistics, achieving both dimension-free bounds in the canonical Gaussian case remained open. In fact, several previously known results were either dimension-dependent and required $\Sigma$ to be close to the identity, or had a sub-optimal dependence on the contamination level $\varepsilon$. As a part of the analysis, we derive sharp concentration inequalities for central order statistics of Gaussian, folded normal, and chi-squared distributions. ",Statistically Optimal Robust Mean and Covariance Estimation for Anisotropic Gaussians " There is an approximately 9% discrepancy, corresponding to 2.4 sigma, between two independent constraints on the expansion rate of the universe: one indirectly arising from the cosmic microwave background and baryon acoustic oscillations, and one more directly obtained from local measurements of the relation between redshifts and distances to sources. We argue that by taking into account the local gravitational potential at the position of the observer, this tension - strengthened by the recent Planck results - is partially relieved and the concordance of the standard model of cosmology increased. We estimate that measurements of the local Hubble constant are subject to a cosmic variance of about 2.4% (limiting the local sample to redshifts z>0.010) or 1.3% (limiting it to z>0.023), a more significant correction than that taken into account already. Nonetheless, we show that one would need a very rare fluctuation to fully explain the offset in the Hubble rates. If this tension is further strengthened, a cosmology beyond the standard model may prove necessary. ",Cosmic variance and the measurement of the local Hubble parameter " We reconsider Chern-Simons gauge theory on a Seifert manifold M, which is the total space of a nontrivial circle bundle over a Riemann surface, possibly with orbifold points. 
As shown in previous work with Witten, the path integral technique of non-abelian localization can be used to express the partition function of Chern-Simons theory in terms of the equivariant cohomology of the moduli space of flat connections on M. Here we extend this result to apply to the expectation values of Wilson loop operators which wrap the circle fibers of M. Under localization, such a Wilson loop operator reduces naturally to the Chern character of an associated universal bundle over the moduli space. Along the way, we demonstrate that the stationary-phase approximation to the Wilson loop path integral is exact for torus knots, an observation made empirically by Lawrence and Rozansky prior to this work. ",Localization for Wilson Loops in Chern-Simons Theory " In this paper, we focus on the problem of integrating Energy-based Models (EBM) as guiding priors for motion optimization. EBMs are a set of neural networks that can represent expressive probability density distributions in terms of a Gibbs distribution parameterized by a suitable energy function. Due to their implicit nature, they can easily be integrated as optimization factors or as initial sampling distributions in the motion optimization problem, making them good candidates to integrate data-driven priors in the motion optimization problem. In this work, we present a set of required modeling and algorithmic choices to adapt EBMs into motion optimization. We investigate the benefit of including additional regularizers in the learning of the EBMs to use them with gradient-based optimizers and we present a set of EBM architectures to learn generalizable distributions for manipulation tasks. We present multiple cases in which the EBM could be integrated for motion optimization and evaluate the performance of learned EBMs as guiding priors for both simulated and real robot experiments. 
",Learning Implicit Priors for Motion Optimization The degree of entanglement is determined for an arbitrary state of a broad class of PT-symmetric bipartite composite systems. Subsequently we quantify the rate with which entangled states are generated and show that this rate can be characterized by a small set of parameters. These relations allow one in principle to improve the ability of these systems to entangle states. It is also noticed that many relations resemble corresponding ones in conventional quantum mechanics. ,Entanglement Efficiencies in PT-Symmetric Quantum Mechanics " I propose to compare the redshift-space density field directly to the REAL-SPACE velocity field. Such a comparison possesses all of the advantages of the conventional redshift-space analyses, while at the same time it is free of their disadvantages. In particular, the model-dependent reconstruction of the density field in real space is unnecessary, and so is the reconstruction of the velocity field in redshift space. The redshift-space velocity field can be reconstructed only at the linear order, because only at this order it is irrotational. Unlike the conventional redshift-space density--velocity comparisons, the comparison proposed here does not have to be restricted to the linear regime. Nonlinear effects can then be used to break the Omega-bias degeneracy plaguing the analyses based on the linear theory. I present a degeneracy-breaking method for the case of nonlinear but local bias. ",Redshift-space density versus real-space velocity comparison " The transition density and current provide valuable insight into the nature of nuclear vibrations. Nuclear vorticity is a quantity related to the transverse transition current. In this work, we study the evolution of the strength distribution, related to density fluctuations, and the vorticity strength distribution, as the neutron drip line is approached. 
Our results on the isoscalar, natural-parity multipole response of Ni isotopes, obtained by using a self-consistent Skyrme-Hartree-Fock + Continuum RPA model, indicate that, close to the drip line, the low-energy response is dominated by L>1 vortical transitions. ",Nuclear vorticity and the low-energy nuclear response - Towards the neutron drip line " We present observations of the star formation region NGC 7129 taken with the Multiband Imaging Photometer for Spitzer (MIPS). A significant population of sources, likely pre-main sequence members of the young stellar cluster, is revealed outside the central photoionization region. Combining with Infrared Array Camera (IRAC) and ground-based near-infrared images, we have obtained colors and spectral energy distributions for some 60 objects. The [3.6]-[4.5] vs. [8]-[24] color-color plane shows sources clustered at several different loci, which roughly correspond to the archetypal evolutionary sequence Class 0, I, II, and III. We obtain preliminary classifications for 36 objects, and find significant numbers of both Class I and II objects. Most of the pre-main sequence candidates are associated with the densest part of the molecular cloud surrounding the photoionization region, indicating active star formation over a broad area outside the central cluster. We discuss three Class II candidates that exhibit evidence of inner disk clearing, which would be some of the youngest known examples of a transition from accretion to optically thin quiescent disks. ",The 24-Micron View of Embedded Star Formation in NGC 7129 " For a classical system of noninteracting particles we establish recursive integral equations for the density of states on the microcanonical ensemble. The recursion can be either on the number of particles or on the dimension of the system. The solution of the integral equations is particularly simple when the single-particle density of states in one dimension follows a power law. 
Otherwise it can be obtained using a Laplace transform method. Since the Laplace transform of the microcanonical density of states is the canonical partition function, it factorizes for a system of noninteracting particles and the solution of the problem is straightforward. The results are illustrated on several classical examples. ",Recursive calculation of the microcanonical density of states We discuss the smoothness and strict convexity of the solution of the $L_p$ Minkowski problem when $p<1$ and the given measure has a positive density function. ,Smoothness in the $L_p$ Minkowski problem for $p<1$ In this short note, we formulate an action for N D0-branes that is manifestly invariant under gauged Galilean transformations. We also find its canonical form and determine first class constraints that are generators of gauge transformations. ,Action for N D0-Branes Invariant Under Gauged Galilean Transformations " We solve a long-standing open problem with its own long history dating back to the celebrated works of Klein and Ramanujan. This problem concerns the invariant decomposition formulas of the Hauptmodul for $\Gamma_0(p)$ under the action of finite simple groups $PSL(2, p)$ with $p=5, 7, 13$. The cases of $p=5$ and $7$ were solved by Klein and Ramanujan. Little was known about this problem for $p=13$. Using our invariant theory for $PSL(2, 13)$, we solve this problem. This leads to a new expression of the classical elliptic modular function of Klein: the $j$-function in terms of theta constants associated with $\Gamma(13)$. Moreover, we find an exotic modular equation, i.e., it has the same form as Ramanujan's modular equation of degree $13$, but with different kinds of modular parametrizations, which gives the geometry of the classical modular curve $X(13)$. 
","Dedekind $\eta$-function, Hauptmodul and invariant theory" " This paper proposes interference mitigation techniques for provisioning ultrareliable low-latency wireless communication in an industrial automation setting, where multiple transmissions from controllers to actuators interfere with each other. Channel fading and interference are key impairments in wireless communication. This paper leverages the recently proposed ``Occupy CoW'' protocol that efficiently exploits the broadcast opportunity and spatial diversity through a two-hop cooperative communication strategy among distributed receivers to combat deep fading, but points out that because this protocol avoids interference by frequency division orthogonal transmission, it is not scalable in terms of bandwidth required for achieving ultrareliability, when multiple controllers simultaneously communicate with multiple actuators (akin to the downlink of a multicell network). The main observation of this paper is that full frequency reuse in the first phase, together with successive decoding and cancellation of interference, can improve the performance of this strategy notably. We propose two protocols depending on whether interference cancellation or avoidance is implemented in the second phase, and show that both outperform Occupy CoW in terms of the required bandwidth and power for achieving ultrareliability at practical values of the transmit power. ",Interference Mitigation for Ultrareliable Low-Latency Wireless Communication " Chandra X-ray Observatory grating spectra of the supergiant X-ray Binary 4U 1700-37 reveal emission lines from hydrogen and helium-like S, Si, Mg, and Ne in the 4-13 A range. The spectrum also shows fluorescent lines from S, Si, and a prominent Fe K alpha line at 1.94 A. The lines contribute to the previously unaccounted ""soft excess"" in the flux in this range at orbital phi~0.7. 
The X-ray source was observed during intermittent flaring, and the strengths of the lines vary with the source state. The widths of the lines (FWHM approximately 1000-2000 km/s) can result from either Compton scattering or Doppler shifts. Power spectra of the hard X-rays show red noise, and the soft X-rays and lines additionally show quasiperiodic oscillations (QPOs) and a power-spectral break. Helium-like triplets of Si and Mg suggest that the gas is not in a pure photoionization equilibrium. We discuss whether resonant scattering could affect the line ratios or whether a portion of the wind may be heated to temperatures T~10^6 K. ",Chandra Grating Spectroscopy of the X-ray Binary 4U 1700-37 in a Flaring State We have generated more than 100 watts of radially polarized beam from a Yb fiber laser using a photonic crystal segmented half-wave-plate. We demonstrated the high power handling capability of such a photonic crystal segmented half-wave-plate and show that it is a promising external radial polarization converter for high-power Yb fiber lasers used in the laser cutting industry. ,High power radially polarized light generated from photonic crystal segmented half-wave-plate " A method is proposed to test for the nature of the pseudogap phase in cuprates using the recently developed technique of Fourier transform scanning tunneling spectroscopy. We show that the observed quasiparticle interference patterns depend critically on the quasiparticle coherence factors, making it possible to distinguish between the pseudogap dominated by superconducting fluctuations and by various particle-hole condensates. 
We show that each twisted Jacobi manifold has an associated Lie algebroid with a 1-cocycle. We introduce the notion of quasi-Jacobi bialgebroid and we prove that each twisted Jacobi manifold has a quasi-Jacobi bialgebroid canonically associated. Moreover, the double of a quasi-Jacobi bialgebroid is a Courant-Jacobi algebroid. Several examples of twisted Jacobi manifolds and twisted Dirac-Jacobi structures are presented. ","Twisted Jacobi manifolds, twisted Dirac-Jacobi structures and quasi-Jacobi bialgebroids" This paper considers the problem of robust $H_{\infty}$ control using decentralized state feedback controllers for a class of large-scale systems with Markov jump parameters. A sufficient condition is developed to design controllers using local system states and local system operation modes. The sufficient condition is given in terms of rank constrained linear matrix inequalities. An illustrative numerical example is given to demonstrate the developed theory. ,Local Mode Dependent Decentralized $H_{\infty}$ Control of Uncertain Markovian Jump Large-scale Systems " We study dust attenuation and stellar mass of $\rm z\sim 0.6$ star-forming galaxies using new SWIRE observations in IR and GALEX observations in UV. Two samples are selected from the SWIRE and GALEX source catalogs in the SWIRE/GALEX field ELAIS-N1-00 ($\Omega = 0.8$ deg$^2$). The UV selected sample has 600 galaxies with photometric redshift (hereafter photo-z) $0.5 \leq z \leq 0.7$ and NUV$\leq 23.5$ (corresponding to $\rm L_{FUV} \geq 10^{9.6} L_\sun$). The IR selected sample contains 430 galaxies with $f_{24\mu m} \geq 0.2$ mJy ($\rm L_{dust} \geq 10^{10.8} L_\sun$) in the same photo-z range. It is found that the mean $\rm L_{dust}/L_{FUV}$ ratios of the z=0.6 UV galaxies are consistent with that of their z=0 counterparts of the same $\rm L_{FUV}$. 
For IR galaxies, the mean $\rm L_{dust}/L_{FUV}$ ratios of the z=0.6 LIRGs ($\rm L_{dust} \sim 10^{11} L_\sun$) are about a factor of 2 lower than local LIRGs, whereas z=0.6 ULIRGs ($\rm L_{dust} \sim 10^{12} L_\sun$) have the same mean $\rm L_{dust}/L_{FUV}$ ratios as their local counterparts. This is consistent with the hypothesis that the dominant component of the LIRG population has changed from large, gas-rich spirals at z$>0.5$ to major-mergers at z=0. The stellar mass of z=0.6 UV galaxies of $\rm L_{FUV} \leq 10^{10.2} L_\sun$ is about a factor of 2 less than their local counterparts of the same luminosity, indicating growth of these galaxies. The mass of z=0.6 UV luminous galaxies (UVLGs: $\rm L_{FUV} > 10^{10.2} L_\sun$) and IR selected galaxies, which are nearly exclusively LIRGs and ULIRGs, is the same as their local counterparts. ",IR and UV Galaxies at z=0.6 -- Evolution of Dust Attenuation and Stellar Mass as Revealed by SWIRE and GALEX " Among the perovskite oxide family, KTaO$_3$ (KTO) has recently attracted considerable interest as a possible system for the realization of the Rashba effect. In this work, we devise a novel conducting interface by juxtaposing KTO with another insulator, namely LaVO$_3$ (LVO), and report planar Hall effect (PHE) and anisotropic magnetoresistance (AMR) measurements. This interface exhibits a signature of strong spin-orbit coupling. Our experimental observation of twofold AMR at low magnetic fields can be intuitively understood using a phenomenological theory for a Rashba spin-split system. At high fields ($\sim$8 T), we see a twofold-to-fourfold transition in the AMR that could not be explained using only Rashba spin-split energy spectra. We speculate that it might be generated through an intricate process arising from the interplay between strong spin-orbit coupling, broken inversion symmetry, relativistic conduction electrons and possible uncompensated localized vanadium spins. 
",Planar Hall Effect and Anisotropic Magnetoresistance in a polar-polar interface of LaVO$_3$-KTaO$_3$ with strong spin-orbit coupling " The model independent bounds on new neutral vector resonances masses, couplings and widths presented at arxiv:1112.0316 are updated with an integrated luminosity of L=4.7 fb^-1 from ATLAS and L=4.6 fb^-1 from CMS. These exclusion limits correspond to the most stringent existing bounds on the production of new neutral spin-1 resonances that decay to electroweak gauge boson pairs and that are associated to the electroweak symmetry breaking sector in several extensions of the Standard Model. ",Update of the Present Bounds on New Neutral Vector Resonances from Electroweak Gauge Boson Pair Production at the LHC " Building on previous work by the author and Robin Deeley, we give a thorough presentation of the techniques developed for synchronizing dynamical systems in the special case of synchronizing shift spaces. Following work of Thomsen, we give a construction of the homoclinic and heteroclinic $C^\ast$-algebras of a shift space in terms of Bratteli diagrams. Lastly we present several specific examples which demonstrate these techniques. For the even shift we give a complete computation of all the associated invariants. We also present an example of a strictly non-sofic synchronizing shift. In particular we discuss the rank of the $K$-theory of the homoclinic algebra of a shift space and its implications. We also give a construction for producing from any minimal shift a synchronizing shift whose set of non-synchronizing points is exactly the original minimal shift. 
",Synchronizing Dynamical Systems: Shift Spaces and $K$-Theory " Using a more accurate effective Hamiltonian governing the time evolution in the particle-antiparticle subspace of states than the one obtained within the Lee-Oehme--Yang approach we show that in the case of particles created at the initial instant t = 0 only the masses of a stable particle and its antiparticle are the same at all t > 0 in a CPT invariant system, whereas the masses of an unstable particle and its antiparticle are equal only at t = 0 and then during their time evolution they become slightly different for times t >> 0 if CP symmetry is violated but CPT symmetry holds. This property is used to show that if the baryon number B is not conserved then the asymmetry between numbers of unstable baryons and antibaryons can arise in a CPT invariant system at t >> 0 even in the thermal equilibrium state of this system. ",A contribution to the discussion of the matter-antimatter asymmetry problem " Surface photometry and a 21cm HI line spectrum of the giant double-ringed galaxy ESO 474-G26 are presented. The morphology of this system is unique among the 30,000 galaxies with >B15. Two almost orthogonal optical rings with diameters of 60 and 40 kpc surround the central body (assuming H0=70 km/s/Mpc). The outer one is an equatorial ring, while the inner ring lies in a nearly polar plane. The rings have blue optical colors typical of late-type spirals. Both appear to be rotating around the central galaxy, so that this system can be considered as a kinematically confirmed polar ring galaxy. Its observational characteristics are typical of galaxy merger remnants. Although the central object has a surface brightness distribution typical of elliptical galaxies, it has a higher surface brightness for its effective radius than ordinary ellipticals. 
Possible origins of this galaxy are discussed and numerical simulations are presented that illustrate the formation of the two rings in the merging process of two spiral galaxies, in which the observed appearance of ESO 474-G26 appears to be a transient stage. ",Galaxy transmutations: The double ringed galaxy ESO 474-G26 " The recurrent nova T Pyx underwent its sixth historical outburst in 2011, and became the subject of an intensive multi-wavelength observational campaign. We analyze data from the Swift and Suzaku satellites to produce a detailed X-ray light curve augmented by epochs of spectral information. X-ray observations yield mostly non-detections in the first four months of outburst, but both a super-soft and hard X-ray component rise rapidly after Day 115. The super-soft X-ray component, attributable to the photosphere of the nuclear-burning white dwarf, is relatively cool (~45 eV) and implies that the white dwarf in T Pyx is significantly below the Chandrasekhar mass (~1 M_sun). The late turn-on time of the super-soft component yields a large nova ejecta mass (>~10^-5 M_sun), consistent with estimates at other wavelengths. The hard X-ray component is well fit by a ~1 keV thermal plasma, and is attributed to shocks internal to the 2011 nova ejecta. The presence of a strong oxygen line in this thermal plasma on Day 194 requires a significantly super-solar abundance of oxygen and implies that the ejecta are polluted by white dwarf material. The X-ray light curve can be explained by a dual-phase ejection, with a significant delay between the first and second ejection phases, and the second ejection finally released two months after outburst. A delayed ejection is consistent with optical and radio observations of T Pyx, but the physical mechanism producing such a delay remains a mystery. 
",The 2011 Outburst of Recurrent Nova T Pyx: X-ray Observations Expose the White Dwarf Mass and Ejection Dynamics " Lymph node metastasis (LNM) is a significant prognostic factor in patients with head and neck cancer, and the ability to predict it accurately is essential for treatment optimization. PET and CT imaging are routinely used for LNM identification. However, uncertainties of LNM always exist especially for small size or reactive nodes. Radiomics and deep learning are the two preferred imaging-based strategies for node malignancy prediction. Radiomics models are built based on handcrafted features, and deep learning can learn the features automatically. We proposed a hybrid predictive model that combines many-objective radiomics (MO-radiomics) and 3-dimensional convolutional neural network (3D-CNN) through evidential reasoning (ER) approach. To build a more reliable model, we proposed a new many-objective radiomics model. Meanwhile, we designed a 3D-CNN that fully utilizes spatial contextual information. Finally, the outputs were fused through the ER approach. To study the predictability of the two modalities, three models were built for PET, CT, and PET&CT. The results showed that the model performed best when the two modalities were combined. Moreover, we showed that the quantitative results obtained from the hybrid model were better than those obtained from MO-radiomics and 3D-CNN. ",Predicting Lymph Node Metastasis in Head and Neck Cancer by Combining Many-objective Radiomics and 3-dimensioal Convolutional Neural Network through Evidential Reasoning " We have measured non-zero closure phases for about 29% of our sample of 56 nearby Asymptotic Giant Branch (AGB) stars, using the 3-telescope Infrared Optical Telescope Array (IOTA) interferometer at near-infrared wavelengths (H band) and with angular resolutions in the range 5-10 milliarcseconds. 
These nonzero closure phases can only be generated by asymmetric brightness distributions of the target stars or their surroundings. We discuss how these results were obtained, and how they might be interpreted in terms of structures on or near the target stars. We also report measured angular sizes and hypothesize that most Mira stars would show detectable asymmetry if observed with adequate angular resolution. ",First Surface-resolved Results with the IOTA Imaging Interferometer: Detection of Asymmetries in AGB stars " Ill-posed inverse problems are ubiquitous in applications. Understanding of algorithms for their solution has been greatly enhanced by a deep understanding of the linear inverse problem. In the applied communities, ensemble-based filtering methods have recently been used to solve inverse problems by introducing an artificial dynamical system. This opens up the possibility of using a range of other filtering methods, such as 3DVAR and Kalman-based methods, to solve inverse problems, again by introducing an artificial dynamical system. The aim of this paper is to analyze such methods in the context of the ill-posed linear inverse problem. Statistical linear inverse problems are studied in the sense that the observational noise is assumed to be derived via a realization of a Gaussian random variable. We investigate the asymptotic behavior of filter-based methods for these inverse problems. Rigorous convergence rates are established for 3DVAR and for the Kalman filters, including minimax rates in some instances. Blowup of 3DVAR and a variant of its basic form is also presented, and optimality of the Kalman filter is discussed. These analyses reveal a close connection between (iterative) regularization schemes in deterministic inverse problems and filter-based methods in data assimilation. Numerical experiments are presented to illustrate the theory. 
",Filter Based Methods For Statistical Linear Inverse Problems " The spin in a rotating frame has attracted a lot of attentions recently, as it deeply relates to both fundamental physics such as pseudo-magnetic field and geometric phase, and applications such as gyroscopic sensors. However, previous studies only focused on adiabatic limit, where the rotating frequency is much smaller than the spin frequency. Here we propose to use a levitated nano-diamond with a built-in nitrogen-vacancy (NV) center to study the dynamics and the geometric phase of a rotating electron spin without adiabatic approximation. We find that the transition between the spin levels appears when the rotating frequency is comparable to the spin frequency at zero magnetic field. Then we use Floquet theory to numerically solve the spin energy spectrum, study the spin dynamics and calculate the geometric phase under a finite magnetic field, where the rotating frequency to fulfill the resonant transition condition could be greatly reduced. ",Nonadiabatic dynamics and geometric phase of an ultrafast rotating electron spin " In this paper, we consider the IoT data discovery problem in very large and growing scale networks. Specifically, we investigate in depth the routing table summarization techniques to support effective and space-efficient IoT data discovery routing. Novel summarization algorithms, including alphabetical based, hash based, and meaning based summarization and their corresponding coding schemes are proposed. The issue of potentially misleading routing due to summarization is also investigated. Subsequently, we analyze the strategy of when to summarize in order to balance the tradeoff between the routing table compression rate and the chance of causing misleading routing. For experimental study, we have collected 100K IoT data streams from various IoT databases as the input dataset. 
Experimental results show that our summarization solution can reduce the routing table size by 20- to 30-fold with a 2-5% increase in latency when compared with similar peer-to-peer discovery routing algorithms without summarization. Also, our approach outperforms DHT-based approaches by 2- to 6-fold in terms of latency and traffic. ",Into Summarization Techniques for IoT Data Discovery Routing " This paper proposes an approach to build a high-quality text-to-speech (TTS) system for technical domains using data augmentation. An end-to-end (E2E) system is trained on hidden Markov model (HMM) based synthesized speech and further fine-tuned with studio-recorded TTS data to improve the timbre of the synthesized voice. The motivation behind the work is that issues of word skips and repetitions are usually absent in HMM systems due to their ability to model the duration distribution of phonemes accurately. Context-dependent pentaphone modeling, along with tree-based clustering and state-tying, takes care of unseen context and out-of-vocabulary words. A language model is also employed to reduce synthesis errors further. Subjective evaluations indicate that speech produced using the proposed system is superior to the baseline E2E synthesis approach in terms of intelligibility when combining complementing attributes from the HMM and E2E frameworks. Further analysis highlights the proposed approach's efficacy in low-resource scenarios. ",HMM-based data augmentation for E2E systems for building conversational speech synthesis systems 
When the corresponding Riemann problem admits a Riemann solution which consists of rarefaction waves and a contact discontinuity, it is proved that the solution of the Cauchy problem tends toward the linear combination of the rarefaction waves and the contact wave for the p-Laplacian type viscosity as time goes to infinity. This is the first result concerning the asymptotics toward a multiwave pattern for the Cauchy problem of the scalar conservation law with nonlinear viscosity. The proof is given by technical energy methods and careful estimates of the interactions between the nonlinear waves. ",Asymptotic behavior of solutions toward a multiwave pattern to the Cauchy problem for the scalar conservation law with degenerate flux and viscosity " We consider a non-atomic congestion game where each decision maker performs selfish optimization over states of a common MDP. The decision makers optimize for their own expected costs, and influence each other through congestion effects on the state-action costs. We analyze the sensitivity of MDP congestion game equilibria to uncertainty and perturbations in the state-action costs by applying an implicit function type analysis. The occurrence of a stochastic Braess paradox is defined, analyzed based on the sensitivity of game equilibria, and demonstrated in simulation. We further analyze how the introduction of stochastic dynamics affects the magnitude of the Braess paradox in comparison to deterministic dynamics. ",Sensitivity Analysis for Markov Decision Process Congestion Games " Let $M$ be a cancellative and commutative monoid (written additively). The monoid $M$ is atomic if every non-invertible element can be written as a sum of irreducible elements (often called atoms in the literature). Weaker versions of atomicity have been recently introduced and investigated, including the properties of being nearly atomic, almost atomic, quasi-atomic, and Furstenberg. 
In this paper, we investigate the atomic structure of lattice monoids (i.e., submonoids of a finite-rank free abelian group), putting special emphasis on the four atomic properties mentioned above. ",Atomicity in Rank-2 Lattice Monoids " We study price formation in intraday electricity markets in the presence of intermittent renewable generation. We consider the setting where a major producer may interact strategically with a large number of small producers. Using stochastic control theory, we identify the optimal strategies of agents with market impact and exhibit the Nash equilibrium in closed form in the asymptotic framework of mean field games with a major player. This is a companion paper to [F\'eron, Tankov, and Tinsi, Price formation and optimal trading in intraday electricity markets, arXiv:2009.04786, 2020], where a similar model is developed in the setting of identical agents. ",Price formation and optimal trading in intraday electricity markets with a major player " We study the impact of the multi-lepton searches at the LHC on supersymmetric models with compressed mass spectra. For such models the acceptances of the usual search strategies are significantly reduced due to the requirement of large effective mass and missing E_T. On the other hand, lepton searches have much lower thresholds for missing E_T and p_T of the final state objects. Therefore, if a model with a compressed mass spectrum allows for multi-lepton final states, one can derive constraints using multi-lepton searches. For a class of simplified models we study the exclusion limits using ATLAS multi-lepton search analyses for final states containing 2-4 electrons or muons with a total integrated luminosity of 1-2/fb at \sqrt{s}=7 TeV. We also modify those analyses by imposing additional cuts, so that their sensitivity to compressed supersymmetric models increases. 
Using the original and modified analyses, we show that the exclusion limits can be competitive with jet plus missing E_T searches, providing exclusion limits up to gluino masses of 1 TeV. We also analyse the efficiencies for several classes of events coming from different intermediate-state particles. This allows us to assess exclusion limits in similar classes of models with different cross sections and branching ratios without requiring a Monte Carlo simulation. ",Constraining compressed supersymmetry using leptonic signatures " Stationary solutions of the Chern-Simons effective field theory for fractional quantum Hall systems with edges are presented for Hall bar, disk and annulus geometries. In the infinitely long Hall bar geometry (non-compact case), the charge density is shown to be monotonic inside the sample. In sharp contrast, spatial oscillatory modes of charge density are found for the two circular geometries, which indicate that in systems with compact geometry, charge and current also exist far from the edges. ",Charge and current oscillations in Fractional quantum Hall systems with edges " This paper studies the data-driven reconstruction of firing rate dynamics of brain activity described by linear-threshold network models. Identifying the system parameters directly leads to a large number of variables and a highly non-convex objective function. Instead, our approach introduces a novel reformulation that incorporates biological organizational features and turns the identification problem into a scalar-variable optimization of a discontinuous, non-convex objective function. We prove that the minimizer of the objective function is unique and establish that the solution of the optimization problem leads to the identification of all the desired system parameters. These results are the basis for an algorithm that finds the optimizer by searching the different regions corresponding to the domain of definition of the objective function. 
To deal with measurement noise in sampled data, we propose a modification of the original algorithm whose identification error is linearly bounded by the magnitude of the measurement noise. We demonstrate the effectiveness of the proposed algorithms through simulations on synthetic and experimental data. ",Efficient Reconstruction of Neural Mass Dynamics Modeled by Linear-Threshold Networks " In this paper, we investigate the algorithmic stability of unsupervised representation learning with deep generative models, as a function of repeated re-training on the same input data. Algorithms for learning low-dimensional linear representations -- for example, principal components analysis (PCA) or linear independent components analysis (ICA) -- come with guarantees that they will always reveal the same latent representations (perhaps up to an arbitrary rotation or permutation). Unfortunately, for non-linear representation learning, such as in a variational auto-encoder (VAE) model trained by stochastic gradient descent, we have no such guarantees. Recent work on identifiability in non-linear ICA has introduced a family of deep generative models that have identifiable latent representations, achieved by conditioning on side information (e.g. informative labels). We empirically evaluate the stability of these models under repeated re-estimation of parameters, and compare them to both standard VAEs and deep generative models which learn to cluster in their latent space. Surprisingly, we discover that side information is not necessary for algorithmic stability: using standard quantitative measures of identifiability, we find that deep generative models with latent clusterings are empirically identifiable to the same degree as models which rely on auxiliary labels. We relate these results to the possibility of identifiable non-linear ICA. 
",I Don't Need u: Identifiable Non-Linear ICA Without Side Information " We present an extension of the Hardy--Littlewood inequality for multilinear forms. More precisely, let $\mathbb{K}$ be the real or complex scalar field and $m,k$ be positive integers with $m\geq k\,$ and $n_{1},\dots ,n_{k}$ be positive integers such that $n_{1}+\cdots +n_{k}=m$. ($a$) If $(r,p)\in (0,\infty )\times \lbrack 2m,\infty ]$ then there is a constant $D_{m,r,p,k}^{\mathbb{K}}\geq 1$ (not depending on $n$) such that $$ \left( \sum_{i_{1},\dots ,i_{k}=1}^{n}\left| T\left( e_{i_{1}}^{n_{1}},\dots ,e_{i_{k}}^{n_{k}}\right) \right| ^{r}\right) ^{% \frac{1}{r}}\leq D_{m,r,p,k}^{\mathbb{K}} \cdot n^{max\left\{ \frac{% 2kp-kpr-pr+2rm}{2pr},0\right\} }\left| T\right| $$ for all $m$-linear forms $T:\ell_{p}^{n}\times \cdots \times \ell_{p}^{n}\rightarrow \mathbb{K}$ and all positive integers $n$. Moreover, the exponent $max\left\{ \frac{2kp-kpr-pr+2rm}{2pr},0\right\} $ is optimal. ($b$) If $(r, p) \in (0, \infty) \times (m, 2m]$ then there is a constant $% D_{m,r,p, k}^{\mathbb{K}}\geq 1$ (not depending on $n$) such that $$ \left( \sum_{i_{1},\dots ,i_{k}=1}^{n }\left| T\left( e_{i_{1}}^{n_{1}},\dots ,e_{i_{k}}^{n_{k}}\right) \right| ^{r }\right) ^{% \frac{1}{r }}\leq D_{m,r,p, k}^{\mathbb{K}} \cdot n^{ max \left\{\frac{% p-rp+rm}{pr}, 0\right\}}\left| T\right| $$ for all $m$-linear forms $T:\ell_{p}^{n}\times \cdots \times \ell_{p}^{n}\rightarrow \mathbb{K}$ and all positive integers $n$. Moreover, the exponent $max \left\{\frac{p-rp+rm}{pr}, 0\right\}$ is optimal. The case $k=m$ recovers a recent result due to G. Araujo and D. Pellegrino. ",Summability of multilinear forms on classical sequence spaces " The quantum Frobenius map and it splitting are shown to descend to corresponding maps for generalized $q$-Schur algebras at a root of unity. We also define analogs of $q$-Schur algebras for any affine algebra, and prove the corresponding results for them. 
",q-Schur algebras and quantum Frobenius " Sensitivity methods for the analysis of the outputs of discrete Bayesian networks have been extensively studied and implemented in different software packages. These methods usually focus on the study of sensitivity functions and on the impact of a parameter change to the Chan-Darwiche distance. Although not fully recognized, the majority of these results heavily rely on the multilinear structure of atomic probabilities in terms of the conditional probability parameters associated with this type of network. By defining a statistical model through the polynomial expression of its associated defining conditional probabilities, we develop a unifying approach to sensitivity methods applicable to a large suite of models including extensions of Bayesian networks, for instance context-specific and dynamic ones, and chain event graphs. By then focusing on models whose defining polynomial is multilinear, our algebraic approach enables us to prove that the Chan-Darwiche distance is minimized for a certain class of multi-parameter contemporaneous variations when parameters are proportionally covaried. ","Sensitivity analysis, multilinearity and beyond" " Most software that runs on computers undergoes processing by compilers. Since compilers constitute the fundamental infrastructure of software development, their correctness is paramount. Over the years, researchers have invested in analyzing, understanding, and characterizing the bug features over mainstream compilers. These studies have demonstrated that compilers correctness requires greater research attention, and they also pave the way for compiler fuzzing. To improve compilers correctness, researchers have proposed numerous compiler fuzzing techniques. 
These techniques were initially developed for testing traditional compilers such as GCC/LLVM and have since been generalized to test various newly developed, domain-specific compilers, such as graphics shader compilers and deep learning (DL) compilers. In this survey, we provide a comprehensive summary of the research efforts for understanding and addressing compiler defects. Specifically, this survey mainly covers two aspects. First, it covers researchers' investigations of and expertise on compiler bugs, such as their symptoms and root causes. The compiler bug studies cover GCC/LLVM, JVM compilers, and DL compilers. In addition, it covers researchers' efforts in designing fuzzing techniques, including constructing test programs and designing test oracles. Besides discussing the existing work, this survey outlines several open challenges and highlights research opportunities. ",A Survey of Modern Compiler Fuzzing " We have analyzed the projected galaxy distributions in a subset of the ENACS/ COSMOS cluster sample. We made Maximum-Likelihood fits to the distribution of COSMOS galaxies for 4 theoretical profiles, with `cores' (generalized King- and Hubble-profiles) and with `cusps' (generalized Navarro et al., or NFW, and de Vaucouleurs profiles). We use the Likelihood ratio to investigate whether the observations are better described by profiles with cusps or with cores. Taking the King and NFW profiles as models of either class, we find that about 75% of the clusters are better fitted by the King profile than by the NFW profile. However, for the individual clusters the preference for the King profile is rarely significant. When we limit ourselves to the central regions it appears that the significance increases drastically, with 65% of the clusters showing a strong preference for a King over an NFW profile. We constructed composite clusters from the COSMOS and ENACS data, taking special care to avoid the creation or the destruction of cusps. 
We scale in three different ways (projected distances, core radii, r_{200}). In all three cases we find that the King profile is clearly preferred over the NFW profile. However, this preference is not shared by the brightest galaxies. We conclude that these galaxies are represented almost equally well by King and NFW profiles, but that the distribution of the fainter galaxies clearly shows a core rather than a cusp. Finally, we compared the outer slope of the galaxy distributions in our clusters with results for model calculations with different cosmological parameters. We conclude that the observed profile slope indicates a low value for Omega_0. This is consistent with the direct estimate of Omega_0 based on the M/L ratios of the individual clusters. ","ENACS VII, Galaxy density profiles of rich clusters of galaxies" " Based on the fact that the mass difference between chiral partners is an order parameter of the chiral phase transition and that the chiral order parameter reduces substantially at the chemical freeze-out point in ultra-relativistic heavy ion collisions, we argue that the production ratio of $K_1$ over $K^*$ in such collisions should be substantially larger than that predicted in the statistical hadronization model. We further show that while the enhancement effect might be contaminated by the relatively larger decrease of the $K_1$ meson compared to the $K^*$ meson during the hadronic phase, the signal will be visible through a systematic study of centrality, as the kinetic freeze-out temperature is higher and the hadronic lifetime shorter in peripheral collisions than in central collisions. ",$K_1/K^*$ enhancement as a signature of chiral symmetry restoration in heavy ion collisions " We investigate two theoretical pseudomagnon-based models for a bilayer quantum Hall system (BQHS) at total filling factor $\nu_t = 1$. We find a unifying framework which elucidates the different approximations that are made. 
We also consider the effect of an in-plane magnetic field in BQHSs at $\nu_t = 1$ by deriving an equation for the ground state energy from the underlying microscopic physics. Although this equation is derived for small in-plane fields, its predictions agree with recent experimental findings at stronger in-plane fields, for low electron densities. We also take into account finite-temperature effects by means of a renormalisation group analysis, and find that they are small at the temperatures that were investigated experimentally. ",Bilayer quantum Hall system at $\nu_t = 1$: pseudospin models and in-plane magnetic field " The effects of the Galactic bar on the velocity distribution of old disc stars in the Solar neighbourhood are investigated using high-resolution 2D test particle simulations. A detailed orbital analysis reveals that the structure of the U-V distribution in these simulations is closely related to the phase-space extent of regular and chaotic orbits. At low angular momentum and for a sufficiently strong bar, stars mainly follow chaotic orbits which may cross the corotation radius, and the U-V contours follow lines of constant Jacobi's integral except near the regions occupied by weakly populated eccentric regular orbits. These properties can naturally account for the observed outward motion of the local high asymmetric drift stars. ",Order and Chaos in the Local Disc Stellar Kinematics " Two-dimensional materials-based field-effect transistors (2DM-FETs) exhibit both ambipolar and unipolar transport types. To physically and compactly cover both cases, in our previous work we put forward a quasi-Fermi-level phase space (QFLPS) approach to model the ambipolar effect. This work aims to further improve the QFLPS model's numerical aspects so that the model can be implemented in a standard circuit simulator. To achieve this goal, we first rigorously derive an integral-free formula for the drain-source current. 
It is more amenable to computation than the integral form. Besides, it explicitly gives the correlation terms between the electron and hole components. Secondly, to work out the boundary values required by the new expressions, we develop a fast evaluation algorithm for the surface electrostatic potential based on the zero-temperature limit property of the 2DM-FET system. By calibrating the model with realistic device data of black phosphorus (BP) and monolayer molybdenum disulfide (ML-MoS2) FETs, the completed algorithm is tested against practical cases. The results show that a typical speedup over the benchmark algorithm of two orders of magnitude in time consumption can be achieved while keeping a high accuracy of 7 to 9 significant digits. ",An efficient model algorithm for two-dimensional field-effect transistors " We construct the mean thermal Sunyaev-Zel'dovich (tSZ) Comptonization y profile around Luminous Red Galaxies (LRGs) in the redshift range 0.16 < z < 0.47 from the Sloan Digital Sky Survey (SDSS) Data Release 7 (DR7) using the Planck y map. The mean central tSZ signal for the full sample is y ~ 1.8 * 10^(-7) and we detect tSZ emission out to ~30 arcmin, which is well beyond the 10 arcmin angular resolution of the y map and well beyond the virial radii of the LRGs. We compare the measured profile with predictions from the cosmo-OWLS suite of cosmological hydrodynamical simulations. The measured profile agrees well with models that include feedback from active galactic nuclei (AGN), but not with hydrodynamic models without this energetic feedback mechanism. This suggests that an additional heating mechanism, beyond supernova feedback and star formation, is required to explain the measured y profile. We also compare our results with predictions based on the halo model with a universal pressure profile (UPP) giving the y signal. 
The predicted profile is consistent with the data, but only if we account for the clustering of haloes via a two-halo term and if halo masses are estimated using the mean stellar-to-halo mass (SHM) relation of Coupon et al. (2015) or Wang et al. (2016), estimated from gravitational lensing measurements. We also discuss the impact of scatter in the SHM relation on the model predictions. ",Probing hot gas around luminous red galaxies through the Sunyaev-Zel'dovich effect " We present a semiclassical two-fluid model for an interacting Bose gas confined in an anisotropic harmonic trap and solve it in the experimentally relevant region for a spin-polarized gas of Rb-87 atoms, obtaining the temperature dependence of the internal energy and of the condensate fraction. Our results are in agreement with recent experimental observations by Ensher et al. ",Internal energy and condensate fraction of a trapped interacting Bose gas " The $ B_3 - L_2$ $ Z' $ model may explain certain features of the fermion mass spectrum as well as the $b \rightarrow s \mu^+ \mu^-$ anomalies. The $ Z' $ acquires its mass via a TeV-scale scalar field, the flavon, whose vacuum expectation value spontaneously breaks the family non-universal gauged $ U(1)_{B_3 - L_2} $ symmetry. We review the key features of the model, with an emphasis on its scalar potential and the flavon field, and use experimental data and perturbativity arguments to place bounds upon the Higgs-flavon mixing angle. Finally, we discuss flavonstrahlung as a means to discover the flavon experimentally and compute flavonstrahlung cross-sections at current and future colliders. ",Searching for the flavon at current and future colliders " A general `quantum history theory' can be characterised by the space of histories and by the space of decoherence functionals. In this note we consider the situation where the space of histories is given by the lattice of projection operators on an infinite-dimensional Hilbert space $H$. 
We study operator representations for decoherence functionals on this space of histories. We first give necessary and sufficient conditions for a decoherence functional to be representable by a trace class operator on $H \otimes H$, an infinite-dimensional analogue of the Isham-Linden-Schreckenberg representation for finite dimensions. Since this excludes many decoherence functionals of physical interest, we then identify the large and physically important class of decoherence functionals which can be represented, canonically, by bounded operators on $H \otimes H$. ",On Tracial Operator Representations of Quantum Decoherence Functionals " Consider $M$-estimation in a semiparametric model that is characterized by a Euclidean parameter of interest and an infinite-dimensional nuisance parameter. As a general-purpose approach to statistical inference, the bootstrap has found wide applications in semiparametric $M$-estimation and, because of its simplicity, provides an attractive alternative to the inference approach based on asymptotic distribution theory. The purpose of this paper is to provide theoretical justification for the use of the bootstrap as a semiparametric inferential tool. We show that, under general conditions, the bootstrap is asymptotically consistent in estimating the distribution of the $M$-estimate of the Euclidean parameter; that is, the bootstrap distribution asymptotically imitates the distribution of the $M$-estimate. We also show that the bootstrap confidence set has the asymptotically correct coverage probability. These general conclusions hold, in particular, when the nuisance parameter is not estimable at root-$n$ rate, and apply to a broad class of bootstrap methods with exchangeable bootstrap weights. This paper provides a first general theoretical study of the bootstrap in semiparametric models. 
",Bootstrap consistency for general semiparametric $M$-estimation " Despite the fact that image captioning models have been able to generate impressive descriptions for a given image, challenges remain: (1) the controllability and diversity of existing models are still far from satisfactory; (2) models sometimes may produce extremely poor-quality captions. In this paper, two novel methods are introduced to solve the problems respectively. Specifically, for the former problem, we introduce a control signal which can control the macroscopic sentence attributes, such as sentence quality, sentence length, sentence tense and number of nouns etc. With such a control signal, the controllability and diversity of existing captioning models are enhanced. For the latter problem, we innovatively propose a strategy that an image-text matching model is trained to measure the quality of sentences generated in both forward and backward directions and finally choose the better one. As a result, this strategy can effectively reduce the proportion of poorquality sentences. Our proposed methods can be easily applie on most image captioning models to improve their overall performance. Based on the Up-Down model, the experimental results show that our methods achieve BLEU- 4/CIDEr/SPICE scores of 37.5/120.3/21.5 on MSCOCO Karpathy test split with cross-entropy training, which surpass the results of other state-of-the-art methods trained by cross-entropy loss. ",Macroscopic Control of Text Generation for Image Captioning " We show that certain heterotic string amplitudes are given in terms of correlators of the twisted topological (2,0) SCFT, corresponding to the internal sector of the N=1 spacetime supersymmetric background. The genus g topological partition function $F^g$ corresponds to a term in the effective action of the form $W^{2g}$, where W is the gauge or gravitational superfield. 
We also study recursion relations related to holomorphic anomalies, showing that, contrary to the type II case, they involve correlators of anti-chiral superfields. The corresponding terms in the effective action are of the form $W^{2g}\Pi^n$, where $\Pi$ is a chiral superfield obtained by chiral projection of a general superfield. We observe that the structure of the recursion relations is that of an N=1 spacetime supersymmetry Ward identity. We also give a solution of the tree-level recursion relations and discuss orbifold examples. ",Topological Amplitudes in Heterotic Superstring Theory " We explore the imprint of the cosmological hydrogen recombination lines on the power spectrum of the cosmic microwave background (CMB). In particular, we focus our analysis on the three strongest lines of the Balmer, Paschen and Brackett series of hydrogen. We expect changes in the angular power spectrum due to these lines on the level of $0.3 \mu K$ for the H$\alpha$ line, being maximum at small angular scales ($\ell \approx 870$). The morphology of the signal is very rich. It leads to relatively narrow spectral features ($\Delta \nu / \nu \sim 10^{-1}$), with several regions in the power spectrum showing a characteristic change of sign of the effect as we probe different redshifts or different multipoles by measuring the power spectrum at different frequencies. It also has a very peculiar dependence on the multipole scale, connected with the details of the transfer function at the epoch of scattering. In order to compute the optical depths for these transitions, we have evolved numerically the populations of the levels of the hydrogen atom during recombination, treating simultaneously the evolution of helium. For the hydrogen atom, we follow each angular momentum state separately, up to the level n=10. 
Foregrounds and other frequency-dependent contaminants, such as Rayleigh scattering, may be an important limitation for these measurements, although the peculiar frequency and angular dependences of the effect which we are discussing might make it possible to separate it. Detection of this signal using future narrow-band spectral observations can give information about the details of how the cosmic recombination proceeds, and how Silk damping operates during recombination. ",The imprint of cosmological hydrogen recombination lines on the power spectrum of the CMB " One of the most celebrated achievements of modern machine learning technology is automatic classification of images. However, success is typically achieved only with major computational costs. Here we introduce TDAsweep, a machine learning tool aimed at improving the efficiency of automatic classification of images. ",TDAsweep: A Novel Dimensionality Reduction Method for Image Classification Tasks " Entanglement and the uncertainty relation are two focuses of quantum theory. We relate entanglement sharing to the entropic uncertainty relation in a $(d\times d)$-dimensional system via weak measurements with different pointers. We consider both the scenario of one-sided sequential measurements, in which the entangled pair is distributed to multiple Alices and one Bob, and that of two-sided sequential measurements, in which the entangled pair is distributed to multiple Alices and Bobs. It is found that the maximum number of observers sharing the entanglement strongly depends on the measurement scenario, the pointer states of the apparatus, and the local dimension $d$ of each subsystem, while the required minimum measurement precision to achieve entanglement sharing decreases to its asymptotic value as $d$ increases. The maximum number of observers remains unaltered even when the state is not maximally entangled but has sufficiently strong entanglement. 
",Sequential sharing of two-qudit entanglement based on the entropic uncertainty relation " We develop an automated spectral synthesis technique for the estimation of metallicities ([Fe/H]) and carbon abundances ([C/Fe]) for metal-poor stars, including carbon-enhanced metal-poor stars, for which other methods may prove insufficient. This technique, autoMOOG, is designed to operate on relatively strong features visible in even low- to medium-resolution spectra, yielding results comparable to much more telescope-intensive high-resolution studies. We validate this method by comparison with 913 stars which have existing high-resolution and low- to medium-resolution to medium-resolution spectra, and that cover a wide range of stellar parameters. We find that at low metallicities ([Fe/H] < -2.0), we successfully recover both the metallicity and carbon abundance, where possible, with an accuracy of ~ 0.20 dex. At higher metallicities, due to issues of continuum placement in spectral normalization done prior to the running of autoMOOG, a general underestimate of the overall metallicity of a star is seen, although the carbon abundance is still successfully recovered. As a result, this method is only recommended for use on samples of stars of known sufficiently low metallicity. For these low-metallicity stars, however, autoMOOG performs much more consistently and quickly than similar, existing techniques, which should allow for analyses of large samples of metal-poor stars in the near future. Steps to improve and correct the continuum placement difficulties are being pursued. ",Automated Determination of [Fe/H] and [C/Fe] from Low-Resolution Spectroscopy Transport of single-channel spinless interacting fermions (Luttinger liquid) through a barrier has been studied by numerically exact quantum Monte Carlo methods. A novel stochastic integration over the real-time paths allows for direct computation of nonequilibrium conductance and noise properties. 
We have examined the low-temperature scaling of the conductance in the crossover region between a very weak and an almost insulating barrier. ,Dynamical simulation of transport in one-dimensional quantum wires " Static spherically symmetric solutions for conformal gravity in three dimensions are found. Black holes and wormholes are included within this class. Asymptotically the black holes are spacetimes of arbitrary constant curvature, and they are conformally related to the matching of different solutions of constant curvature by means of an improper conformal transformation. The wormholes can be constructed from suitable identifications of a static universe of negative spatial curvature, and it is shown that they correspond to the conformal matching of two black hole solutions with the same mass. ",Static spherically symmetric solutions for conformal gravity in three dimensions " Global demographic and economic changes have a critical impact on the total energy consumption, which is why demographic and economic parameters have to be taken into account when making predictions about the energy consumption. This research is based on the application of a multiple linear regression model and a neural network model, in particular a multilayer perceptron, for predicting the energy consumption. Data from five Balkan countries have been considered in the analysis for the period 1995-2014. Gross domestic product, total population, and CO2 emissions were taken as predictor variables, while the energy consumption was used as the dependent variable. The analyses showed that CO2 emissions have the highest impact on the energy consumption, followed by the gross domestic product, while the population number has the lowest impact. The results from both analyses are then used for making predictions on the same data, after which the obtained values are compared with the real values. 
It was observed that the multilayer perceptron model predicts the energy consumption better than the regression model. ",Comparing Multilayer Perceptron and Multiple Regression Models for Predicting Energy Use in the Balkans " Green Traffic Engineering encompasses network design and traffic routing strategies that aim at reducing the power consumption of a backbone network. We argue that turning off line cards is the most effective approach to reach this goal. Thus, we investigate the problem of minimizing the number of active line cards in a network while simultaneously allowing a multi-commodity flow to be routed and keeping the maximum link utilization below a certain threshold. In addition to proving this problem to be NP-hard, we present an optimal ILP-based algorithm as well as a heuristic based on 2-Segment Routing. Lastly, we evaluate both approaches on real-world networks obtained from the Repetita Framework and a globally operating Internet Service Provider. The results of this evaluation indicate that our heuristic is not only close to optimal but also significantly faster than the optimal algorithm, making it viable in practice. ",Green Traffic Engineering by Line Card Minimization " We consider properties of a covariant worldvolume action for a system of N coincident Dp-branes in D=(p+2) dimensional space-time (so-called codimension-one branes). In the case of N coincident D0-branes in D=2 we then find a generalization of this action to a model which includes fermionic degrees of freedom and is invariant under target-space supersymmetry and worldline kappa-symmetry. We find that the type IIA D=2 superalgebra generating the supersymmetry transformations of the N D0-brane system acquires a non-trivial ""central extension"" due to a nonlinear contribution of U(N) adjoint scalar fields. Peculiarities of space-time symmetries of coincident Dp-branes are discussed. 
",Coincident (Super)-Dp-Branes of Codimension One We calculate the one-loop corrections to the Kaluza-Klein gauge boson excitations in the deconstructed version of 5D QED. Deconstruction provides a renormalizable UV completion of the 5D theory that makes it possible to control the cut-off dependence of 5D theories and to study a possible influence of UV physics on IR observables. In particular, we calculate the cut-off-dependent non-leading corrections that may be phenomenologically relevant for collider physics. We also discuss the structure of the operators that are relevant for the quantum corrections to the gauge boson masses in 5D and in deconstruction. ,Loop Corrections in Higher Dimensions via Deconstruction " Multi-camera 3D object detection for autonomous driving is a challenging problem that has garnered notable attention from both academia and industry. An obstacle encountered in vision-based techniques involves the precise extraction of geometry-conscious features from RGB images. Recent approaches have utilized geometric-aware image backbones pretrained on depth-relevant tasks to acquire spatial information. However, these approaches overlook the critical aspect of view transformation, resulting in inadequate performance due to the misalignment of spatial knowledge between the image backbone and view transformation. To address this issue, we propose a novel geometric-aware pretraining framework called GAPretrain. Our approach incorporates spatial and structural cues into camera networks by employing the geometric-rich modality as guidance during the pretraining phase. The transfer of modality-specific attributes across different modalities is non-trivial, but we bridge this gap by using a unified bird's-eye-view (BEV) representation and structural hints derived from LiDAR point clouds to facilitate the pretraining process. GAPretrain serves as a plug-and-play solution that can be flexibly applied to multiple state-of-the-art detectors. 
Our experiments demonstrate the effectiveness and generalization ability of the proposed method. We achieve 46.2 mAP and 55.5 NDS on the nuScenes val set using the BEVFormer method, with a gain of 2.7 and 2.1 points, respectively. We also conduct experiments on various image backbones and view transformations to validate the efficacy of our approach. Code will be released at https://github.com/OpenDriveLab/BEVPerception-Survey-Recipe. ",Geometric-aware Pretraining for Vision-centric 3D Object Detection " Pionic beta decay \pi^+ to \pi^0 e^+ \nu_e is analyzed in chiral perturbation theory with virtual photons and leptons. All electromagnetic corrections up to order e^2 p^2 are taken into account. Theoretical results are confronted with preliminary data from a PSI measurement and a value for the CKM matrix element |Vud| is given. Although the precision is presently still below the one of existing determinations of |Vud|, an analysis of pionic beta decay, based on a systematic treatment within a low-energy effective field theory, may become a useful alternative. ",Electromagnetic radiative corrections to pionic beta decay \pi^+ to \pi^0 e^+ \nu_e " We report on the formation of nitrogen-doped nanographenes containing five- and seven-membered rings by thermally induced cyclodehydrogenation on the Au(111) surface. Using scanning tunneling microscopy and supported by calculations, we investigated the structure of precursor and targets, as well as of intermediates. Scanning tunneling spectroscopy shows that the electronic properties of the target nanographenes are strongly influenced by the additional formation of non-hexagonal rings. ",On-surface synthesis of nitrogen-doped nanographenes with 5-7 membered rings " Measures of vascular tortuosity--how curved and twisted a vessel is--are associated with a variety of vascular diseases. Consequently, measurements of vessel tortuosity that are accurate and comparable across modality, resolution, and size are greatly needed. 
Yet in practice, precise and consistent measurements are problematic--mismeasurements, inability to calculate, or contradictory and inconsistent measurements occur within and across studies. Here, we present a new method of measuring vessel tortuosity that ensures improved accuracy. Our method relies on numerical integration of the Frenet-Serret equations. By reconstructing the three-dimensional vessel coordinates from tortuosity measurements, we explain how to identify and use a minimally-sufficient sampling rate based on vessel radius while avoiding errors associated with oversampling and overfitting. Our work identifies a key failing in current practices of filtering asymptotic measurements and highlights inconsistencies and redundancies between existing tortuosity metrics. We demonstrate our method by applying it to manually constructed vessel phantoms with known measures of tortuosity, and to 9,000 vessels from medical image data spanning human cerebral, coronary, and pulmonary vascular trees, and the carotid, abdominal, renal, and iliac arteries. ",Improving blood vessel tortuosity measurements via highly sampled numerical integration of the Frenet-Serret equations " An infinite square well with a discontinuous step is one of the simplest systems to exhibit non-Newtonian ray-splitting periodic orbits in the semiclassical limit. This system is analyzed using both time-independent perturbation theory (PT) and periodic-orbit theory, and the approximate formulas for the energy eigenvalues derived from these two approaches are compared. The periodic orbits of the system can be divided into classes according to how many times they reflect from the potential step. Different classes of orbits contribute to different orders of PT. The dominant term in the second-order PT correction is due to non-Newtonian orbits that reflect from the step exactly once. 
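The Frenet-Serret integration at the heart of the tortuosity method above can be illustrated with a minimal forward-Euler sketch. This is a toy reconstruction under the assumption of constant curvature and zero torsion (a circle), not the authors' highly sampled implementation; it only shows how a curve is rebuilt from its frame equations r' = T, T' = kN, N' = -kT + tB, B' = -tN.

```python
import math

def add(u, v):
    return [a + b for a, b in zip(u, v)]

def scale(c, u):
    return [c * a for a in u]

def integrate_frenet(kappa, tau, length, steps):
    """Euler-integrate the Frenet-Serret equations
       r' = T,  T' = kappa*N,  N' = -kappa*T + tau*B,  B' = -tau*N
    from a canonical initial frame; returns the curve points."""
    h = length / steps
    r = [0.0, 0.0, 0.0]
    T, N, B = [1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]
    pts = [list(r)]
    for _ in range(steps):
        r = add(r, scale(h, T))
        # Update the whole frame from the *old* frame values.
        T, N, B = (add(T, scale(h * kappa, N)),
                   add(N, add(scale(-h * kappa, T), scale(h * tau, B))),
                   add(B, scale(-h * tau, N)))
        pts.append(list(r))
    return pts

# Sanity check: kappa=1, tau=0 is a unit circle, so after arc length 2*pi
# the reconstructed curve should return close to its starting point.
pts = integrate_frenet(1.0, 0.0, 2 * math.pi, 20000)
closure_err = math.dist(pts[0], pts[-1])
```

A real pipeline would use measured, arc-length-sampled curvature/torsion profiles and a higher-order integrator; the closure error here shrinks as the step count grows, which is the oversampling trade-off the abstract alludes to.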
In the limit in which PT converges the periodic-orbit theory results agree with those of PT, but outside of this limit the periodic-orbit theory gives much more accurate results for energies above the potential step. ",Comparing periodic-orbit theory to perturbation theory in the asymmetric infinite square well " We investigate adaptive strategies to robustly and optimally control the COVID-19 pandemic via social distancing measures based on the example of Germany. Our goal is to minimize the number of fatalities over the course of two years without inducing excessive social costs. We consider a tailored model of the German COVID-19 outbreak with different parameter sets to design and validate our approach. Our analysis reveals that an open-loop optimal control policy can significantly decrease the number of fatalities when compared to simpler policies under the assumption of exact model knowledge. In a more realistic scenario with uncertain data and model mismatch, a feedback strategy that updates the policy weekly using model predictive control (MPC) leads to a reliable performance, even when applied to a validation model with deviant parameters. On top of that, we propose a robust MPC-based feedback policy using interval arithmetic that adapts the social distancing measures cautiously and safely, thus leading to a minimum number of fatalities even if measurements are inaccurate and the infection rates cannot be precisely specified by social distancing. Our theoretical findings support various recent studies by showing that 1) adaptive feedback strategies are required to reliably contain the COVID-19 outbreak, 2) well-designed policies can significantly reduce the number of fatalities compared to simpler ones while keeping the amount of social distancing measures on the same level, and 3) imposing stronger social distancing measures early on is more effective and cheaper in the long run than opening up too soon and restoring stricter measures at a later time. 
",Robust and optimal predictive control of the COVID-19 outbreak " We construct examples of Lefschetz fibrations with prescribed singular fibers. By taking differences of pairs of such fibrations with the same singular fibers, we obtain new examples of surface bundles over surfaces with non-zero signature. From these we derive new upper bounds for the minimal genus of a surface representing a given element in the second homology of a mapping class group. ","Commutators, Lefschetz fibrations and the signatures of surface bundles" " Let $K$ be a real quadratic field. We use a symbolic coding of the action of a fundamental unit on the real $2$-torus associated to $K$ to study the family of subsets $X_t$ of norm distance $\geq t$ from the origin. As an application, we prove that the inhomogeneous spectrum of $K$ contains a dense set of elements of $K$, and conclude that all isolated inhomogeneous minima lie in $K$. ",Some dynamics in real quadratic fields with applications to inhomogeneous minima " We formulate the basic postulate of pre-big bang cosmology as one of ``asymptotic past triviality'', by which we mean that the initial state is a generic perturbative solution of the tree-level low-energy effective action. Such a past-trivial ``string vacuum'' is made of an arbitrary ensemble of incoming gravitational and dilatonic waves, and is generically prone to gravitational instability, leading to the possible formation of many black holes hiding singular space-like hypersurfaces. Each such singular space-like hypersurface of gravitational collapse becomes, in the string-frame metric, the usual big-bang t=0 hypersurface, i.e. the place of birth of a baby Friedmann universe after a period of dilaton-driven inflation. Specializing to the spherically-symmetric case, we review and reinterpret previous work on the subject, and propose a simple, scale-invariant criterion for collapse/inflation in terms of asymptotic data at past null infinity. 
Those data should determine whether, when, and where collapse/inflation occurs, and, when it does, fix its characteristics, including anisotropies on the big bang hypersurface whose imprint could have survived till now. Using Bayesian probability concepts, we finally attempt to answer some fine-tuning objections recently raised against the pre-big bang scenario. ",Pre-big bang bubbles from the gravitational instability of generic string vacua " Using a large database (~ 215 000 records) of relevant articles, we empirically study the ""complex systems"" field and its claims to find universal principles applying to systems in general. The study of references shared by the papers allows us to obtain a global point of view on the structure of this highly interdisciplinary field. We show that its overall coherence does not arise from a universal theory but instead from computational techniques and fruitful adaptations of the idea of self-organization to specific systems. We also find that communication between different disciplines goes through specific ""trading zones"", i.e. sub-communities that create an interface around specific tools (a DNA microchip) or concepts (a network). ","Complex Systems Science: Dreams of Universality, Reality of Interdisciplinarity" " The paper shows how the known, exact results for the two-electron bound states can modify the ground-state phase diagram of the extended Hubbard model (EHM) for on-site attraction, intersite repulsion and arbitrary electron density. The main result is the suppression of the superconducting state in favor of the normal phase for small charge densities. ",Bound states in the phase diagram of the extended Hubbard model " Relativistic massive Lorentz electrodynamics (LED) is studied in a ``gyroscopic setup'' where the electromagnetic fields and the particle spin are the only dynamical degrees of freedom. 
A rigorous proof of the global existence and uniqueness of the dynamics is given for essentially the whole range of field strengths reasonable for a classical theory. For a class of rotation-reflection symmetric field data it is shown that the dynamics also satisfies the world-line equations for a non-moving Lorentz electron, thus furnishing rigorous solutions of the full system of nonlinear equations of LED. The previously proven soliton dynamics of the Lorentz electron is further elucidated by showing that rotation-reflection symmetric deviations from the soliton state of the renormalized particle die out exponentially fast through radiation damping if the electrostatic mass is smaller than the bare rest mass. ",Scattering and radiation damping in gyroscopic Lorentz electrodynamics " This paper is about the fractional Schr\""{o}dinger equation (FSE) expressed in terms of the quantum Riesz-Feller space fractional and the Caputo time fractional derivatives. The main focus is on the case of time-independent potential fields such as a Dirac-delta potential and a linear potential. For such potential fields, the separation-of-variables method allows the FSE to be split into a space fractional equation and a time fractional one. The results obtained in this paper contain as particular cases already known results for the FSE in terms of the quantum Riesz space fractional derivative and the standard Laplace operator. 
We prove that all tasks that can be performed by node embeddings can also be performed by structural representations and vice-versa. We also show that the concept of transductive and inductive learning is unrelated to node embeddings and graph representations, clearing another source of confusion in the literature. Finally, we introduce new practical guidelines for generating and using node embeddings, which fix significant shortcomings of the standard operating procedures used today. ",On the Equivalence between Positional Node Embeddings and Structural Graph Representations " A method which we have developed for determining corotation radii has allowed us to map in detail the radial resonant structures of barred spiral galaxies. Here we have combined this information with new determinations of the bar strength and the pitch angle of the innermost segment of the spiral arms to find relationships between these parameters of relevance to the dynamical evolution of the galaxies. We show how (1) the bar mass fraction, (2) the scaled bar angular momentum, (3) the pitch angle, and (4) the shear parameter vary along the Hubble sequence, and we also plot along the Hubble sequence (5) the scaled bar length, (6) the ratio of bar corotation radius to bar length, (7) the scaled bar pattern speed, and (8) the bar strength. It is of interest to note that the parameters (2), (5), (6), (7), and (8) all show breaks in their behaviour at type Scd. We find that bars with high shear have only small pitch angles, while bars with large pitch angles must have low shear; we also find a generally inverse trend of pitch angle with bar strength. An inference which at first seems counter-intuitive is that the most massive bars rotate most slowly but have the largest angular momenta. Among a further set of detailed results we pick out here the 2:1 ratio between the number of spiral arms and the number of corotations outside that of the bar. 
These results give a guideline to theories of disc-bar evolution. ",Spiral arm formation mechanisms: Spiral Structure in Barred galaxies. Observational constraints to spiral arm formation mechanisms " A new mechanism of nuclear excitation via two-photon electron transitions (NETP) is proposed and studied theoretically. As a generic example, detailed calculations are performed for the $E1E1$ $1s2s\,^1S_0 \rightarrow 1s^2\,^1S_0$ two-photon decay of He-like $^{225}$Ac$^{87+}$ ion with the resonant excitation of the $3/2+$ nuclear state with the energy 40.09(5) keV. The probability for such a two-photon decay via the nuclear excitation is found to be $P_{\rm NETP} = 3.5 \times 10^{-9}$ and, thus, is comparable with other mechanisms, such as nuclear excitation by electron transition and by electron capture. The possibility for the experimental observation of the proposed mechanism is thoroughly discussed. ",Nuclear excitation by two-photon electron transition " A simple model that replicates the dynamics of spiking and spiking-bursting activity of real biological neurons is proposed. The model is a two-dimensional map which contains one fast and one slow variable. The mechanisms behind generation of spikes, bursts of spikes, and restructuring of the map behavior are explained using phase portrait analysis. The dynamics of two coupled maps which model the behavior of two electrically coupled neurons is discussed. Synchronization regimes for spiking and bursting activity of these maps are studied as a function of coupling strength. It is demonstrated that the results of this model are in agreement with the synchronization of chaotic spiking-bursting behavior experimentally found in real biological neurons. ",Modeling of Spiking-Bursting Neural Behavior Using Two-Dimensional Map " This paper presents a novel control approach to dealing with object slip during robotic manipulative movements. Slip is a major cause of failure in many robotic grasping and manipulation tasks. 
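The fast-slow two-dimensional map described in the spiking-bursting abstract above belongs to the same family as the well-known Rulkov map. The sketch below iterates such a map; the parameter values (`alpha`, `mu`, `sigma`) and initial condition are chosen for illustration and are not taken from the paper.

```python
def rulkov_step(x, y, alpha=4.1, mu=0.001, sigma=-1.0):
    """One iteration of a Rulkov-type map: x is the fast (voltage-like)
    variable, y is the slow variable that modulates bursting."""
    x_next = alpha / (1.0 + x * x) + y  # fast subsystem: spike generation
    y_next = y - mu * (x - sigma)       # slow drift: turns bursts on/off
    return x_next, y_next

# Iterate from a resting-like initial condition and record the fast variable.
x, y = -1.0, -2.8
xs = []
for _ in range(10000):
    x, y = rulkov_step(x, y)
    xs.append(x)
```

Because `mu` is small, `y` changes slowly and sweeps the fast subsystem between its spiking and silent regimes, which is exactly the phase-portrait mechanism the abstract describes: the orbit stays bounded while `x` repeatedly jumps above zero (spikes) and falls below the resting branch.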
Existing works increase grip force to avoid/control slip. However, this may not be feasible when (i) the robot cannot increase the gripping force -- the maximum gripping force is already applied or (ii) increased force damages the grasped object, such as soft fruit. Moreover, the robot fixes the gripping force when it forms a stable grasp on the surface of an object, and changing the gripping force during real-time manipulation may not be an effective control policy. We propose a novel control approach to slip avoidance including a learned action-conditioned slip predictor and a constrained optimiser avoiding a predicted slip given a desired robot action. We show the effectiveness of the proposed trajectory adaptation method with a receding horizon controller in a series of real-robot test cases. Our experimental results show that our proposed data-driven predictive controller can control slip for objects unseen in training. ",Proactive slip control by learned slip model and trajectory adaptation A large fraction of papers in the climate literature includes erroneous uses of significance tests. A Bayesian analysis is presented to highlight the meaning of significance tests and why typical misuse occurs. It is concluded that a significance test very rarely provides useful quantitative information. The significance statistic is not a quantitative measure of how confident we can be of the 'reality' of a given result. ,Significance Tests in Climate Science " The increase in the number of Internet users and the strong interaction brought by Web 2.0 have made opinion mining an important task in the area of natural language processing. Although several methods are capable of performing this task, few use multi-label classification, where there is a group of true labels for each example. 
This type of classification is useful for situations where the opinions are analyzed from the perspective of the reader, since each person can have different interpretations and opinions on the same subject. This paper discusses the efficiency of problem transformation methods combined with different classification algorithms for the task of multi-label classification of reactions in news texts. To do that, extensive tests were carried out on two news corpora written in Brazilian Portuguese annotated with reactions. A new corpus called BFRC-PT is presented. In the tests performed, the highest number of correct predictions was obtained with the Classifier Chains method combined with the Random Forest algorithm. When considering the class distribution, the best results were obtained with the Binary Relevance method combined with the LSTM and Random Forest algorithms. ",Multi-label Classification of User Reactions in Online News " One fundamental goal of the newly born gravitational wave astronomy is discovering the origin of the observed binary black hole mergers. Towards this end, identifying features in the growing wealth of data may help in distinguishing different formation pathways. While large uncertainties still affect the binary formation models, spin-mass relations remain characteristic features of specific classes of channels. By focusing on the effective inspiral spin $\chi_\text{eff}$, the best reconstructed spin-related merger parameter, we show that current GWTC-3 data support the hypothesis that a fraction of events may display mass-spin correlations similar to those expected from dynamical formation channels of either astrophysical or primordial nature. We quantify the Bayesian evidence in favour of those models, which are substantially preferred when compared to the Gaussian phenomenological model adopted to describe the distribution of $\chi_\text{eff}$ in the recent LIGO/Virgo/KAGRA population analyses. 
",Searching for mass-spin correlations in the population of gravitational-wave events: the GWTC-3 case study " The chain-structured long short-term memory (LSTM) has been shown to be effective in a wide range of problems such as speech recognition and machine translation. In this paper, we propose to extend it to tree structures, in which a memory cell can reflect the history memories of multiple child cells or multiple descendant cells in a recursive process. We call the model S-LSTM, which provides a principled way of considering long-distance interaction over hierarchies, e.g., language or image parse structures. We leverage the models for semantic composition to understand the meaning of text, a fundamental problem in natural language understanding, and show that it outperforms a state-of-the-art recursive model by replacing its composition layers with the S-LSTM memory blocks. We also show that utilizing the given structures is helpful in achieving a performance better than that without considering the structures. ",Long Short-Term Memory Over Tree Structures In this letter we describe an approach to current algebra based on the Path Integral formalism. We use this method for abelian and non-abelian quantum field theories in 1+1 and 2+1 dimensions and the correct expressions are obtained. Our results show that the current algebras are independent of the regularization. ,Current Algebra in the Path Integral framework " This investigation deals with the analysis of stagnation point heat transfer and corresponding flow features of hydromagnetic viscous incompressible fluid over a vertical shrinking sheet. The considered sheet is assumed to be permeable and subject to the addition of a stagnation point to control the generated vorticity in the boundary layer. The sheet is placed on the right side of the fluid-saturated porous medium, which has a permeability of a specified form. 
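The tree-structured composition in the S-LSTM abstract above can be caricatured with scalar states and fixed weights. In the actual model the gates are learned, vector-valued functions of the children's states, so everything below (the weights `w`, `b`, the scalar dimension, the leaf embedding) is illustrative only; the point is how a parent memory cell mixes the memory cells of its children recursively over a parse tree.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def compose(left, right, w=0.5, b=0.1):
    """Combine two child (h, c) states into a parent state, S-LSTM style:
    gates are computed from the children's hidden states, and the parent
    memory cell mixes both child memory cells (scalar states for brevity)."""
    (hl, cl), (hr, cr) = left, right
    s = hl + hr
    i = sigmoid(w * s + b)           # input gate
    fl = sigmoid(w * hl + b)         # forget gate for the left child cell
    fr = sigmoid(w * hr + b)         # forget gate for the right child cell
    o = sigmoid(w * s + b)           # output gate
    g = math.tanh(w * s + b)         # candidate update
    c = fl * cl + fr * cr + i * g    # parent memory reflects both children
    h = o * math.tanh(c)
    return (h, c)

def encode(tree):
    """Recursively fold a binary parse tree of leaf values into one state."""
    if isinstance(tree, tuple):
        return compose(encode(tree[0]), encode(tree[1]))
    return (math.tanh(tree), 0.0)    # leaf: embed the raw value

# A toy binary "parse tree" of numeric leaves.
h, c = encode(((0.2, -0.4), (1.0, (0.3, -0.1))))
```

The recursion is what distinguishes this from a chain LSTM: information from all descendants reaches the root through the memory cells rather than through a left-to-right sequence.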
Nonlinear convection waves in the flow field arise from the envisaged nonlinear relation between density and temperature. The equations governing the nonlinear convection boundary layer flow are modeled and simplified using similarity transformations. The reduced equations are solved numerically by employing the implicit finite difference scheme known as the Keller-box method. The influence of the associated parameters of the problem on velocity and temperature distributions, skin friction and rate of heat transfer is presented through graphs and tables, and qualitatively discussed. The study reveals that the interaction among the magnetic field, porous medium permeability and nonlinear convection parameters substantially enhances the solution range and thus confirms their role in sustaining the boundary layer flow. ",Nonlinear convection stagnation point heat transfer and MHD fluid flow in porous medium towards a permeable shrinking sheet " An important problem is to determine under which circumstances a metric on a conformally compact manifold is conformal to a Poincar\'e--Einstein metric. Such conformal rescalings are in general obstructed by conformal invariants of the boundary hypersurface embedding, the first of which is the trace-free second fundamental form and then, at the next order, the trace-free Fialkow tensor. We show that these tensors are the lowest order examples in a sequence of conformally invariant higher fundamental forms determined by the data of a conformal hypersurface embedding. We give a construction of these canonical extrinsic curvatures. Our main result is that the vanishing of these fundamental forms is a necessary and sufficient condition for a conformally compact metric to be conformally related to an asymptotically Poincar\'e--Einstein metric. More generally, these higher fundamental forms are basic to the study of conformal hypersurface invariants. 
Because Einstein metrics necessarily have constant scalar curvature, our method employs asymptotic solutions of the singular Yamabe problem to select an asymptotically distinguished conformally compact metric. Our approach relies on conformal tractor calculus as this is key for an extension of the general theory of conformal hypersurface embeddings that we further develop here. In particular, we give in full detail tractor analogs of the classical Gauss Formula and Gauss Theorem for Riemannian hypersurface embeddings. ",Conformal Fundamental Forms and the Asymptotically Poincar\'e--Einstein Condition " We have explored the prospect of probing a neutral scalar ($H$) produced in association with one $b$-quark and decaying either invisibly or into a pair of $b$-quarks at the LHC with centre of mass energy $\sqrt s = 14$ TeV. In this regard, we adopt an effective theory approach to parameterize a $Hb\bar bg$ vertex arising from a dimension six operator that encompasses the effect of some new physics setting in at a high scale. We concentrate solely on the five-flavor scheme to ascertain the sensitivity of the 14 TeV LHC in probing such an effective coupling as a function of the scalar mass at the highest possible projected luminosity, $3000~{\rm fb}^{-1}$. Through our multivariate analysis using machine learning algorithm we show that staying within the perturbative limit of the Wilson coefficient of the effective interaction, evidence with statistical significance of $3\sigma$ can be obtained in two different signal regions for $m_H\lesssim 2$ TeV and the scale of new physics $\Lambda = 3$ TeV. 
",Probing heavy scalars with an effective $Hb\bar bg$ coupling at the LHC " By application of the duality transformation, which implies an interchange of the active and passive electric parts of the Riemann curvature (equivalent to an interchange of the Ricci and Einstein tensors), it is shown that the global monopole solution in the Kaluza-Klein spacetime is dual to the corresponding vacuum solution. Further, we also obtain a solution dual to flat space which would in general describe a massive global monopole in 4-dimensional Euclidean space and would have a massless limit analogous to the 4-dimensional dual-flat solution. ",Global monopole as dual-vacuum solution in Kaluza-Klein spacetime " The impinging solar wind and its magnetic field perturb the Earth's magnetosphere and create magnetic storms and substorms. The Earth's magnetosphere expands (contracts) during periods of southward (northward) IMF. It is shown that these magnetospheric expansions and contractions account for poorly understood aspects of the magnetic storm-substorm relationship, the bifurcation of the magnetotail and the appearance of theta aurora. A quantitative theory and calculations in agreement with the suggested model of solar wind/IMF-magnetosphere coupling are presented. Pre-noon and post-noon dents on the magnetopause are expected to appear during a long period of strong northward IMF. The contracting and expanding magnetosphere accumulates solar wind plasma and its magnetic field. The accumulated energy is delivered into the polar regions and creates auroral activity intensification. ",Magnetic Storm-substorm Relationship and Some Associated Issues " The ordinary generating function of the number of complete subgraphs of $G$ is called a clique polynomial of $G$ and is denoted by $C(G,x)$. A real root of $C(G,x)$ is called a clique root of the graph $G$. Hajiabolhasan and Mehrabadi showed that the clique polynomial always has a real root in the interval $[-1,0)$. 
Moreover, they showed that the class of triangle-free graphs has only clique roots. Here, we generalize their result by showing that the class of $K_4$-free chordal graphs has also only clique roots. Moreover, we show that this class has always a clique root $-1$. We finally conclude the paper with several important questions and conjectures. ",Clique Polynomials and Chordal Graphs " A polydisperse granular gas made of inelastic and rough hard disks is considered. Focus is laid on the kinetic-theory derivation of the partial energy production rates and the total cooling rate as functions of the partial densities and temperatures (both translational and rotational) and of the parameters of the mixture (masses, diameters, moments of inertia, and mutual coefficients of normal and tangential restitution). The results are applied to the homogeneous cooling state of the system and the associated nonequipartition of energy among the different components and degrees of freedom. It is found that disks typically present a stronger rotational-translational nonequipartition but a weaker component-component nonequipartition than spheres. A noteworthy ""mimicry"" effect is unveiled, according to which a polydisperse gas of disks having common values of the coefficient of restitution and of the reduced moment of inertia can be made indistinguishable from a monodisperse gas in what concerns the degree of rotational-translational energy nonequipartition. This effect requires the mass of a disk of component $i$ to be approximately proportional to $2\sigma_i+\langle\sigma\rangle$, where $\sigma_i$ is the diameter of the disk and $\langle\sigma\rangle$ is the mean diameter. ","Interplay between polydispersity, inelasticity, and roughness in the freely cooling regime of hard-disk granular gases" " We consider robotic pick-and-place of partially visible, novel objects, where goal placements are non-trivial, e.g., tightly packed into a bin. 
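The clique-root claims in the clique polynomial abstract above are easy to check by brute force on a small example. The sketch below counts complete subgraphs directly (exponential, so suitable for tiny graphs only) and verifies on one K4-free chordal graph that -1 is a clique root, consistent with the stated result; the graph chosen is an assumption for illustration, not an example from the paper.

```python
from itertools import combinations

def clique_polynomial(n, edges):
    """Coefficients [c_0, c_1, ...] of C(G, x), where c_k counts the
    complete subgraphs on k vertices (c_0 = 1 for the empty subgraph)."""
    adj = {(min(e), max(e)) for e in edges}
    coeffs = [1]
    for k in range(1, n + 1):
        c = sum(1 for vs in combinations(range(n), k)
                if all((min(p), max(p)) in adj for p in combinations(vs, 2)))
        if c == 0:
            break
        coeffs.append(c)
    return coeffs

def horner(coeffs, x):
    """Evaluate the polynomial with Horner's rule."""
    val = 0.0
    for c in reversed(coeffs):
        val = val * x + c
    return val

# A K4-free chordal graph: triangle 0-1-2 with a pendant vertex 3.
# Cliques: 1 empty, 4 vertices, 4 edges, 1 triangle, so C(G,x) = 1 + 4x + 4x^2 + x^3.
coeffs = clique_polynomial(4, [(0, 1), (0, 2), (1, 2), (0, 3)])
root_at_minus_one = horner(coeffs, -1.0)
```

Here `root_at_minus_one` evaluates to 0, i.e. C(G, -1) = 1 - 4 + 4 - 1 = 0, so -1 is indeed a clique root of this K4-free chordal graph.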
One approach is (a) to use object instance segmentation and shape completion to model the objects and (b) to use a regrasp planner to choose grasps and placements that move the models to their goals. However, it is critical for the planner to account for uncertainty in the perceived models, as object geometries in unobserved areas are just guesses. We account for perceptual uncertainty by incorporating it into the regrasp planner's cost function. We compare seven different costs. One of these, which uses neural networks to estimate the probability of grasp and place stability, consistently outperforms uncertainty-unaware costs and evaluates faster than Monte Carlo sampling. On a real robot, the proposed cost results in successfully packing objects tightly into a bin 7.8% more often than the commonly used minimum-number-of-grasps cost. ",Robotic Pick-and-Place With Uncertain Object Instance Segmentation and Shape Completion " We study the attractive fermionic Hubbard model on a honeycomb lattice using determinantal quantum Monte Carlo simulations. By increasing the interaction strength U (relative to the hopping parameter t) at half-filling and zero temperature, the system undergoes a quantum phase transition at 5.0 < U_c/t < 5.1 from a semi-metal to a phase displaying simultaneously superfluid behavior and density order. Doping away from half-filling, and increasing the interaction strength at finite but low temperature T, the system always appears to be a superfluid exhibiting a crossover between a BCS and a molecular regime. These different regimes are analyzed by studying the spectral function. The formation of pairs and the emergence of phase coherence throughout the sample are studied as U is increased and T is lowered. 
",Attractive Hubbard Model on a Honeycomb Lattice " We study second harmonic generation (SHG) in a suspension of small spherical particles confined within a slab, assuming an undepleted pump and applying (i) the single scattering approximation and (ii) the diffusion approximation. In case (i), the angular diagram, the differential and total cross sections of the SHG process, as well as the average cosine of the SH scattering angle are calculated. In case (ii), the average SH intensity is found to show no explicit dependence on the linear scattering properties of the suspension. The average intensity of the SH wave scales as I_0 L / \Lambda_2 in both cases (i) and (ii), where I_0 is the intensity of the incident wave, L is the slab thickness, and \Lambda_2 is an intensity-dependent ""SH scattering"" length. ",Second harmonic generation in suspensions of spherical particles " The sensitivity of anomalous transport in crowded media to the form of the inter-particle interactions is investigated through computer simulations. We extend the highly simplified Lorentz model towards realistic natural systems by modeling the interactions between the tracer and the obstacles with a smooth potential. We find that the anomalous transport at the critical point happens to be governed by the same universal exponent as for hard exclusion interactions, although the mechanism by which narrow channels are probed is rather different. The scaling behavior of simulations close to the critical point confirms this exponent. Our result indicates that the simple Lorentz model may be applicable to describing the fundamental properties of long-range transport in real crowded environments. ",Anomalous transport in the soft-sphere Lorentz model " We study the possibility of detecting Majorana neutrinos at $e^- \gamma$ colliders for different center-of-mass energies. 
We study the $W^- W^- l_j^{+}(l_j^+\equiv e^+ ,\mu^+ ,\tau^+)$ final state, which is, due to lepton number violation, a clear signature for an intermediate Majorana neutrino contribution. Such a signal (the final lepton has the opposite charge to the initial lepton) is not possible if the heavy neutrinos are Dirac particles. In our calculation we use the helicity formalism to obtain analytic expressions for the amplitude, and we consider that the intermediate neutrinos can be either on shell or off shell. Finally, we present our results for the total cross-section and for the angular distribution of the final lepton. We also include a discussion of the expected number of events as a function of the input parameters. ",Signatures for Majorana neutrinos in $e^- \gamma$ collider " The effect of viscosity and of converging flows on the formation of blobs in the slow solar wind is analysed by means of resistive MHD simulations. The regions above coronal streamers where blobs are formed (Sheeley et al., 1997) are simulated using a model previously proposed by Einaudi et al. (1999). The result of our investigation is twofold. First, we demonstrate a new mechanism for enhanced momentum transfer between a forming blob and the fast solar wind surrounding it. The effect is due to the longer range of the electric field generated by the tearing instability forming the blob. The electric field reaches into the fast solar wind and interacts with it, causing a viscous drag that is global in nature rather than local across fluid layers, as is the case in normal uncharged fluids (like water). Second, the presence of a magnetic cusp at the tip of a coronal helmet streamer causes a convergence of the flows on the two sides of the streamer and a direct push of the forming island by the fast solar wind, resulting in a more efficient momentum exchange. 
",Blob formation and acceleration in the solar wind: role of converging flows and viscosity " Multi-modality image fusion aims to combine different modalities to produce fused images that retain the complementary features of each modality, such as functional highlights and texture details. To leverage strong generative priors and address challenges of GAN-based generative methods such as unstable training and lack of interpretability, we propose a novel fusion algorithm based on the denoising diffusion probabilistic model (DDPM). The fusion task is formulated as a conditional generation problem under the DDPM sampling framework, which is further divided into an unconditional generation subproblem and a maximum likelihood subproblem. The latter is modeled in a hierarchical Bayesian manner with latent variables and inferred by the expectation-maximization (EM) algorithm. By integrating the inference solution into the diffusion sampling iteration, our method can generate high-quality fused images with natural image generative priors and cross-modality information from source images. Note that all we require is an unconditional pre-trained generative model, and no fine-tuning is needed. Our extensive experiments indicate that our approach yields promising fusion results in infrared-visible image fusion and medical image fusion. The code is available at \url{https://github.com/Zhaozixiang1228/MMIF-DDFM}. ",DDFM: Denoising Diffusion Model for Multi-Modality Image Fusion " Hecke operators relate characters of rational conformal field theories (RCFTs) with different central charges, and extend the previously studied Galois symmetry of modular representations and fusion algebras. We show that the conductor $N$ of an RCFT and the quadratic residues mod $N$ play an important role in the computation and classification of Galois permutations. 
We establish a field correspondence between different theories through the picture of effective central charge, which combines Galois inner automorphisms and the structure of simple currents. We then make a first attempt to extend Hecke operators to the full data of modular tensor categories. The Galois symmetry encountered in the modular data appears in the fusion and the braiding matrices as well, and yields isomorphic structures in theories related by Hecke operators. ",Galois Symmetry Induced by Hecke Relations in Rational Conformal Field Theory and Associated Modular Tensor Categories " We have observed the phenomenon of stochastic resonance on the Brillouin propagation modes of a dissipative optical lattice. Such a mode has been excited by applying a moving potential modulation with phase velocity equal to the velocity of the mode. Its amplitude has been characterized by the center-of-mass (CM) velocity of the atomic cloud. At Brillouin resonance, we studied the CM velocity as a function of the optical pumping rate at a given depth of the potential wells. We have observed a resonant dependence of the CM velocity on the optical pumping rate, corresponding to the noise strength. This corresponds to the experimental observation of stochastic resonance in a periodic potential in the low-damping regime. ",Stochastic resonance in periodic potentials: realization in a dissipative optical lattice " Classical physical modelling with associated numerical simulation (model-based) and prognostic methods based on the analysis of large amounts of data (data-driven) are the two most common methods used for the mapping of complex physical processes. In recent years, the efficient combination of these approaches has become increasingly important. Continuum mechanics at its core consists of conservation equations that -- in addition to the always necessary specification of the process conditions -- can be supplemented by phenomenological material models. 
The latter are an idealized representation of the specific material behavior and can be determined experimentally, empirically, and from a wealth of expert knowledge. The more complex the material, the more difficult the calibration is. This situation forms the starting point for this work's hybrid data-driven and model-based approach for mapping a complex physical process in continuum mechanics. Specifically, we use data generated from a classical physical model by the MESHFREE software to train a Principal Component Analysis-based neural network (PCA-NN) for the task of identifying the material model parameters. The obtained results highlight the potential of deep-learning-based hybrid models for determining parameters, which are key to characterizing naturally occurring materials, and their use in industrial applications (e.g. the interaction of vehicles with sand). ",Parameter Identification by Deep Learning of a Material Model for Granular Media " This chapter aims at providing the most complete review of both the emerging concepts and the latest observational results regarding the angular momentum evolution of young low-mass stars and brown dwarfs. In the time since Protostars & Planets V, there have been major developments in the availability of rotation period measurements at multiple ages and in different star-forming environments that are essential for testing theory. In parallel, substantial theoretical developments have been carried out in the last few years, including the physics of the star-disk interaction, numerical simulations of stellar winds, and the investigation of angular momentum transport processes in stellar interiors. This chapter reviews both the recent observational and theoretical advances that prompted the development of renewed angular momentum evolution models for cool stars and brown dwarfs. 
While the main observational trends of the rotational history of low-mass objects seem to be accounted for by these new models, a number of critical open issues remain that are outlined in this review. ",Angular momentum evolution of young low-mass stars and brown dwarfs: observations and theory " We consider the approximation of the inverse of the finite element stiffness matrix in the data sparse $\mathcal{H}$-matrix format. For a large class of shape regular but possibly non-uniform meshes including graded meshes, we prove that the inverse of the stiffness matrix can be approximated in the $\mathcal{H}$-matrix format at an exponential rate in the block rank. Since the storage complexity of the hierarchical matrix is logarithmic-linear and only grows linearly in the block rank, we obtain an efficient approximation that can be used, e.g., as an approximate direct solver or preconditioner for iterative solvers. ",Approximating inverse FEM matrices on non-uniform meshes with $\mathcal{H}$-matrices " We describe a program to identify optical counterparts to radio sources from the VLA FIRST survey using the Cambridge APM scans of the POSS-I plates. We use radio observations covering 4150 square degrees of the north Galactic cap to a 20 cm flux density threshold of 1.0 mJy; the 382,892 sources detected all have positional uncertainties of <1"" (radius of 90% confidence). Our description of the APM catalog, derived from the 148 POSS-I O and E plates covering this region, includes an assessment of its astrometric and photometric accuracy, a photometric recalibration using the Minnesota APS catalog, a discussion of the classification algorithm, and quantitative tests of the catalog's reliability and completeness. We go on to show how the use of FIRST sources as astrometric standards allows us to improve the absolute astrometry of the POSS plates by nearly an order of magnitude to ~0.15"" rms. 
Matching the radio and optical catalogs yields counterparts for over 70,000 radio sources; we include detailed discussions of the reliability and completeness of these identifications as a function of optical and radio morphology, optical magnitude and color, and radio flux density. An analysis of the problem of radio sources with complex morphologies (e.g., double-lobed radio galaxies) is included. We conclude with a brief discussion of the source classes represented among the radio sources with identified counterparts. ","Optical Counterparts for 70,000 Radio Sources: APM Identifications for the FIRST Radio Survey" " The U(1) gauge field is usually induced from the gauge principle, that is, the extension of the global U(1) phase transformation of the matter field. However, the phase itself is realized only in quantum theory. In this paper we introduce the U(1) gauge field and gauge coupling from the gauge principle classically. The gauge symmetry is spontaneously broken from the outset. The Higgs mechanism occurs and we obtain the London equation. The hydrodynamical interpretation of the classical field we utilize is given, and the relation to superconductivity is discussed. ",A Note on Gauge Principle and Spontaneous Symmetry Breaking in Classical Particle Mechanics " To make evidence-based recommendations to decision-makers, researchers conducting systematic reviews and meta-analyses must navigate a garden of forking paths: a series of analytical decision-points, each of which has the potential to influence findings. To identify challenges and opportunities related to designing systems to help researchers manage uncertainty around which of multiple analyses is best, we interviewed 11 professional researchers who conduct research synthesis to inform decision-making within three organizations. We conducted a qualitative analysis identifying 480 analytical decisions made by researchers throughout the scientific process. 
We present descriptions of current practices in applied research synthesis and corresponding design challenges: making it more feasible for researchers to try and compare analyses, shifting researchers' attention from rationales for decisions to impacts on results, and supporting communication techniques that acknowledge decision-makers' aversions to uncertainty. We identify opportunities to design systems which help researchers explore, reason about, and communicate uncertainty in decision-making about possible analyses in research synthesis. ",Decision-Making Under Uncertainty in Research Synthesis: Designing for the Garden of Forking Paths " We have performed individual-based lattice simulations of SIR and SEIR dynamics to investigate both the short and long-term dynamics of childhood epidemics. In our model, infection takes place through a combination of local and long-range contacts, in practice generating a dynamic small-world network. Sustained oscillations emerge with a period much larger than the duration of infection. We found that the network topology has a strong impact on the amplitude of oscillations and on the level of persistence. Diseases do not spread very effectively through local contacts. This can be seen by measuring an {\em effective} transmission rate $\beta_{\mbox {\scriptsize eff}}$ as well as the basic reproductive rate $R_0$. These quantities are lower in the small-world network than in a homogeneously mixed population, whereas the average age at infection is higher. ",Short and Long-Term Dynamics of Childhood Diseases on Dynamic Small-World Networks " In this note, we prove the generic Kobayashi volume measure hyperbolicity of singular directed varieties $(X, V)$, as soon as the canonical sheaf $\mathcal{K}_V$ of $V$ is big in the sense of Demailly. 
",Non-Degeneracy of Kobayashi Volume Measures for Singular Directed Varieties " The $t$-channel exchange of two gluons in a color singlet state represents the lowest order approximation to the Pomeron. This exchange mechanism is thought to also explain the formation of rapidity gaps in dijet events at the Tevatron. At the perturbative level this requires suppressed gluon emission in the rapidity interval between widely separated jets, analogous to color coherence effects in $t$-channel photon exchange. By calculating the imaginary part of the two gluon, color singlet exchange amplitude we show how this pattern does emerge for gluon emission at small transverse momenta. At large $p_T$ the radiation pattern characteristic of color octet single gluon exchange is reproduced. ",Evidence for a Hard Pomeron in Perturbative QCD " A widely accepted definition of intelligence in the context of Artificial Intelligence (AI) still eludes us. Due to our exceedingly rapid development of AI paradigms, architectures, and tools, the prospect of naturally arising AI consciousness seems more likely than ever. In this paper, we claim that all current intelligence tests are insufficient to point to the existence or lack of intelligence \textbf{as humans intuitively perceive it}. We draw from ideas in the philosophy of science, psychology, and other areas of research to provide a clearer definition of the problems of artificial intelligence, self-awareness, and agency. We furthermore propose a new heuristic approach to test for artificial self-awareness and outline a possible implementation. Finally, we discuss some of the questions that arise from this new heuristic, be they philosophical or implementation-oriented. ",Suffering Toasters -- A New Self-Awareness Test for AI " We present a one-parameter family of constant solutions of the reflection equation and define a family of quantum complex Grassmannians endowed with a transitive action of the quantum unitary group. 
By computing the radial part of a suitable Casimir operator, we identify the zonal spherical functions (i.e. infinitesimally bi-invariant matrix coefficients of finite-dimensional irreducible representations) as multivariable Askey-Wilson polynomials containing two continuous and two discrete parameters. ",Multivariable Askey-Wilson Polynomials and Quantum Complex Grassmannians " In classical random matrix theory the Gaussian and chiral Gaussian random matrix models with a source are realized as shifted mean Gaussian, and chiral Gaussian, random matrices with real $(\beta = 1)$, complex $(\beta = 2)$ and real quaternion $(\beta = 4)$ elements. We use the Dyson Brownian motion model to give a meaning for general $\beta > 0$. In the Gaussian case a further construction valid for $\beta > 0$ is given, as the eigenvalue PDF of a recursively defined random matrix ensemble. In the case of real or complex elements, a combinatorial argument is used to compute the averaged characteristic polynomial. The resulting functional forms are shown to be special cases of duality formulas due to Desrosiers. New derivations of the general case of Desrosiers' dualities are given. A soft edge scaling limit of the averaged characteristic polynomial is identified, and an explicit evaluation in terms of so-called incomplete Airy functions is obtained. ",The averaged characteristic polynomial for the Gaussian and chiral Gaussian ensembles with a source " Humans excel in complex long-horizon soft body manipulation tasks via flexible tool use: bread baking requires a knife to slice the dough and a rolling pin to flatten it. Often regarded as a hallmark of human cognition, tool use in autonomous robots remains limited due to challenges in understanding tool-object interactions. Here we develop an intelligent robotic system, RoboCook, which perceives, models, and manipulates elasto-plastic objects with various tools. 
RoboCook uses point cloud scene representations, models tool-object interactions with Graph Neural Networks (GNNs), and combines tool classification with self-supervised policy learning to devise manipulation plans. We demonstrate that from just 20 minutes of real-world interaction data per tool, a general-purpose robot arm can learn complex long-horizon soft object manipulation tasks, such as making dumplings and alphabet letter cookies. Extensive evaluations show that RoboCook substantially outperforms state-of-the-art approaches, exhibits robustness against severe external disturbances, and demonstrates adaptability to different materials. ",RoboCook: Long-Horizon Elasto-Plastic Object Manipulation with Diverse Tools " This investigation completely classifies the spatial chaos problem in plane edge coloring (Wang tiles) with two symbols. For a set of Wang tiles $\mathcal{B}$, spatial chaos occurs when the spatial entropy $h(\mathcal{B})$ is positive. $\mathcal{B}$ is called a minimal cycle generator if $\mathcal{P}(\mathcal{B})\neq\emptyset$ and $\mathcal{P}(\mathcal{B}')=\emptyset$ whenever $\mathcal{B}'\subsetneqq \mathcal{B}$, where $\mathcal{P}(\mathcal{B})$ is the set of all periodic patterns on $\mathbb{Z}^{2}$ generated by $\mathcal{B}$. Given a set of Wang tiles $\mathcal{B}$, write $\mathcal{B}=C_{1}\cup C_{2} \cup\cdots \cup C_{k} \cup N$, where $C_{j}$, $1\leq j\leq k$, are minimal cycle generators and $\mathcal{B}$ contains no minimal cycle generator except those contained in $C_{1}\cup C_{2} \cup\cdots \cup C_{k}$. Then, the positivity of spatial entropy $h(\mathcal{B})$ is completely determined by $C_{1}\cup C_{2} \cup\cdots \cup C_{k}$. Furthermore, there are 39 equivalent classes of marginal positive-entropy (MPE) sets of Wang tiles and 18 equivalent classes of saturated zero-entropy (SZE) sets of Wang tiles. 
For a set of Wang tiles $\mathcal{B}$, $h(\mathcal{B})$ is positive if and only if $\mathcal{B}$ contains an MPE set, and $h(\mathcal{B})$ is zero if and only if $\mathcal{B}$ is a subset of an SZE set. ",Spatial chaos of Wang tiles with two symbols " Gravitational wave observations indicate the existence of merging black holes (BHs) with high spin ($a\gtrsim0.3$), whose formation pathways are still an open question. A possible way to form those binaries is through the tidal spin-up of a Wolf-Rayet (WR) star by its BH companion. In this work, we investigate this scenario by directly calculating the tidal excitation of oscillation modes in WR star models, determining the tidal spin-up rate, and integrating the coupled spin-orbit evolution for WR-BH binaries. We find that, for short-period orbits and massive WR stars, the tidal interaction is mostly contributed by standing gravity modes, in contrast to Zahn's model of traveling waves which is frequently assumed in the literature. The standing modes are less efficiently damped than traveling waves, meaning that prior estimates of tidal spin-up may be overestimated. We show that tidal synchronization is rarely reached in WR-BH binaries, and the resulting BH spins have $a \lesssim 0.4$ for all but the shortest-period ($P_{\rm orb} \! \lesssim 0.5 \, {\rm d}$) binaries. Tidal spin-up in lower-mass systems is more efficient, providing an anticorrelation between the mass and spin of the BHs, which could be tested in future gravitational wave data. Nonlinear damping processes are poorly understood but may allow for more efficient tidal spin-up. We also discuss a new class of gravito-thermal modes that appear in our calculations. ",Tidal Spin-up of Black Hole Progenitor Stars " Emergent order resulting from spontaneous symmetry breakings has been a central topic in statistical physics. Active matter systems composed of nonequilibrium elements exhibit a diverse range of fascinating phenomena beyond equilibrium physics. 
One striking example is the emergent long-range orientational order in two dimensions, which is prohibited in equilibrium systems. The existence of long-range order in active matter systems was first predicted by a numerical model and then proven analytically by dynamic renormalization group analysis. Experimental evidence for long-range order with giant number fluctuations has been provided in some experimental systems, including microswimmers such as swimming bacteria and electrokinetic Janus particles. In this review, we provide a pedagogical introduction to the theoretical descriptions of long-range order in the collective motion of active matter systems and an overview of the experimental efforts in the two prototypical microswimmer experimental systems. We also offer critical assessments of how and when such long-range order can be achieved in experimental systems. By comparing numerical, theoretical, and experimental results, we discuss the future challenges in active matter physics. ",Deciphering long-range order in active matter: Insights from swimming bacteria in quasi-2D and electrokinetic Janus particles " In this article we establish central limit theorems for multilevel Polyak-Ruppert averaged stochastic approximation schemes. We work under very mild technical assumptions and consider the slow regime in which typical errors decay like $N^{-\delta}$ with $\delta\in(0,\frac 12)$ and the critical regime in which errors decay at order $N^{-1/2}\sqrt{\log N}$ in the runtime $N$ of the algorithm. ",General multilevel adaptations for stochastic approximation algorithms " Surface code error correction offers a highly promising pathway to achieve scalable fault-tolerant quantum computing. When operated as stabilizer codes, surface code computations consist of a syndrome decoding step where measured stabilizer operators are used to determine appropriate corrections for errors in physical qubits. 
Decoding algorithms have undergone substantial development, with recent work incorporating machine learning (ML) techniques. Despite promising initial results, the ML-based syndrome decoders are still limited to small-scale demonstrations with low latency and are incapable of handling surface codes with boundary conditions and various shapes needed for lattice surgery and braiding. Here, we report the development of an artificial neural network (ANN) based scalable and fast syndrome decoder capable of decoding surface codes of arbitrary shape and size with data qubits suffering from the depolarizing error model. Based on rigorous training over 50 million random quantum error instances, our ANN decoder is shown to work with code distances exceeding 1000 (more than 4 million physical qubits), which is the largest ML-based decoder demonstration to date. The established ANN decoder demonstrates an execution time in principle independent of code distance, implying that its implementation on dedicated hardware could potentially offer surface code decoding times of O($\mu$sec), commensurate with the experimentally realisable qubit coherence times. With the anticipated scale-up of quantum processors within the next decade, their augmentation with a fast and scalable syndrome decoder such as the one developed in our work is expected to play a decisive role towards the experimental implementation of fault-tolerant quantum information processing. ",A scalable and fast artificial neural network syndrome decoder for surface codes " A narrow state, which we label DsJ(2458)+, with mass 2458.0 +/- 1.0(stat.) +/- 1.0(syst.) MeV/c^2, is observed in the inclusive Ds+ pi0 gamma mass distribution in 91 fb-1 of e^+e^- annihilation data recorded by the BABAR detector at the PEP-II asymmetric-energy e^+e^- storage ring. The observed width is consistent with the experimental resolution. The data favor decay through Ds*(2112)+ pi0 rather than through DsJ*(2317)+ gamma. 
An analysis of Ds+ pi0 data accounting for the influence of the DsJ(2458)+ produces a DsJ*(2317)+ mass of 2317.3 +/- 0.4(stat.) +/- 0.8(syst.) MeV/c^2. ",Observation of a Narrow Meson Decaying to Ds pi0 gamma at a Mass of 2.458 GeV/c^2 " Viewing gravitational energy-momentum as equal by observation, but different in essence from inertial energy-momentum, naturally leads to the gauge theory of volume-preserving diffeomorphisms of a four-dimensional inner space. To analyse scattering in this theory, the gauge field is coupled to two Dirac fields with different masses. Based on a generalized LSZ reduction formula, the S-matrix element for scattering of two Dirac particles in the gravitational limit and the corresponding scattering cross-section are calculated to leading order in perturbation theory. Taking the non-relativistic limit for one of the initial particles in the rest frame of the other, the Rutherford-like cross-section of a non-relativistic particle scattering off an infinitely heavy scatterer calculated quantum mechanically in Newtonian gravity is recovered. This provides a non-trivial test of the gauge field theory of volume-preserving diffeomorphisms as a quantum theory of gravity. ",Scattering Cross-Sections in Quantum Gravity - the Case of Matter-Matter Scattering " Build-to-order (BTO) supply chains have become commonplace in industries such as electronics, automotive and fashion. They enable building products based on individual requirements with a short lead time and minimum inventory and production costs. Due to their nature, they differ significantly from traditional supply chains. However, there have not been studies dedicated to demand forecasting methods for this type of setting. This work makes two contributions. First, it presents a new and unique data set from a manufacturer in the BTO sector. Second, it proposes a novel data transformation technique for demand forecasting of BTO products. 
Results from thirteen forecasting methods show that the approach compares well to the state-of-the-art while being easy to implement and to explain to decision-makers. ",Demand forecasting techniques for build-to-order lean manufacturing supply chains " How multiple close-in super-Earths form around stars with masses lower than that of the Sun is still an open issue. Several recent modeling studies have focused on planet formation around M-dwarf stars, but so far no studies have focused specifically on K dwarfs, which are of particular interest in the search for extraterrestrial life. We aim to reproduce the currently known population of close-in super-Earths observed around K-dwarf stars and their system characteristics. We performed 48 high-resolution N-body simulations of planet formation via planetesimal accretion using the existing GENGA software running on GPUs. In the simulations we varied the initial disk mass and the solid and gas surface density profiles. Each simulation began with 12000 bodies with radii of between 200 and 2000 km around two different stars, with masses of 0.6 and 0.8 $M_{\odot}$. Most simulations ran for 20 Myr, with several simulations extended to 40 or 100 Myr. The mass distributions for the planets with masses between 2 and 12 $M_\oplus$ show a strong preference for planets with masses $M_p<6$ $M_\oplus$ and a lesser preference for planets with larger masses, whereas the mass distribution for the observed sample increases almost linearly. However, we managed to reproduce the main characteristics and architectures of the known planetary systems and produce mostly long-term angular-momentum-deficit-stable, nonresonant systems, but we require an initial disk mass of 15 $M_\oplus$ or higher and a gas surface density value at 1 AU of 1500 g cm$^{-2}$ or higher. Our simulations also produce many low-mass planets with $M<2$ $M_\oplus$, which are not yet found in the observed population, probably due to the observational biases. 
The final systems contain only a small number of planets, which could possibly accrete substantial amounts of gas, and these formed after the gas had mostly dissipated. ",Forming rocky exoplanets around K-dwarf stars " DNA sequencing is becoming increasingly commonplace, both in medical and direct-to-consumer settings. To promote discovery, collected genomic data is often de-identified and shared, either in public repositories, such as OpenSNP, or with researchers through access-controlled repositories. However, recent studies have suggested that genomic data can be effectively matched to high-resolution three-dimensional face images, which raises a concern that the increasingly ubiquitous public face images can be linked to shared genomic data, thereby re-identifying individuals in the genomic data. While these investigations illustrate the possibility of such an attack, they assume that those performing the linkage have access to extremely well-curated data. Given that this is unlikely to be the case in practice, it calls into question the pragmatic nature of the attack. As such, we systematically study this re-identification risk from two perspectives: first, we investigate how successful such linkage attacks can be when real face images are used, and second, we consider how we can empower individuals to have better control over the associated re-identification risk. We observe that the true risk of re-identification is likely substantially smaller for most individuals than prior literature suggests. In addition, we demonstrate that the addition of a small amount of carefully crafted noise to images can enable a controlled trade-off between re-identification success and the quality of shared images, with risk typically significantly lowered even with noise that is imperceptible to humans. 
",Re-identification of Individuals in Genomic Datasets Using Public Face Images " Refraining from confidently predicting when faced with categories of inputs different from those seen during training is an important requirement for the safe deployment of deep learning systems. While simple to state, this has been a particularly challenging problem in deep learning, where models often end up making overconfident predictions in such situations. In this work we present a simple but highly effective approach to deal with out-of-distribution detection that uses the principle of abstention: when encountering a sample from an unseen class, the desired behavior is to abstain from predicting. Our approach uses a network with an extra abstention class and is trained on a dataset that is augmented with an uncurated set consisting of a large number of out-of-distribution (OoD) samples that are assigned the label of the abstention class; the model is then trained to learn an effective discriminator between in- and out-of-distribution samples. We compare this relatively simple approach against a wide variety of more complex methods that have been proposed both for out-of-distribution detection and for uncertainty modeling in deep learning, and empirically demonstrate its effectiveness on a wide variety of benchmarks and deep architectures for image recognition and text classification, often outperforming existing approaches by significant margins. Given the simplicity and effectiveness of this method, we propose that this approach be used as a new additional baseline for future work in this domain. ",An Effective Baseline for Robustness to Distributional Shift " In this article, we consider the minimal $L^2$ integrals related to modules at boundary points on fibrations over open Riemann surfaces, and present a characterization for the concavity property of the minimal $L^2$ integrals degenerating to linearity. 
","Boundary points, minimal $L^2$ integrals and concavity property IV -- fibrations over open Riemann surfaces" " AI systems that learn through reward feedback about the actions they take are increasingly deployed in domains that have significant impact on our daily life. However, in many cases the online rewards should not be the only guiding criteria, as there are additional constraints and/or priorities imposed by regulations, values, preferences, or ethical principles. We detail a novel online agent that learns a set of behavioral constraints by observation and uses these learned constraints as a guide when making decisions in an online setting while still being reactive to reward feedback. To define this agent, we propose to adopt a novel extension to the classical contextual multi-armed bandit setting and we provide a new algorithm called Behavior Constrained Thompson Sampling (BCTS) that allows for online learning while obeying exogenous constraints. Our agent learns a constrained policy that implements the observed behavioral constraints demonstrated by a teacher agent, and then uses this constrained policy to guide the reward-based online exploration and exploitation. We characterize the upper bound on the expected regret of the contextual bandit algorithm that underlies our agent and provide a case study with real world data in two application domains. Our experiments show that the designed agent is able to act within the set of behavior constraints without significantly degrading its overall reward performance. ",Incorporating Behavioral Constraints in Online AI Systems " We study multidimensional gravitational models with scalar curvature nonlinearities of the type 1/R and R^4. It is assumed that the corresponding higher dimensional spacetime manifolds undergo a spontaneous compactification to manifolds with warped product structure. Special attention is paid to the stability of the extra-dimensional factor spaces. 
It is shown that for certain parameter regions the systems allow for a freezing stabilization of these spaces. In particular, we find for the 1/R model that configurations with stabilized extra dimensions do not provide a late-time acceleration (they are AdS), whereas the solution branch which allows for accelerated expansion (the dS branch) is incompatible with stabilized factor spaces. In the case of the R^4 model, we obtain that the stability region in parameter space depends on the total dimension D=dim(M) of the higher dimensional spacetime M. For D>8 the stability region consists of a single (absolutely stable) sector which is shielded from a conformal singularity (and an antigravity sector beyond it) by a potential barrier of infinite height and width. This sector is smoothly connected with the stability region of a curvature-linear model. For D<8 an additional (metastable) sector exists which is separated from the conformal singularity by a potential barrier of finite height and width so that systems in this sector are prone to collapse into the conformal singularity. This second sector is not smoothly connected with the first (absolutely stable) one. Several limiting cases and the possibility for inflation are discussed for the R^4 model. ",AdS and stabilized extra dimensions in multidimensional gravitational models with nonlinear scalar curvature terms 1/R and R^4 " A model is introduced to describe guided propagation of a linear or nonlinear pulse which encounters a localized nonlinear defect, which may be either a static or a breather-like one (the model with the static defect applies to an optical pulse in a long fiber link with an inserted additional section of a nonlinear fiber). In the case when the host waveguide is linear, the pulse has a Gaussian shape. 
In that case, an immediate result of its interaction with the nonlinear defect can be found in an exact analytical form, amounting to transformation of the incoming Gaussian into an infinite array of overlapping Gaussian pulses. Further evolution of the array in the linear host medium is found numerically by means of the Fourier transform. An important ingredient of the linear medium is the third-order dispersion, which eventually splits the array into individual pulses. If the host medium is nonlinear, the input pulse is taken as a fundamental soliton. The soliton is found to be much more resistant to the action of the nonlinear defect than the Gaussian pulse in the linear host medium. In this case, the third-order dispersion splits apart the soliton proper and the wavepackets generated by the action of the defect. ",Scattering of a solitary pulse on a local defect or breather " A \textit{grounded set family} on $I$ is a subset $\mathcal{F}\subseteq2^I$ such that $\emptyset\in\mathcal{F}$. We study a linearized Hopf monoid \textbf{SF} on grounded set families, with restriction and contraction inspired by the corresponding operations for antimatroids. Many known combinatorial species, including simplicial complexes and matroids, form Hopf submonoids of \textbf{SF}, although not always with the ""standard"" Hopf structure (for example, our contraction operation is not the usual contraction of matroids). We use the topological methods of Aguiar and Ardila to obtain a cancellation-free antipode formula for the Hopf submonoid of lattices of order ideals of finite posets. Furthermore, we prove that the Hopf algebra of lattices of order ideals of chain gangs extends the Hopf algebra of symmetric functions, and that its character group extends the group of formal power series in one variable with constant term 1 under multiplication. 
",Hopf monoids of set families " Finding a succinct representation to describe the ground state of a disordered interacting system could be very helpful in understanding the interplay between the interactions that is manifested in a quantum phase transition. In this work we use some elementary states to construct recursively an ansatz of multilayer wave functions, where in each step the higher-level wave function is represented by a superposition of the locally ""excited states"" obtained from the lower-level wave function. This allows us to write the Hamiltonian expectation in terms of some local functions of the variational parameters, and employ an efficient message-passing algorithm to find the optimal parameters. We obtain good estimations of the ground-state energy and the phase transition point for the transverse Ising model with a few layers of mean-field and symmetric tree states. The work is the first step towards the application of local and distributed message-passing algorithms in the study of structured variational problems in finite dimensions. ",Multilayer wave functions: A recursive coupling of local excitations " We investigate topological realizations of higher-rank graphs. We show that the fundamental group of a higher-rank graph coincides with the fundamental group of its topological realization. We also show that topological realization of higher-rank graphs is a functor, and that for each higher-rank graph \Lambda, this functor determines a category equivalence between the category of coverings of \Lambda\ and the category of coverings of its topological realization. We discuss how topological realization relates to two standard constructions for k-graphs: projective limits and crossed products by finitely generated free abelian groups. 
",Topological realizations and fundamental groups of higher-rank graphs " The role of electrostatics on the interfacial properties of polyelectrolyte microgels has been discussed controversially in the literature. It is not yet clear if, or how, Coulomb interactions affect their behavior under interfacial confinement. In this work, we combine compression isotherms, atomic force microscopy imaging, and computer simulations to further investigate the behavior of pH-responsive microgels at oil-water interfaces. At low compression, charged microgels can be compressed more than uncharged microgels. The in-plane effective area of charged microgels is found to be smaller in comparison to uncharged ones. Thus, the compressibility is governed by in-plane interactions of the microgels with the interface. At high compression, however, charged microgels are less compressible than uncharged microgels. Microgel fractions located in the aqueous phase interact earlier for charged than for uncharged microgels because of their different swelling perpendicular to the interface. Therefore, the compressibility at high compression is controlled by out-of-plane interactions. In addition, the size of the investigated microgels plays a pivotal role. The charge-dependent difference in compressibility at low compression is only observed for small but not for large microgels, while the behavior at high compression does not depend on the size. Our results highlight the complex nature of soft polymer microgels as compared to rigid colloidal particles. We clearly demonstrate that electrostatic interactions affect the interfacial properties of polyelectrolyte microgels. ",Influence of charges on the behavior of polyelectrolyte microgels confined to oil-water interfaces " In this paper, we address the problem of collision avoidance for a swarm of UAVs used for continuous surveillance of an urban environment. 
Our method, LSwarm, efficiently avoids collisions with static obstacles, dynamic obstacles and other agents in 3-D urban environments while considering coverage constraints. LSwarm computes collision avoiding velocities that (i) maximize the conformity of an agent to an optimal path given by a global coverage strategy and (ii) ensure sufficient resolution of the coverage data collected by each agent. Our algorithm is formulated based on ORCA (Optimal Reciprocal Collision Avoidance) and is scalable with respect to the size of the swarm. We evaluate the coverage performance of LSwarm in realistic simulations of a swarm of quadrotors in complex urban models. In practice, our approach can compute collision avoiding velocities for a swarm composed of tens to hundreds of agents in a few milliseconds on dense urban scenes consisting of tens of buildings. ",LSwarm: Efficient Collision Avoidance for Large Swarms with Coverage Constraints in Complex Urban Scenes " Circumstellar shells around AGB stars are built over long periods of time that may reach several million years. They may therefore be extended over large sizes (~1 pc, possibly more), and different complementary tracers are needed to describe their global properties. In the present work, we combined 21-cm HI and CO rotational line data obtained on an oxygen-rich semi-regular variable, RX Lep, to describe the global properties of its circumstellar environment. With the SEST, we detected the CO(2-1) rotational line from RX Lep. The line profile is parabolic and implies an expansion velocity of ~4.2 km/s and a mass-loss rate ~1.7 10^-7 Msun/yr (d = 137 pc). The HI line at 21 cm was detected with the Nancay Radiotelescope on the star position and at several offset positions. The linear shell size is relatively small, ~0.1 pc, but we detect a trail extending southward to ~0.5 pc. 
The line profiles are approximately Gaussian with an FWHM ~3.8 km/s and interpreted with a model developed for the detached shell around the carbon-rich AGB star Y CVn. Our HI spectra are well-reproduced by assuming a constant outflow (Mloss = 1.65 10^-7 Msun/yr) of ~4 10^4 years duration, which has been slowed down by the external medium. The spatial offset of the HI source is consistent with the northward direction of the proper motion, lending support to the presence of a trail resulting from the motion of the source through the ISM, as already suggested for Mira, RS Cnc, and other sources detected in HI. The source was also observed in SiO (3 mm) and OH (18 cm), but not detected. The properties of the external parts of circumstellar shells around AGB stars should be dominated by the interaction between stellar outflows and external matter for oxygen-rich, as well as for carbon-rich, sources, and the 21-cm HI line provides a very useful tracer of these regions. ",HI and CO in the circumstellar environment of the oxygen-rich AGB star RX Lep " We study the constraints that can be placed on anomalous $\tau$-lepton couplings at the LHC. We use an effective Lagrangian description for physics beyond the standard model which contains the $\tau$-lepton anomalous magnetic moment, electric dipole moment and weak dipole moments in two operators of dimension six. We include in our study two additional operators of dimension eight that directly couple the $\tau$-leptons to gluons and are therefore enhanced at the LHC. We consider the two main effects from these couplings: modifications to the Drell-Yan cross-section and to the $\tau$-lepton pair production in association with a Higgs boson. 
We find that a measurement of the former at the 14% level can produce constraints comparable to existing ones for the anomalous dipole couplings; and that a bound on the latter at a sensitivity level of $500\ \sigma_{SM}$ or better would produce the best constraint on the $\tau$-gluonic couplings. ",Constraining $\tau$-lepton dipole moments and gluon couplings at the LHC " Recently the developments in the field of II-VI-oxides have been spectacular. Various epitaxial methods has been used to grow epitaxial ZnO layers. Not only epilayers but also sufficiently good-quality multiple quantum wells (MQWs) have also been grown by laser molecular-beam epitaxy (laser-MBE). We discuss mainly the experimental aspect of the optical properties of excitons in ZnO-based MQW heterostructures. Systematic temperature-dependent studies of optical absorption and photoluminescence in these MQWs were used to evaluate the well-width dependence and the composition dependence of the major excitonic properties. Based on these data, the localization of excitons, the influence of exciton-phonon interaction, and quantum-confined Stark effects are discussed. The optical spectra of dense excitonic systems are shown to be determined mainly by the interaction process between excitons and biexcitons. The high-density excitonic effects play a role for the observation of room-temperature stimulated emission in the ZnO MQWs. The binding energies of exciton and biexciton are enhanced from the bulk values, as a result of quantum-confinement effects. ",Optical Properties of Excitons in ZnO-based Quantum Well Heterostructures " This volume of EPTCS contains the proceedings of the First Workshop on Hammers for Type Theories (HaTT 2016), held on 1 July 2016 as part of the International Joint Conference on Automated Reasoning (IJCAR 2016) in Coimbra, Portugal. 
The proceedings contain four regular papers, as well as abstracts of the two invited talks by Pierre Corbineau (Verimag, France) and Aleksy Schubert (University of Warsaw, Poland). ",Proceedings First International Workshop on Hammers for Type Theories " Recent discoveries make it possible to compute the K-theory of certain rings from their cyclic homology and certain versions of their cdh-cohomology. We extend the work of G. Corti\~nas et al. who calculated the K-theory of, in addition to many other varieties, cones over smooth varieties, or equivalently the K-theory of homogeneous polynomial rings. We focus on specific examples of polynomial rings, which happen to be filtered deformations of homogeneous polynomial rings. Along the way, as a secondary result, we will develop a method for computing the periodic cyclic homology of a singular variety as well as the negative cyclic homology when the cyclic homology of that variety is known. Finally, we will apply these methods to extend the results of Michler who computed the cyclic homology of hypersurfaces with isolated singularities. ",The K-theory of filtered deformations of graded polynomial algebras " We study the time evolution of the entanglement entropy in short and long-range coupled harmonic oscillators that have well-defined continuum limit field theories. We first introduce a method to calculate the entanglement evolution in generic coupled harmonic oscillators after a quantum quench. Then we study the entanglement evolution after a quantum quench in harmonic systems in which the couplings decay effectively as $1/r^{d+\alpha}$ with the distance $r$. After quenching the mass from a non-zero value to zero, we calculate numerically the time evolution of the von Neumann and R\'enyi entropies. We show that for $1<\alpha<2$ we have a linear growth of entanglement followed by saturation, independent of the initial state. 
For $0<\alpha<1$, depending on the initial state, we can have logarithmic growth or just fluctuations of entanglement. We also calculate the mutual information dynamics of two separated individual harmonic oscillators. Our findings suggest that in our system there is no particular connection between having a linear growth of entanglement after a quantum quench and having a maximum group velocity or a generalized Lieb-Robinson bound. ",Entanglement dynamics in short and long-range harmonic oscillators " A promising technique for the spectral design of acoustic metamaterials is based on the formulation of suitable constrained nonlinear optimization problems. Unfortunately, the straightforward application of classical gradient-based iterative optimization algorithms to the numerical solution of such problems is typically highly demanding, due to the complexity of the underlying physical models. Nevertheless, supervised machine learning techniques can reduce such a computational effort, e.g., by replacing the original objective functions of such optimization problems with more-easily computable approximations. In this framework, the present article describes the application of a related unsupervised machine learning technique, namely, principal component analysis, to approximate the gradient of the objective function of a band gap optimization problem for an acoustic metamaterial, with the aim of making the successive application of a gradient-based iterative optimization algorithm faster. Numerical results show the effectiveness of the proposed method. ",Principal Component Analysis Applied to Gradient Fields in Band Gap Optimization Problems for Metamaterials " In this paper, we explore a novel method for tomographic image reconstruction in the field of SPECT imaging. Deep learning methodologies, and more specifically deep convolutional neural networks (CNNs), are employed in the new reconstruction method, which is referred to as ""CNN Reconstruction - CNNR"". 
For training of the CNNR, projection data from software phantoms were used. For evaluation of the efficacy of the CNNR method, both software and hardware phantoms were used. The resulting tomographic images are compared to those produced by filtered back projection (FBP) [1], maximum likelihood expectation maximization (MLEM) [1] and ordered subset expectation maximization (OSEM) [2]. ",SPECT Imaging Reconstruction Method Based on Deep Convolutional Neural Network " The chirally improved (CI) fermion action allows us to obtain results for pion masses down to 320 MeV on (in lattice units) comparatively small lattices with a physical extent of 2.4 fm. We use differently smeared quark sources to build sets of several interpolators. The variational method then leads to excellent ground state masses for most mesons and baryons. The excited state signals weaken in quality towards smaller quark masses. In particular, the excited baryons come out too high. ",Excited hadrons in n_f=2 QCD
Using both a holonomic and a nonholonomic basis, we show the appearance of an effective dark-energy sector, which additionally acquires an explicit interaction with the matter sector, arising purely from the internal structure of the theory. Applying the theory at late times we find that we can obtain the thermal history of the Universe, namely the sequence of matter and dark-energy epochs, and moreover the effective dark-energy equation-of-state parameter can be quintessencelike, phantomlike, or experience the phantom-divide crossing during cosmological evolution. Furthermore, applying the scenario at early times, we see that one can acquire an exponential de Sitter solution as well as obtain an inflationary realization with the desired scale-factor evolution. These features arise purely from the intrinsic geometrical structure of Finsler-like geometry and reveal the capabilities of the construction. ",Cosmology of Lorentz fiber-bundle induced scalar-tensor theories " Support Vector Machines have been successfully used for one-class classification (OCSVM, SVDD) when trained on clean data, but they work much worse on dirty data: outliers present in the training data tend to become support vectors, and are hence considered ""normal"". In this article, we improve the effectiveness to detect outliers in dirty training data with a leave-out strategy: by temporarily omitting one candidate at a time, this point can be judged using the remaining data only. We show that this is more effective at scoring the outlierness of points than using the slack term of existing SVM-based approaches. Identified outliers can then be removed from the data, such that outliers hidden by other outliers can be identified, to reduce the problem of masking. Naively, this approach would require training N individual SVMs (and training $O(N^2)$ SVMs when iteratively removing the worst outliers one at a time), which is prohibitively expensive. 
We show that only the support vectors need to be considered in each step, and that by reusing SVM parameters and weights, this incremental retraining can be accelerated substantially. By removing candidates in batches, we can further improve the processing time, although it obviously remains more costly than training a single SVM. ",LOSDD: Leave-Out Support Vector Data Description for Outlier Detection " We have obtained the decay asymmetry parameters in non-mesonic weak decay of polarized Lambda-hypernuclei by measuring the proton asymmetry. The polarized Lambda-hypernuclei, 5_Lambda-He, 12_Lambda-C, and 11_Lambda-B, were produced with high statistics via the (pi^+,K^+) reaction at 1.05 GeV/c at forward angles. Preliminary analysis shows that the decay asymmetry parameters are very small for these s-shell and p-shell hypernuclei. ",Proton asymmetry in non-mesonic weak decay of light hypernuclei " We study multidimensional gravitational models with scalar curvature nonlinearities of the type 1/R and R^4. It is assumed that the corresponding higher dimensional spacetime manifolds undergo a spontaneous compactification to manifolds with warped product structure. Special attention is paid to the stability of the extra-dimensional factor spaces. 
",Anyons in integer quantum Hall magnets " As a followup to the latest BABAR amplitude analysis of the decay $B^+ \rightarrow K^+ K^- K^+$, we investigate the $K^+ K^-$ invariant-mass dependence of the CP asymmetry and compare it to that obtained by the LHCb collaboration. The results are based on a data sample of approximately $470 \times 10^6 B\bar{B}$ decays, collected with the BABAR detector at the PEP-II asymmetric-energy $B$ factory at the SLAC National Accelerator Laboratory. ",Study of the $K^+ K^-$ invariant-mass dependence of CP asymmetry in $B^+ \rightarrow K^+ K^- K^+$ decays " In this paper, we construct the binary linear codes $C(SL(n,q))$ associated with finite special linear groups $SL(n,q)$, with both \emph{n,q} powers of two. Then, via Pless power moment identity and utilizing our previous result on the explicit expression of the Gauss sum for $SL(n,q)$, we obtain a recursive formula for the power moments of multi-dimensional Kloosterman sums in terms of the frequencies of weights in $C(SL(n,q))$. In particular, when $n=2$, this gives a recursive formula for the power moments of Kloosterman sums. We illustrate our results with some examples. ",Codes Associated with Special Linear Groups and Power Moments of Multi-dimensional Kloosterman Sums " We present a study of the magnetic properties of Zr1-xNbxZn2, using an Arrott plot analysis of the magnetization. The Curie temperature TC is suppressed to zero temperature for Nb concentration xC=0.083+/-0.002, while the spontaneous moment vanishes linearly with TC as predicted by the Stoner theory. The initial susceptibility chi displays critical behavior for x<=xC, with a critical exponent which smoothly crosses over from the mean field to the quantum critical value. For high temperatures and x<=xC and for low temperatures and x>=xC we find that chi^(-1)=chi_o^(-1)+aT^(4/3), where chi_o^(-1) vanishes as x approaches xC. 
The resulting magnetic phase diagram shows that the quantum critical behavior extends over the widest range of temperatures for x=xC, and demonstrates how a finite transition temperature ferromagnet is transformed into a paramagnet, via a quantum critical point. ",Critical Phenomena and the Quantum Critical Point of Ferromagnetic Zr1-xNbxZn2 " We report the results of a combined experimental and theoretical study on nonstoichiometric CrN1+d thin films grown by reactive magnetron sputtering on c-plane sapphire, MgO (100) and LaAlO3 (100) substrates in an Ar/N2 gas mixture using different percentages of N2. There is a transition from n-type to p-type behavior in the layers as a function of nitrogen concentration varying from 48 at. % to 52 at. % in CrN films. The compositional change follows a similar trend for all substrates, with an N/Cr ratio increasing from approximately 0.7 to 1.06-1.10 with increasing percentage of N2 in the gas flow. As a result of the change in stoichiometry, the lattice parameter and the Seebeck coefficient increase together with the increase of N in CrN1+d; in particular, the Seebeck coefficient transitions from -50 uV.K-1 for CrN0.97 to +75 uV.K-1 for CrN1.1. Density functional theory calculations show that Cr vacancies can account for the change in Seebeck coefficient, since they push the Fermi level down in the valence band, whereas N interstitial defects in the form of N2 dumbbells are needed to explain the increasing lattice parameter. Calculations including both types of defects, which have a strong tendency to bind together, reveal a slight increase in the lattice parameter and a simultaneous formation of holes in the valence band. To explain the experimental trends, we argue that both Cr vacancies and N2 dumbbells, possibly in combined configurations, are present in the films. 
We demonstrate the possibility of controlling the semiconducting behavior of CrN with intrinsic defects from n- to p-type, opening possibilities to integrate this compound in energy-harvesting thermoelectric devices. ",P-type behavior of CrN thin films by control of point defects " The high-order accuracy of Fourier method makes it the method of choice in many large scale simulations. We discuss here the stability of Fourier method for nonlinear evolution problems, focusing on the two prototypical cases of the inviscid Burgers' equation and the multi-dimensional incompressible Euler equations. The Fourier method for such problems with quadratic nonlinearities comes in two main flavors. One is the spectral Fourier method. The other is the 2/3 pseudo-spectral Fourier method, where one removes the highest 1/3 portion of the spectrum; this is often the method of choice to maintain the balance of quadratic energy and avoid aliasing errors. Two main themes are discussed in this paper. First, we prove that as long as the underlying exact solution has a minimal C^{1+\alpha} spatial regularity, then both the spectral and the 2/3 pseudo-spectral Fourier methods are stable. Consequently, we prove their spectral convergence for smooth solutions of the inviscid Burgers equation and the incompressible Euler equations. On the other hand, we prove that after a critical time at which the underlying solution lacks sufficient smoothness, then both the spectral and the 2/3 pseudo-spectral Fourier methods exhibit nonlinear instabilities which are realized through spurious oscillations. 
In particular, after shock formation in the inviscid Burgers' equation, the total variation of bounded (pseudo-) spectral Fourier solutions must increase with the increasing number of modes, and we stipulate that the analogous situation occurs with the 3D incompressible Euler equations: the limiting Fourier solution is shown to enforce L^2-energy conservation, and the contrast with energy-dissipating Onsager solutions is reflected through spurious oscillations. ",Stability and spectral convergence of Fourier method for nonlinear problems. On the shortcomings of the 2/3 de-aliasing method " We present an efficient basis for imaginary time Green's functions based on a low rank decomposition of the spectral Lehmann representation. The basis functions are simply a set of well-chosen exponentials, so the corresponding expansion may be thought of as a discrete form of the Lehmann representation using an effective spectral density which is a sum of $\delta$ functions. The basis is determined only by an upper bound on the product $\beta \omega_{\max}$, with $\beta$ the inverse temperature and $\omega_{\max}$ an energy cutoff, and a user-defined error tolerance $\epsilon$. The number $r$ of basis functions scales as $\mathcal{O}\left(\log(\beta \omega_{\max}) \log (1/\epsilon)\right)$. The discrete Lehmann representation of a particular imaginary time Green's function can be recovered by interpolation at a set of $r$ imaginary time nodes. Both the basis functions and the interpolation nodes can be obtained rapidly using standard numerical linear algebra routines. Due to the simple form of the basis, the discrete Lehmann representation of a Green's function can be explicitly transformed to the Matsubara frequency domain, or obtained directly by interpolation on a Matsubara frequency grid. We benchmark the efficiency of the representation on simple cases, and with a high precision solution of the Sachdev-Ye-Kitaev equation at low temperature. 
We compare our approach with the related intermediate representation method, and introduce an improved algorithm to build the intermediate representation basis and a corresponding sampling grid. ",Discrete Lehmann representation of imaginary time Green's functions " We study experimentally and theoretically mixing at the external boundary of a submerged turbulent jet. In the experimental study we use Particle Image Velocimetry and an Image Processing Technique based on the analysis of the intensity of the Mie scattering to determine the spatial distribution of tracer particles. An air jet is seeded with the incense smoke particles which are characterized by large Schmidt number and small Stokes number. We determine the spatial distributions of the jet fluid characterized by a high concentration of the particles and of the ambient fluid characterized by a low concentration of the tracer particles. In the data analysis we use two approaches, whereby one approach is based on the measured phase function for the study of the mixed state of two fluids. The other approach is based on the analysis of the two-point second-order correlation function of the particle number density fluctuations generated by tangling of the gradient of the mean particle number density by the turbulent velocity field. This gradient is formed at the external boundary of a submerged turbulent jet. We demonstrate that PDF of the phase function of a jet fluid penetrating into an external flow and the two-point second-order correlation function of the particle number density do not have universal scaling and cannot be described by a power-law function. The theoretical predictions made in this study are in a qualitative agreement with the obtained experimental results. 
",Mixing at the external boundary of a submerged turbulent jet " Using a mid-infrared calibration of the Cepheid distance scale based on recent observations at 3.6 um with the Spitzer Space Telescope, we have obtained a new, high-accuracy calibration of the Hubble constant. We have established the mid-IR zero point of the Leavitt Law (the Cepheid Period-Luminosity relation) using time-averaged 3.6 um data for ten high-metallicity, Milky Way Cepheids having independently-measured trigonometric parallaxes. We have adopted the slope of the PL relation using time-averaged 3.6 um data for 80 long-period Large Magellanic Cloud (LMC) Cepheids falling in the period range 0.8 < log(P) < 1.8. We find a new reddening-corrected distance to the LMC of 18.477 +/- 0.033 (systematic) mag. We re-examine the systematic uncertainties in H0, also taking into account new data over the past decade. In combination with the new Spitzer calibration, the systematic uncertainty in H0 over that obtained by the Hubble Space Telescope (HST) Key Project has decreased by over a factor of three. Applying the Spitzer calibration to the Key Project sample, we find a value of H0 = 74.3 with a systematic uncertainty of +/-2.1 (systematic) km/s/Mpc, corresponding to a 2.8% systematic uncertainty in the Hubble constant. This result, in combination with WMAP7 measurements of the cosmic microwave background anisotropies and assuming a flat universe, yields a value of the equation of state for dark energy, w0 = -1.09 +/- 0.10. Alternatively, relaxing the constraints on flatness and the numbers of relativistic species, and combining our results with those of WMAP7, Type Ia supernovae and baryon acoustic oscillations yields w0 = -1.08 +/- 0.10 and a value of N_eff = 4.13 +/- 0.67, mildly consistent with the existence of a fourth neutrino species. 
",Carnegie Hubble Program: A Mid-Infrared Calibration of the Hubble Constant " We show that the sigma models with target spaces supersymmetric heterotic backgrounds with $SU(2)$ and $SU(3)$ holonomy are invariant under a W-symmetry algebra generated by the covariantly constant forms of these backgrounds. The closure of the W-algebra requires additional generators which we identify. We prove that the chiral anomalies of all these symmetries are consistent at one-loop in perturbation theory. We also demonstrate that these anomalies cancel at the same loop level either by adding suitable finite local counterterms in the sigma model effective action or by assuming a plausible quantum correction to the transformations. Such a correction is compatible with both the cancellation mechanism for spacetime frame rotation and gauge transformation anomalies, and the correction to the heterotic supergravity up to two loops in the sigma model perturbation theory. ","W-symmetries, anomalies and heterotic backgrounds with SU holonomy" " We extend the definition of tridendriform bialgebra by introducing a weight q. The subspace of primitive elements of a q-tridendriform bialgebra is equipped with an associative product and a natural structure of brace algebra, related by a distributive law. This data is called q-Gerstenhaber-Voronov algebras. We prove the equivalence between the categories of connected q-tridendriform bialgebras and of q-Gerstenhaber-Voronov algebras. The space spanned by surjective maps, as well as the space spanned by parking functions, have natural structures of q-tridendriform bialgebras, denoted ST(q) and PQSym(q)*, in such a way that ST(q) is a sub-tridendriform bialgebra of PQSym(q)*. Finally we show that the bialgebra of M-permutations defined by T. Lam and P. Pylyavskyy may be endowed with a natural structure of q-tridendriform algebra which is a quotient of ST(q). 
",Tridendriform structure on combinatorial Hopf algebras " In this paper, we develop a numerical method for the L\'evy-Fokker-Planck equation with the fractional diffusive scaling. There are two main challenges. One comes from a two-fold nonlocality, that is, the need to apply the fractional Laplacian operator to a power law decay distribution. The other arises from long-time/small mean-free-path scaling, which introduces stiffness to the equation. To resolve the first difficulty, we use a change of variable to convert the unbounded domain into a bounded one and then apply the Chebyshev polynomial based pseudo-spectral method. To treat the multiple scales, we propose an asymptotic preserving scheme based on a novel micro-macro decomposition that uses the structure of the test function in proving the fractional diffusion limit analytically. Finally, the efficiency and accuracy of our scheme are illustrated by a suite of numerical examples. ",An asymptotic preserving scheme for L\'{e}vy-Fokker-Planck equation with fractional diffusion limit We study the problem of the existence and the holomorphicity of the Monge-Amp\`ere foliation associated to a plurisubharmonic solutions of the complex homogeneous Monge-Amp\`ere equation even at points of arbitrary degeneracy. We obtain good results for real analytic unbounded solutions. As a consequence we also provide a positive answer to a question of Burns on homogeneous polynomials whose logarithm satisfies the complex Monge-Amp\`ere equation and we obtain a generalization the work of P.M. Wong on the classification of complete weighted circular domains. ,Monge-Ampere foliations for degenerate solutions " We have reported in a previous paper (AAS 116, 417, 1996) B, V and I band photometric data for a sample of 24 edge-on interacting spiral galaxies, together with a control sample of 7 edge-on isolated galaxies. 
We discuss here the main result found in this study: the ratio h/z of the radial exponential scalelength h to the constant scaleheight z is about a factor of two smaller for interacting galaxies. This is found to be due both to a thickening of the plane, and to a radial stripping or shrinking of the stellar disk. If we believe that every galaxy has experienced a tidal interaction in the past, we must conclude that continuous gas accretion and subsequent star formation can bring the ratio h/z back to higher values, on a time scale of 1 Gyr. ",Tidally-triggered disk thickening. II. Results and interpretations The rate of events measured with the surface detector of the Pierre Auger Observatory is found to be modulated by the weather conditions. This effect is due to the increasing amount of matter traversed by the shower as the ground pressure increases and to the inverse proportionality of the Moliere radius to the air density near the ground. Air-shower simulations with different realistic profiles of the atmosphere support this interpretation of the observed effects. ,Weather induced effects on extensive air showers observed with the surface detector of the Pierre Auger Observatory " We present UBVRI and CT1T2 photometry for fifteen catalogued open clusters of relatively high brightness and compact appearance. From these unprecedented photometric data sets, covering wavelengths from the blue up to the near-infrared, we performed a thorough assessment of their reality as stellar aggregates. We statistically assigned to each observed star within the object region a probability of being a fiducial feature of that field in terms of its local luminosity function, colour distribution and stellar density. Likewise, we used accurate parallaxes and proper motions measured by the Gaia satellite to help in our decision on the open cluster reality. 
Ten catalogued aggregates did not show any hint of being real physical systems; three of them had been assumed to be open clusters in previous studies, though. On the other hand, we estimated reliable fundamental parameters for the remaining five studied objects, which were confirmed as real open clusters. They turned out to be clusters distributed in a wide age range, 8.0 < log(t yr-1) < 9.4, of solar metal content and placed between 2.0 and 5.5 kpc from the Sun. Their ages and metallicities are in agreement with the presently known picture of the spatial distribution of open clusters in the Galactic disc. ",On the physical reality of overlooked open clusters " The so-called l0 pseudonorm on the Euclidean space Rd counts the number of nonzero components of a vector. We say that a sequence of norms is strictly increasingly graded (with respect to the l0 pseudonorm) if it is nondecreasing and the sequence of norms of a vector~x becomes stationary exactly at the index l0(x). In this paper, with any (source) norm, we associate sequences of generalized top-k and k-support norms, and we also introduce the new class of orthant-strictly monotonic norms (which encompasses the lp norms, except for the extreme ones). Then, we show that an orthant-strictly monotonic source norm generates a sequence of generalized top-k norms which is strictly increasingly graded. With this, we provide a systematic way to generate sequences of norms with which the level sets of the l0 pseudonorm are expressed by means of the difference of two norms. Our results rely on the study of orthant-strictly monotonic norms. ","Orthant-Strictly Monotonic Norms, Generalized Top-k and k-Support Norms and the L0 Pseudonorm" " The effect of a magnetic field on electron spin relaxation in quantum wells is studied theoretically. We have shown that the Larmor effect and the cyclotron motion of carriers can either jointly suppress D'yakonov-Perel' spin relaxation or compensate each other. 
The spin relaxation rate tensor is derived for any given direction of the external field and an arbitrary ratio of bulk and structural contributions to the spin splitting. Our results are applied to the experiments on electron spin resonance in SiGe heterostructures, and enable us to extract the spin splitting value for such quantum wells. ",Magnetic field effects on spin relaxation in heterostructures Neutrino masses and mixings are very different from quark masses and mixings. This puzzle is a crucial hint in the search for the mechanism which determines fermion masses in grand unified theories. We study the flavour problem in an SO(10) GUT model in six dimensions compactified on an orbifold. Three sequential families are localized at three branes where SO(10) is broken to its three GUT subgroups. Their mixing with bulk fields leads to large neutrino mixings as well as small mixings among left-handed quarks. The small hierarchy of neutrino masses is due to the mismatch between up-quark and down-quark mass hierarchies. ,The flavour puzzle from an orbifold GUT perspective " This summary of the doctoral thesis is created to emphasize the close connection of the proposed spectral analysis method with the Discrete Fourier Transform (DFT), the most extensively studied and frequently used approach in the history of signal processing. It is shown that in a typical application case, where uniform data readings are transformed to the same number of uniformly spaced frequencies, the results of the classical DFT and the proposed approach coincide. The difference in performance appears when the length of the DFT is selected to be greater than the length of the data. 
The DFT solves the unknown-data problem by padding the readings with zeros up to the length of the DFT, while the proposed Extended DFT (EDFT) deals with this situation in a different way: it uses the Fourier integral transform as a target and optimizes the transform basis in the extended frequency range without putting such restrictions on the time domain. Consequently, the Inverse DFT (IDFT) applied to the result of the EDFT returns not only the known readings but also the extrapolated data, where the classical DFT is able to give back just zeros, and higher resolution is achieved at frequencies where the data have been successfully extended. It has been demonstrated that the EDFT is able to process data with missing readings or gaps inside, or even nonuniformly distributed data. Thus, the EDFT significantly extends the usability of DFT-based methods to cases where these approaches had previously been considered inapplicable. The EDFT finds the solution in an iterative way and requires repeated calculations to obtain the adaptive basis, and this makes its numerical complexity much higher compared to the DFT. This disadvantage was a serious problem in the 1990s, when the method was proposed. Fortunately, since then the power of computers has increased so much that nowadays the application of the EDFT can be considered a real alternative. ",Extended Fourier analysis of signals " In this work we introduce a fully-connected graph structure in the Deep Gaussian Conditional Random Field (G-CRF) model. For this we express the pairwise interactions between pixels as the inner products of low-dimensional embeddings, delivered by a new subnetwork of a deep architecture. We efficiently minimize the resulting energy by solving the associated low-rank linear system with conjugate gradients, and derive an analytic expression for the gradient of our embeddings which allows us to train them end-to-end with backpropagation. 
We demonstrate the merit of our approach by achieving state of the art results on three challenging Computer Vision benchmarks, namely semantic segmentation, human parts segmentation, and saliency estimation. Our implementation is fully GPU based, built on top of the Caffe library, and will be made publicly available. ","Deep, Dense, and Low-Rank Gaussian Conditional Random Fields" " We measure the clustering of X-ray, radio, and mid-IR-selected active galactic nuclei (AGN) at 0.2 < z < 1.2 using multi-wavelength imaging and spectroscopic redshifts from the PRIMUS and DEEP2 redshift surveys, covering 7 separate fields spanning ~10 square degrees. Using the cross-correlation of AGN with dense galaxy samples, we measure the clustering scale length and slope, as well as the bias, of AGN selected at different wavelengths. Similar to previous studies, we find that X-ray and radio AGN are more clustered than mid-IR-selected AGN. We further compare the clustering of each AGN sample with matched galaxy samples designed to have the same stellar mass, star formation rate, and redshift distributions as the AGN host galaxies and find no significant differences between their clustering properties. The observed differences in the clustering of AGN selected at different wavelengths can therefore be explained by the clustering differences of their host populations, which have different distributions in both stellar mass and star formation rate. Selection biases inherent in AGN selection, therefore, determine the clustering of observed AGN samples. We further find no significant difference between the clustering of obscured and unobscured AGN, using IRAC or WISE colors or X-ray hardness ratio. 
","PRIMUS + DEEP2: Clustering of X-ray, Radio and IR-AGN at z~0.7" For time-periodical quantum systems generalized Floquet operator is found to be integral of motion.Spectrum of this invariant is shown to be quasienergy spectrum.Analogs of invariant Floquet operators are found for nonperiodical systems with several characteristic times.Generalized quasienergy states are introduced for these systems. Geometrical phase is shown to be integral of motion. ,"Geometrical phase,generalized quasienergy and Floquet operator as invariants" " Quasi-periodic Pedestal Burst Instabilities (PBIs), featuring alternative turbulence suppression and bursts, have been clearly identified by various edge diagnostics during I-mode to H-mode transition in the EAST Tokamak. The radial distribution of the phase perturbation caused by PBI shows that PBI is localized in the pedestal. Prior to each PBI, a significant increase of density gradient close to the pedestal top can be clearly distinguished, then the turbulence burst is generated, accompanied by the relaxation of the density profile, and then induces an outward particle flux. The relative density perturbation caused by PBIs is about $6 \sim 8\%$. Statistic analyses show that the pedestal normalized density gradient triggering the first PBI has a threshold value, mostly in the range of $22 \sim 24$, suggesting that a PBI triggering instability could be driven by the density gradient. And the pedestal normalized density gradient triggering the last PBI is about $30 \sim 40$ and seems to increase with the loss power and the chord-averaged density. In addition, the frequency of PBI is likely to be inversely proportional to the chord-averaged density and the loss power. These results suggest that PBIs and the density gradient prompt increase prior to PBIs can be considered as the precursor for controlling I-H transition. 
",Characterization of Pedestal Burst Instabilities during I-mode to H-mode Transition in the EAST Tokamak " Let $\Sigma_g$ denote the closed orientable surface of genus $g$ and fix an arbitrary simplicial triangulation of $\Sigma_g$. We construct and study a natural surjective group homomorphism from the surface braid group on $n$ strands on $\Sigma_g$ to the first singular homology group of $\Sigma_g$ with integral coefficients. In particular, we show that the kernel of this homomorphism is generated by canonical braids which arise from the triangulation of $\Sigma_g$. This provides a simple description of natural subgroups of surface braid groups which are closely tied to the homology groups of the surfaces $\Sigma_g$. ",Braid Groups on Triangulated Surfaces and Singular Homology " In practical purposes for some geometrical problems in computer science we have as information the coordinates of some finite points in surface instead of the whole body of a surface. The problem arised here is: ""How to define a distance function in a finite space?"" as we will show the appropriate function for this purpose is not a metric function. Here we try to define this distance function in order to apply it in further proposes, specially in the field setting of transportation theory and vehicle routing problem. More precisely in this paper we consider VRP problem for two dimensional manifolds in R3. ",On Distance Function among Finite Set of Points " In spin systems, the decay of the Loschmidt echo in the time-reversal experiment (evolution, perturbation, time-reversed evolution) is linked to the generation of multiple-quantum coherences. The approach is extended to other systems, and the general problem of reversibility of quantum dynamics is analyzed. ",Reversibility of dynamics and multiple-quantum coherences " We discuss a new set of $\sim$ 500 numerical n-body calculations designed to constrain the masses and bulk densities of Styx, Nix, Kerberos, and Hydra. 
Comparisons of different techniques for deriving the semimajor axis and eccentricity of the four satellites favor methods relying on the theory of Lee & Peale (2006), where satellite orbits are derived in the context of the restricted three body problem (Pluto, Charon, and one massless satellite). In each simulation, we adopt the nominal satellite masses derived in Kenyon & Bromley (2019a), multiply the mass of at least one satellite by a numerical factor $f \ge 1$, and establish whether the system ejects at least one satellite on a time scale $\le$ 4.5 Gyr. When the total system mass is large ($f \gg 1$), ejections of Kerberos are more common. Systems with lower satellite masses ($ f \approx$ 1) usually eject Styx. In these calculations, Styx often `signals' an ejection by moving to higher orbital inclination long before ejection; Kerberos rarely signals in a useful way. The n-body results suggest that Styx and Kerberos are more likely to have bulk densities comparable with water ice, $\rho_{SK} \lesssim$ 2 g cm$^{-3}$, than with rock. A strong upper limit on the total system mass, $M_{SNKH} \lesssim 9.5 \times 10^{19}$ g, also places robust constraints on the average bulk density of the four satellites, $\rho_{SNKH} \lesssim$ 1.4 g cm$^{-3}$. These limits support models where the satellites grow out of icy material ejected during a major impact on Pluto or Charon. ",A Pluto--Charon Sonata IV. Improved Constraints on the Dynamical Behavior and Masses of the Small Satellites " Generating relevant responses in a dialog is challenging, and requires not only proper modeling of context in the conversation but also being able to generate fluent sentences during inference. In this paper, we propose a two-step framework based on generative adversarial nets for generating conditioned responses. 
Our model first learns a meaningful representation of sentences by autoencoding and then learns to map an input query to the response representation, which is in turn decoded as a response sentence. Both quantitative and qualitative evaluations show that our model generates more fluent, relevant, and diverse responses than existing state-of-the-art methods. ",Adversarial Learning on the Latent Space for Diverse Dialog Generation " We establish a connection between two recently-proposed approaches to the understanding of the geometric origin of the Fu-Kane-Mele invariant $\mathrm{FKM} \in \mathbb{Z}_2$, arising in the context of 2-dimensional time-reversal symmetric topological insulators. On the one hand, the $\mathbb{Z}_2$ invariant can be formulated in terms of the Berry connection and the Berry curvature of the Bloch bundle of occupied states over the Brillouin torus. On the other, using techniques from the theory of bundle gerbes it is possible to provide an expression for $\mathrm{FKM}$ containing the square root of the Wess-Zumino amplitude for a certain $U(N)$-valued field over the Brillouin torus. We link the two formulas by showing directly the equality between the above mentioned Wess-Zumino amplitude and the Berry phase, as well as between their square roots. An essential tool of independent interest is an equivariant version of the adjoint Polyakov-Wiegmann formula for fields $\mathbb{T}^2 \to U(N)$, of which we provide a proof employing only basic homotopy theory and circumventing the language of bundle gerbes. ","Gauge-theoretic invariants for topological insulators: A bridge between Berry, Wess-Zumino, and Fu-Kane-Mele" " In this article, we analyze the fundamental global and local symmetries involved in the action for the free relativistic point particle. 
Moreover, we identify a hidden local symmetry, whose explicit consideration and factorization, utilizing a Fujikawa prescription, lead to the construction of relativistic propagators that satisfy the Chapman-Kolmogorov identity. By means of a detailed topological analysis, we find three different relativistic propagators (orthochronous, space-like, and Feynman) which are obtained from the exclusive integration of paths within different sectors in Minkowski space. Finally, the connection of this approach to the Feynman checkerboard construction is explored. ",Path integral of the relativistic point particle in Minkowski space " In this article, we present characterizations of the concavity property of minimal $L^2$ integrals degenerating to linearity in the case of fibrations over open Riemann surfaces. As applications, we obtain characterizations of when equality holds in the optimal jets $L^2$ extension problem from fibers over analytic subsets to fibrations over open Riemann surfaces, which implies characterizations of the fibration versions of the equality parts of the Suita conjecture and the extended Suita conjecture. ",Concavity property of minimal $L^2$ integrals with Lebesgue measurable gain V--fibrations over open Riemann surfaces The physics potential of the Circular Electron Positron Collider (CEPC) can be significantly strengthened by two detectors with complementary designs. A promising detector approach based on the Silicon Detector (SiD) designed for the International Linear Collider (ILC) is presented. Several simplifications of this detector for the lower energies expected at the CEPC are proposed. A number of cost optimizations of this detector are illustrated using full detector simulations. We show that the proposed changes will enable the physics goals at the CEPC to be reached. 
,Conceptual Design Studies for a CEPC Detector " This is a follow-up paper of Polson and Scott (2012, Bayesian Analysis), which claimed that the half-Cauchy prior is a sensible default prior for a scale parameter in hierarchical models. For estimation of a normal mean vector under the quadratic loss, they showed that the Bayes estimator with respect to the half-Cauchy prior seems to be minimax through numerical experiments. In terms of the shrinkage coefficient, the half-Cauchy prior has a U-shape and can be interpreted as a continuous spike and slab prior. In this paper, we consider a general class of priors with U-shapes and theoretically establish sufficient conditions for the minimaxity of the corresponding (generalized) Bayes estimators. We also develop an algorithm for posterior sampling and present numerical results. ",Minimaxity under half-Cauchy type priors " In this article we study abstract and embedded invariants of reduced curve germs via topological techniques. One of the most important numerical analytic invariants of an abstract curve is its delta invariant. Our primary goal is to develop delta invariant formulae for curves embedded in rational singularities in terms of embedded data. The topological machinery not only produces formulae, but it also creates deep connections with the theory of (analytical and topological) multivariable Poincar\'e series. ",The delta invariant of curves on rational surfaces II: Poincar\'e series and topological aspects " We show that there are diversified Ruderman-Kittel-Kasuya-Yosida (RKKY) interactions between magnetic impurities, mediated by itinerant electrons, in a centrosymmetric crystal respecting a nonsymmorphic space group. We take the $P4/nmm$ space group as an example. 
We demonstrate that different types of interactions, including the Heisenberg-type, the Dzyaloshinskii-Moriya (DM)-type, the Ising-type and the anisotropic interactions, can appear in accordance with the positions of the impurities in real space. Their strengths strongly depend on the location of the itinerant electrons in reciprocal space. The diversity stems from the position-dependent site groups and the momentum-dependent electronic structures guaranteed by the nonsymmorphic symmetries. Our study unveils the role of the nonsymmorphic symmetries in affecting magnetism, and suggests that nonsymmorphic crystals can be promising platforms for designing magnetic interactions. ",Diversified Ruderman-Kittel-Kasuya-Yosida Interactions in a Nonsymmorphic Crystal " The chromatic number $\chi(G)$ of a graph $G$, that is, the smallest number of colors required to color the vertices of $G$ so that no two adjacent vertices are assigned the same color, is a classic and extensively studied parameter. Here we consider the case where $G$ is a random block graph, also known as the stochastic block model. The vertex set is partitioned into $k\in\mathbb{N}$ parts $V_1, \dotsc, V_k$, and for each $1 \le i\le j\le k$, two vertices $u \in V_i, v\in V_j$ are connected by an edge with some probability $p_{ij} \in (0,1)$ independently. Our main result pins down the typical asymptotic value of $\chi(G)$ and establishes the distribution of the sizes of the color classes in optimal colorings. 
We discover that in contrast to the case of a binomial random graph $G(n,p)$, which corresponds to $k=1$ in our model, where the average size of a color class in an (almost) optimal coloring essentially coincides with the independence number, the block model reveals a more diverse picture: the ""average"" class in an optimal coloring is a convex combination of several types of independent sets that vary in total size as well as in the size of their intersection with each $V_i$, $1\le i \le k$. ",The Chromatic Number of Dense Random Block Graphs " Ordinary stochastic neural networks mostly rely on the expected values of their weights to make predictions, whereas the induced noise is mostly used to capture the uncertainty, prevent overfitting and slightly boost the performance through test-time averaging. In this paper, we introduce variance layers, a different kind of stochastic layers. Each weight of a variance layer follows a zero-mean distribution and is only parameterized by its variance. We show that such layers can learn surprisingly well, can serve as an efficient exploration tool in reinforcement learning tasks and provide a decent defense against adversarial attacks. We also show that a number of conventional Bayesian neural networks naturally converge to such zero-mean posteriors. We observe that in these cases such zero-mean parameterization leads to a much better training objective than conventional parameterizations where the mean is being learned. ",Variance Networks: When Expectation Does Not Meet Your Expectations " Cloth folding is a widespread domestic task that is seamlessly performed by humans but which is highly challenging for autonomous robots to execute due to the highly deformable nature of textiles; it is hard to engineer and learn manipulation pipelines to efficiently execute it. In this paper, we propose a new solution for robotic cloth folding (using a standard folding board) via learning from demonstrations. 
Our demonstration video encoding is based on a high-level abstraction, namely, a refined optical flow-based spatiotemporal graph, as opposed to a low-level encoding such as image pixels. By constructing a new spatiotemporal graph with an advanced visual correspondence descriptor, the policy learning can focus on key points and relations with a 3D spatial configuration, which allows quick generalization across different environments. To further boost the policy search, we combine optical flow and static motion saliency maps to discriminate the dominant motions for better handling the system dynamics in real-time, which aligns with the attentional motion mechanism that dominates the human imitation process. To validate the proposed approach, we analyze the manual folding procedure and develop a custom-made end-effector to efficiently interact with the folding board. Multiple experiments on a real robotic platform were conducted to validate the effectiveness and robustness of the proposed method. ",Learning Cloth Folding Tasks with Refined Flow Based Spatio-Temporal Graphs " The measured properties of stellar oscillations can provide powerful constraints on the internal structure and composition of stars. To begin this process, oscillation frequencies must be extracted from the observational data, typically time series of the star's brightness or radial velocity. In this paper, a probabilistic model is introduced for inferring the frequencies and amplitudes of stellar oscillation modes from data, assuming that there is some periodic character to the oscillations, but that they may not be exactly sinusoidal. Effectively we fit damped oscillations to the time series, and hence the mode lifetime is also recovered. 
While this approach is computationally demanding for large time series (> 1500 points), it should at least allow improved analysis of observations of solar-like oscillations in subgiant and red giant stars, as well as sparse observations of semiregular stars, where the number of points in the time series is often low. The method is demonstrated on simulated data and then applied to radial velocity measurements of the red giant star xi Hydrae, yielding a mode lifetime between 0.41 and 2.65 days with 95% posterior probability. The large frequency separation between modes is ambiguous; however, we argue that the most plausible value is 6.3 microHz, based on the radial velocity data and the star's position in the HR diagram. ",Gaussian Process Modelling of Asteroseismic Data " A coloring of the Boolean $n$-cube is called perfect if, for every vertex $x$, the collection of the colors of the neighbors of $x$ depends only on the color of $x$. A Boolean function is called correlation-immune of degree $n-m$ if it takes the value 1 the same number of times for each $m$-face of the Boolean $n$-cube. In the present paper it is proven that each Boolean function $\chi^S$ ($S\subset E^n$) satisfies the inequality $${\rm nei}(S)+ 2({\rm cor}(S)+1)(1-\rho(S))\leq n,$$ where ${\rm cor}(S)$ is the maximum degree of the correlation immunity of $\chi^S$, ${\rm nei} (S)= \frac{1}{|S|}\sum\limits_{x\in S}|B(x)\cap S|-1$ is the average number of neighbors in the set $S$ for vertices in $S$, and $\rho(S)=|S|/2^n$ is the density of the set $S$. Moreover, the function $\chi^S$ is a perfect coloring if and only if we obtain an equality in the above formula. Keywords: hypercube, perfect coloring, perfect code, correlation-immune function. 
",On the connection between correlation-immune functions and perfect 2-colorings of the Boolean n-cube " Layered transition metal dichalcogenide WTe$_2$ has recently attracted significant attention due to the discovery of an extremely large magnetoresistance, a predicted type-II Weyl semimetallic state, and the pressure-induced superconducting state. By a careful measurement of the superconducting upper critical fields as a function of the magnetic field angle at a pressure as high as 98.5 kbar, we provide the first detailed examination of the dimensionality of the superconducting condensate in WTe$_2$. Despite the layered crystal structure, the upper critical field exhibits a negligible field anisotropy. The angular dependence of the upper critical field can be satisfactorily described by the anisotropic mass model from 2.2 K ($T/T_c\sim0.67$) to 0.03 K ($T/T_c\sim0.01$), with a practically identical anisotropy factor $\gamma\sim1.7$. The temperature dependence of the upper critical field, determined for both $H\perp ab$ and $H\parallel ab$, can be understood by a conventional orbital depairing mechanism. Comparison of the upper critical fields along the two orthogonal field directions results in the same value of $\gamma\sim1.7$, leading to a temperature independent anisotropy factor from near $T_c$ to $<0.01T_c$. Our findings thus identify WTe$_2$ as a nearly isotropic superconductor, with an anisotropy factor among one of the lowest known in superconducting transition metal dichalcogenides. ",Nearly isotropic superconductivity in layered Weyl semimetal WTe$_2$ at 98.5 kbar " In this paper, by using Cauchy-Binet formula and charcter sums over finite fields, we evaluate determinants of two cyclotomic matrices involving Jacobi sums $J(\chi^i,\chi^j)$. 
For example, we show that $$\det[J(\chi^i,\chi^j)]_{1\le i,j\le p-2}=(p-1)^{p-3},$$ and that $$\det [J(\chi^{2i},\chi^{2j})]_{1\le i,j\le (p-3)/2} =\frac{p+(-1)^{\frac{p+1}{2}}}{4} \left(\frac{p-1}{2}\right)^{\frac{p-5}{2}}(-1)^{\frac{p+1}{2}},$$ where $p$ is an odd prime and $\chi$ is a generator of the group of all multiplicative characters of $\mathbb{F}_p$. ",On Determinants of cyclotomic matrices involving Jacobi sums " The absolute atomic mass of $^{208}$Pb has been determined with a fractional uncertainty of $7\times 10^{-11}$ by measuring the cyclotron-frequency ratio $R$ of $^{208}$Pb$^{41+}$ to $^{132}$Xe$^{26+}$ with the high-precision Penning-trap mass spectrometer Pentatrap and computing the binding energies $E_{\text{Pb}}$ and $E_{\text{Xe}}$ of the missing 41 and 26 atomic electrons, respectively, with the ab initio fully relativistic multi-configuration Dirac-Hartree-Fock (MCDHF) method. $R$ has been measured with a relative precision of $9\times 10^{-12}$. $E_{\text{Pb}}$ and $E_{\text{Xe}}$ have been computed with an uncertainty of 9.1 eV and 2.1 eV, respectively, yielding $207.976\,650\,571(14)$ u (u$=9.314\,941\,024\,2(28)\times 10^{8}$ eV/c$^2$) for the $^{208}$Pb neutral atomic mass. This result agrees within $1.2\sigma$ with that from the Atomic-Mass Evaluation (AME) 2020, while improving the precision by almost two orders of magnitude. The new mass value directly improves the mass precision of 14 nuclides in the region of Z=81-84 and is the most precise mass value with A>200. Thus, the measurement establishes a new region of reference mass values which can be used e.g. for precision mass determination of transuranium nuclides, including the superheavies. ",High-precision mass measurement of doubly magic $^{208}$Pb " Dewatering processes are invariably encountered in the chemical manufacturing and processing of various bioproducts. 
In this study, Computational Fluid Dynamics (CFD) simulations and theory are utilized to model and optimize the dewatering of commercial nanofiber suspensions. The CFD simulations are based on the volume-averaged Navier-Stokes equations while the analytical model is deduced from the empirical Darcy's law for dewatering flows. The results are then successfully compared to experimental data on commercial cellulose suspensions obtained with a Dynamic Drainage Analyzer (DDA). Both the CFD simulations and the analytical model capture the dewatering flow profiles of the commercial suspensions in an experiment utilizing a constant pressure profile. However, a temporally varying pressure profile offers a superior dewatering performance, as indicated by both the simulations and the analytical model. Finally, the analytical model also predicts an optimized number of pressure pulses, minimizing the time required to completely dewater the suspension. ",The effect of pressure pulsing on the mechanical dewatering of nanofiber suspensions " We give a general framework for the treatment of perturbations of types and structures in continuous logic, allowing one to specify which parts of the logic may be perturbed. We prove that separable, elementarily equivalent structures which are approximately $\aleph_0$-saturated up to arbitrarily small perturbations are isomorphic up to arbitrarily small perturbations (where the notion of perturbation is part of the data). As a corollary, we obtain a Ryll-Nardzewski style characterisation of complete theories all of whose separable models are isomorphic up to arbitrarily small perturbations. ",On perturbations of continuous structures " We consider the 3D incompressible Hall-MHD system and prove a stability theorem for global large solutions under a suitable integrability hypothesis in which one of the components is linked to the Hall term. 
As a byproduct, a class of global strong solutions is obtained with large velocities and small initial magnetic fields. Moreover, we prove the local-in-time well-posedness of $H^{2}$-strong solutions, which improves previous regularity conditions on initial data. ",Existence and stability of global large strong solutions for the Hall-MHD system " In recent years, supervised learning (SL) has established itself as the state-of-the-art for data-driven turbulence modeling. In the SL paradigm, models are trained based on a dataset, which is typically computed a priori from a high-fidelity solution by applying the respective filter function, which separates the resolved and the unresolved flow scales. For implicitly filtered large eddy simulation (LES), this approach is infeasible, since here, the employed discretization itself acts as an implicit filter function. As a consequence, the exact filter form is generally not known and thus, the corresponding closure terms cannot be computed even if the full solution is available. The reinforcement learning (RL) paradigm can be used to avoid this inconsistency by training not on a previously obtained training dataset, but instead by interacting directly with the dynamical LES environment itself. This allows the potentially complex implicit LES filter to be incorporated into the training process by design. In this work, we apply a reinforcement learning framework to find an optimal eddy-viscosity for implicitly filtered large eddy simulations of forced homogeneous isotropic turbulence. For this, we formulate the task of turbulence modeling as an RL task with a policy network based on convolutional neural networks that adapts the eddy-viscosity in LES dynamically in space and time based on the local flow state only. We demonstrate that the trained models can provide long-term stable simulations and that they outperform established analytical models in terms of accuracy. 
In addition, the models generalize well to other resolutions and discretizations. We thus demonstrate that RL can provide a framework for consistent, accurate and stable turbulence modeling especially for implicitly filtered LES. ",Deep Reinforcement Learning for Turbulence Modeling in Large Eddy Simulations The gluon propagator is calculated in quenched QCD for two different lattice sizes (16^3x48 and 32^3x64) at beta=6.0. The volume dependence of the propagator in Landau gauge is studied. The smaller lattice is instrumental in revealing finite volume and anisotropic lattice artefacts. Methods for minimising these artefacts are developed and applied to the larger lattice data. New structure seen in the infrared region survives these conservative cuts to the lattice data. This structure serves to rule out a number of models that have appeared in the literature. A fit to a simple analytical form capturing the momentum dependence of the nonperturbative gluon propagator is also reported. ,Gluon Propagator in the Infrared Region " We show how to approximate diffeomorphisms of the closed interval and the circle by elements of Thompson's groups $F$ and $T$, respectively. This is relevant in the context of Jones' continuum limit of discrete multipartite systems and its dynamics. ",Approximating Diffeomorphisms by Elements of Thompson's Groups F and T " We consider a quasilinear parabolic Cauchy problem with spatial anisotropy of orthotropic type and study the spatial localization of solutions. Assuming the initial datum is localized with respect to a coordinate having slow diffusion rate, we bound the corresponding directional velocity of the support along the flow. The expansion rate is shown to be optimal for large times. 
",Anisotropic Sobolev embeddings and the speed of propagation for parabolic equations " The distribution function of the free energy fluctuations in one-dimensional directed polymers with $\delta$-correlated random potential is studied by mapping the replicated problem to the $N$-particle quantum boson system with attractive interactions. We find the full set of eigenfunctions and eigenvalues of this many-body system and perform the summation over the entire spectrum of excited states. It is shown that in the thermodynamic limit the problem is reduced to the Fredholm determinant with the Airy kernel yielding the universal Tracy-Widom distribution, which is known to describe the statistical properties of the Gaussian unitary ensemble as well as many other statistical systems. ",Replica Bethe ansatz derivation of the Tracy-Widom distribution of the free energy fluctuations in one-dimensional directed polymers " A connection between non-perturbative formulations of quantum gravity and perturbative string theory is exhibited, based on a formulation of the non-perturbative dynamics due to Markopoulou. In this formulation the dynamics of spin network states and their generalizations is described in terms of histories which have discrete analogues of the causal structure and many fingered time of Lorentzian spacetimes. Perturbations of these histories turn out to be described in terms of spin systems defined on 2-dimensional timelike surfaces embedded in the discrete spacetime. When the history has a classical limit which is Minkowski spacetime, the action of the perturbation theory is given to leading order by the spacetime area of the surface, as in bosonic string theory. This map between a non-perturbative formulation of quantum gravity and a 1+1 dimensional theory generalizes to a large class of theories in which the group SU(2) is extended to any quantum group or supergroup. 
It is argued that a necessary condition for the non-perturbative theory to have a good classical limit is that the resulting 1+1 dimensional theory defines a consistent and stable perturbative string theory. ",Strings as perturbations of evolving spin-networks " We study atom losses associated with a previously unreported magnetic Feshbach resonance in potassium 39. This resonance is peculiar in that it presents $d$-wave character both in the open and in the closed channels, directly coupled by the dominant spin-exchange interaction. The losses associated with a $d$-wave open-channel resonance present specific signatures such as strong temperature dependence and anisotropic line shapes. The resonance strength and position depend on the axial projection of the orbital angular momentum of the system and are extracted from rigorous multichannel calculations. A two-step model, with an intermediate collision complex being ejected from the trap after collisions with free atoms, makes it possible to reproduce the observed dependence of the loss rate as a function of temperature and magnetic field. ",Quantitative analysis of losses close to a d-wave open-channel Feshbach resonance in Potassium 39 " We argue that Lovelock theories of gravity suffer from shock formation, unlike General Relativity. We consider the propagation of (i) a discontinuity in curvature, and (ii) weak, high frequency, gravitational waves. Such disturbances propagate along characteristic hypersurfaces of a ""background"" spacetime and their amplitude is governed by a transport equation. In GR the transport equation is linear. In Lovelock theories, it is nonlinear and its solutions can blow up, corresponding to the formation of a shock. We show that this effect is absent in some simple cases, e.g., a flat background spacetime, and demonstrate its presence for a plane wave background. We comment on weak cosmic censorship, the evolution of shocks, and the nonlinear stability of Minkowski spacetime, in Lovelock theories. 
",Shock Formation in Lovelock Theories " Grain boundaries (GBs) often control the processing and properties of polycrystalline materials. Here, a potentially transformative research is represented by constructing GB property diagrams as functions of temperature and bulk composition, also called ""complexion diagrams,"" as a general materials science tool on par with phase diagrams. However, a GB has five macroscopic (crystallographic) degrees of freedom (DOFs). It is essentially a ""mission impossible"" to construct property diagrams for GBs as a function of five DOFs by either experiments or modeling. Herein, we combine isobaric semi-grand-canonical ensemble hybrid Monte Carlo and molecular dynamics (hybrid MC/MD) simulations with a genetic algorithm (GA) and deep neural network (DNN) models to tackle this grand challenge. The DNN prediction is ~108 faster than atomistic simulations, thereby enabling the construction of the property diagrams for millions of distinctly different GBs of five DOFs. Notably, excellent prediction accuracies have been achieved for not only symmetric-tilt and twist GBs, but also asymmetric-tilt and mixed tilt-twist GBs; the latter are more complex and much less understood, but they are ubiquitous and often limit the performance properties of real polycrystals as the weak links. The data-driven prediction of GB properties as function of temperature, bulk composition, and five crystallographic DOFs (i.e., in a 7D space) opens a new paradigm. ",Genetic Algorithm-Guided Deep Learning of Grain Boundary Diagrams: Addressing the Challenge of Five Degrees of Freedom " Quantum Electrodynamics (QED) is considered the most accurate theory in the history of science. However, this precision is limited to a single experimental value: the anomalous magnetic moment of the electron (g-factor). The calculation of the electron g-factor was carried out in 1950 by Karplus and Kroll. 
Seven years later, Petermann detected and corrected a serious error in the calculation of a Feynman diagram; however, neither the original calculation nor the subsequent correction was ever published. Therefore, the entire prestige of QED depends on the calculation of a single Feynman diagram (IIc) that has never been published and cannot be independently verified. ",The Unpublished Feynman Diagram IIc " We deal with the distribution of N points placed consecutively around the circle by a fixed angle a. Following the proof of Tony van Ravenstein, we propose a detailed proof of the Steinhaus conjecture, whose result is the following: the N points partition the circle into gaps of at most three different lengths. We study the mathematical notions required for the proof of this theorem revealed during a formal proof carried out in Coq. ",The Three Gap Theorem (Steinhauss Conjecture) " The mathematical functions log(x), exp(x), root[n]x, sin(x), cos(x), tan(x), arcsin(x), arctan(x), x^y, sinh(x), cosh(x), tanh(x) and Gamma(x) have been implemented for arguments x in the real domain in a native Java library on top of the multi-precision BigDecimal representation of floating point numbers. This supports scientific applications where more than the double precision accuracy of the library of the Standard Edition is desired. The full source code is made available under the LGPL v3.0. ",A Java Math.BigDecimal Implementation of Core Mathematical Functions " Sequential ballistic deposition (BD) with next-nearest-neighbor (NNN) interactions in an N-column box is viewed as a time-ordered product of N\times N-matrices consisting of a single sl_2-block which has a random position along the diagonal. We relate the uniform BD growth with the diffusion in the symmetric space H_N=SL(N,R)/SO(N). In particular, the distribution of the maximal height of a growing heap is connected with the distribution of the maximal distance for the diffusion process in H_N. 
The coordinates of H_N are interpreted as the coordinates of particles of the one-dimensional Toda chain. The group-theoretic structure of the system and links to some random matrix models are also discussed. ",Random ballistic growth and diffusion in symmetric spaces " Vulnerabilities related to weak passwords are a pressing global economic and security issue. We report a novel, simple, and effective approach to address the weak password problem. Building upon chaotic dynamics, criticality at phase transitions, CAPTCHA recognition, and computational round-off errors, we design an algorithm that strengthens the security of passwords. The core idea of our method is to split a long and secure password into two components. The first component is memorized by the user. The second component is transformed into a CAPTCHA image and then protected using evolution of a two-dimensional dynamical system close to a phase transition, in such a way that standard brute-force attacks become ineffective. We expect our approach to have wide applications for authentication and encryption technologies. ","The weak password problem: chaos, criticality, and encrypted p-CAPTCHAs" " We follow the mathematical framework proposed by Bouchut and present in this contribution a dual entropy approach for determining equilibrium states of a lattice Boltzmann scheme. This method is expressed in terms of the dual of the mathematical entropy relative to the underlying conservation law. It appears as a good mathematical framework for establishing an ""H-theorem"" for the system of equations with discrete velocities. The dual entropy approach is used with D1Q3 lattice Boltzmann schemes for the Burgers equation. It leads to explicit expressions for three different equilibrium distributions of particles and naturally induces a nonlinear stability condition. Satisfactory numerical results for strong nonlinear shocks and rarefactions are presented. 
We also prove that the dual entropy approach can be applied with a D1Q3 lattice Boltzmann scheme for systems of linear and nonlinear acoustics and we present a numerical result with strong nonlinear waves for nonlinear acoustics. We also establish a negative result: with the present framework, the dual entropy approach cannot be used for the shallow water equations. ",Stable lattice Boltzmann schemes with a dual entropy approach for monodimensional nonlinear waves Some integration techniques for real-valued functions with respect to vector measures with values in Banach spaces (and vice versa) are investigated in order to establish abstract versions of classical theorems of Probability and Stochastic Processes. In particular, the Girsanov Theorem is extended and used with the treated methods. ,A vector Girsanov result and its applications to conditional measures via the Birkhoff integrability " We study the moments finiteness problem for the class of Lipschitz maps $F: [a,b]\rightarrow\mathbb R^n$ with images in a compact Lipschitz triangulable curve $\Gamma$. We apply the obtained results to the center problem for ODEs describing in some cases (including equations with analytic coefficients) the set of universal centers of such equations by vanishing of finitely many moments from their coefficients. ",Moments Finiteness Problem and Center Problem for Ordinary Differential Equations In this proceeding we review some of our recent results for the vector meson electromagnetic form factors and structure functions in the Sakai-Sugimoto model. The latter is a string model that describes many features of the non-perturbative regime of Quantum Chromodynamics in the large $N_c$ limit. ,Electromagnetic scattering of vector mesons in the Sakai-Sugimoto model " We construct an unbounded representative for the shriek class associated to the embeddings of spheres into Euclidean space. 
We equip this unbounded Kasparov cycle with a connection and compute the unbounded Kasparov product with the Dirac operator on $\mathbb R^{n+1}$. We find that the resulting spectral triple for the algebra $C(\mathbb S^n)$ differs from the Dirac operator on the round sphere by a so-called index cycle, whose class in $KK_0(\mathbb C, \mathbb C)$ represents the multiplicative unit. At all points we check that our construction involving the unbounded Kasparov product is compatible with the bounded Kasparov product using Kucerovsky's criterion and we thus capture the composition law for the shriek map for these immersions at the unbounded KK-theoretical level. ",Immersions and the unbounded Kasparov product: embedding spheres into Euclidean space " The complexity of sentences characteristic of biomedical articles poses a challenge to natural language parsers, which are typically trained on large-scale corpora of non-technical text. We propose a text simplification process, bioSimplify, that seeks to reduce the complexity of sentences in biomedical abstracts in order to improve the performance of syntactic parsers on the processed sentences. Syntactic parsing is typically one of the first steps in a text mining pipeline. Thus, any improvement in performance would have a ripple effect across all processing steps. We evaluated our method using a corpus of biomedical sentences annotated with syntactic links. Our empirical results show an improvement of 2.90% for the Charniak-McClosky parser and of 4.23% for the Link Grammar parser when processing simplified sentences rather than the original sentences in the corpus. ",Towards Effective Sentence Simplification for Automatic Processing of Biomedical Text " Cells have evolved efficient strategies to probe their surroundings and navigate through complex environments. 
From metastatic spread in the body to swimming cells in porous materials, escape through narrow constrictions - a key component of any structured environment connecting isolated micro-domains - is a ubiquitous and crucial aspect of cell exploration. Here, using the model microalga Chlamydomonas reinhardtii, we combine experiments and simulations to achieve a tractable realization of the classical Brownian narrow escape problem in the context of active confined matter. Our results differ from those expected for Brownian particles or leaking chaotic billiards and demonstrate that cell-wall interactions substantially modify escape rates and, under generic conditions, expedite spread dynamics. ",Microbial narrow-escape is facilitated by wall interactions " Text-based person search is a sub-task in the field of image retrieval, which aims to retrieve target person images according to a given textual description. The significant feature gap between the two modalities makes this task very challenging. Many existing methods attempt to utilize local alignment to address this problem at the fine-grained level. However, most relevant methods introduce additional models or complicated training and evaluation strategies, which are hard to use in realistic scenarios. In order to facilitate practical application, we propose a simple but effective end-to-end learning framework for text-based person search named TIPCB (i.e., Text-Image Part-based Convolutional Baseline). Firstly, a novel dual-path local alignment network structure is proposed to extract visual and textual local representations, in which images are segmented horizontally and texts are aligned adaptively. Then, we propose a multi-stage cross-modal matching strategy, which eliminates the modality gap from three feature levels, including low level, local level and global level. 
Extensive experiments are conducted on the widely-used benchmark dataset (CUHK-PEDES) and verify that our method outperforms the state-of-the-art methods by 3.69%, 2.95% and 2.31% in terms of Top-1, Top-5 and Top-10. Our code has been released at https://github.com/OrangeYHChen/TIPCB. ",TIPCB: A Simple but Effective Part-based Convolutional Baseline for Text-based Person Search " We study the N-Delta-gamma transition in the Dyson-Schwinger approach. The nucleon and Delta baryons are treated as quark-diquark bound states, where the ingredients of the electromagnetic transition current are computed self-consistently from the underlying dynamics in QCD. Although our approach does not include pion-cloud effects, we find that the electric and Coulomb quadrupole form-factor ratios R_EM and R_SM show good agreement with experimental data. This implies that the deformation from a spherical charge distribution inside both baryons can be traced back to the appearance of p waves in the nucleon and Delta bound-state amplitudes which are a consequence of Poincare covariance. On the other hand, the dominant transition amplitude, i.e. the magnetic dipole transition form factor, underestimates the data by ~25% in the static limit whereas agreement is achieved at larger momentum transfer, which is consistent with missing pion-cloud contributions. We furthermore find that the static properties of the form factors are not very sensitive to a variation of the current-quark mass. ",Nucleon to Delta electromagnetic transition in the Dyson-Schwinger approach " This paper presents a method for synthesizing a reactive program which coordinates the actions of a group of other reactive programs, so that the combined system satisfies a temporal specification of its desired long-term behavior. Traditionally, reactive synthesis has been applied to the construction of a stateful hardware circuit. 
This work is motivated by applications to other domains, such as the IoT (the Internet of Things) and robotics, where it is necessary to coordinate the actions of multiple sensors, devices, and robots. The mathematical model represents such entities as individual processes in Hoare's CSP model. Given a network of interacting entities, called an \emph{environment}, and a temporal specification of long-term behavior, the synthesis method constructs a \emph{coordinator} process (if one exists) that guides the actions of the environment entities so that the combined system is deadlock-free and satisfies the given specification. The main technical challenge is that a coordinator may have only \emph{partial knowledge} of the environment state, due to non-determinism within the environment, and environment actions that are hidden from the coordinator. This is the first method to handle both sources of partial knowledge, and to do so for arbitrary linear temporal logic specifications. It is shown that the coordination synthesis problem is PSPACE-hard in the size of the environment. A prototype implementation is able to synthesize compact solutions for a number of coordination problems. ",Synthesis of coordination programs from linear temporal logic " Software design is one of the key activities in the system development life cycle (SDLC) that ensures the quality of software. Several key areas of design must be taken into consideration when designing software. Software design describes how the software system is decomposed into, and managed as, smaller components. The object-oriented (OO) paradigm has provided the software industry with more reliable and manageable software and software designs. The quality of a software design can be measured through different metrics, such as the Chidamber and Kemerer (CK) design metrics, the MOOD metrics, and the Lorenz and Kidd metrics. 
The CK metrics suite is one of the oldest and most reliable sets of metrics available to the software industry for evaluating OO design. This paper presents an evaluation of the CK metrics and proposes improved CK design metric values to reduce defects during the software design phase. It also examines whether any CK design metric has a significant effect on the total number of defects per module. This is achieved by conducting a survey in two software development companies. ",Evaluation of the Design Metric to Reduce the Number of Defects in Software Development The adjustment of two different selfocs is considered using both exact formulas for the mode-connection coefficients expressed in terms of Hermite polynomials of several variables and a qualitative approach based on the Franck-Condon principle. Several examples of the refractive-index dependence are studied and illustrative plots for these examples are presented. The connection with the tomographic approach to quantum states of a two-dimensional oscillator and the Franck-Condon factors is established. ,Franck-Condon principle and adjustment of optical waveguides with nonhomogeneous refractive indices " The characteristics of shallow hydrogen-like muonium (Mu) states in nominally undoped ZnO and CdS (0001) crystals have been studied close to the surface at depths in the range of 10 nm - 180 nm by using low-energy muons, and in the bulk using conventional muSR. The muon implantation depths are adjusted by tuning the energy of the low-energy muons between 2.5 keV and 30 keV. We find that the bulk ionization energy of the shallow donor-like Mu state is lowered by about 10 meV at a depth of 100 nm, and decreases continuously on approaching the surface. At a depth of about 10 nm the ionization energy is further reduced by 25-30 meV compared to its bulk value. 
We attribute this change to the presence of electric fields due to band bending close to the surface, and we determine the depth profile of the electric field within a simple one-dimensional model. ",Depth dependence of the ionization energy of shallow hydrogen states in ZnO and CdS " We prove the convergence of the spectrum of the generator of the kinetic Brownian motion to the spectrum of the base Laplacian for closed Riemannian manifolds. This generalizes recent work of Kolb--Weich--Wolf [arXiv:2011.06434] on constant curvature surfaces and of Ren--Tao [arXiv:2208.13111] on locally symmetric spaces. As an application, we prove a conjecture of Baudoin--Tardif [arXiv:1604.06813] on the optimal convergence rate to the equilibrium. ",Spectral asymptotics for kinetic Brownian motion on Riemannian manifolds " Topography is the expression of both internal and external processes of a planetary body. Thus hypsometry (the study of topography) is a way to decipher the dynamics of a planet. For that purpose, the statistics of height and slopes may be described by different tools, at local and global scales. We propose here to use the multifractal approach to describe fields of topography. This theory encompasses height, slopes, and other statistical moments of the field, taking into account scale invariance. Contrary to the widely used fractal formalism, the multifractal formalism is able to describe the intermittency of the topography field. As we commonly observe the juxtaposition of rough and smooth regions at a given scale, the multifractal framework seems appropriate for hypsometric studies. Here we analyze global-scale data for the Earth, Mars, Mercury, and the Moon, and find that the statistics are in good agreement with the multifractal theory for scales larger than 10 km. Surprisingly, the analysis shows that all bodies have the same fractal behavior for scales smaller than 10 km. 
We hypothesize that dynamic topography of the mantle may be the explanation at large scales, whereas the smaller-scale behavior may be related to elastic thickness. ",Multifractal topography of several planetary bodies in the Solar System " Unresolved Doppler velocity measurements are not homogeneous across the solar disc (Brookes et al. 1978). We consider one cause of the inhomogeneity that originates from the BiSON instrumentation itself: the intensity of light observed from a region on the solar disc is dependent on the distance between that region on the image of the solar disc formed in the instrument and the detector. The non-uniform weighting affects the realization of the solar noise and the amplitudes of the solar oscillations observed by a detector. An 'offset velocity', which varies with time, is observed in BiSON data and has consequences for the long-term stability of observations. We have attempted to model, in terms of the inhomogeneous weighting, the average observed offset velocity. ",The inhomogeneous response across the solar disc of unresolved Doppler velocity observations " In this paper, the unidirectional pulse propagation equation generalized to structured media is derived. A fast modal transform linking the spatio-temporal representation of the field and its modal distribution is presented. This transform is used to solve the propagation equation with a split-step algorithm. As an example, we present, to the best of our knowledge, the first numerical evidence of the generation of conical waves in highly multimode waveguides. ",Multimodal unidirectional pulse propagation equation " The number of rich galaxy clusters per unit volume is a strong function of Omega, the cosmological density parameter, and sigma_8, the linear extrapolation to z=0 of the density contrast in 8/h Mpc spheres. 
The CNOC cluster redshift survey provides a sample of clusters whose average mass profiles are accurately known, which enables a secure association between cluster numbers and the filtered density perturbation spectrum. We select from the CNOC cluster survey those EMSS clusters with bolometric L_x>=10^45 erg/s and a velocity dispersion exceeding 800 km/s in the redshift ranges 0.18-0.35 and 0.35-0.55. We compare the number density of these subsamples with similar samples at both high and low redshift. Using the Press-Schechter formalism and CDM-style structure models, the density data are described with sigma_8=0.75+/-0.1 and Omega=0.4+/-0.2 (90% confidence). The cluster dynamical analysis gives Omega=0.2+/-0.1, for which sigma_8=0.95+/-0.1 (90% confidence). The predicted cluster density evolution in an Omega=1 CDM model exceeds that observed by more than an order of magnitude. ",Redshift Evolution of Galaxy Cluster Densities " Making informed decisions about model adequacy has been an outstanding issue for regression models with discrete outcomes. Standard assessment tools for such outcomes (e.g. deviance residuals) often show a large discrepancy from the hypothesized pattern even under the true model and are not informative, especially when data are highly discrete (e.g. binary). To fill this gap, we propose a quasi-empirical residual distribution function for general discrete (e.g. ordinal and count) outcomes that serves as an alternative to the empirical Cox-Snell residual distribution function. The assessment tool we propose is a principled approach and does not require injecting noise into the data. When at least one continuous covariate is available, we show asymptotically that the proposed function converges uniformly to the identity function under the correctly specified model, even with highly discrete outcomes. 
Through simulation studies, we demonstrate that the proposed quasi-empirical residual distribution function outperforms commonly used residuals for various model assessment tasks: it is close to the hypothesized pattern under the true model and departs significantly from this pattern under model misspecification, making it an effective assessment tool. ",Assessment of Regression Models with Discrete Outcomes Using Quasi-Empirical Residual Distribution Functions " Phylogenetic networks are notoriously difficult to reconstruct. Here we suggest that it can be useful to view unknown genetic distance along edges in phylogenetic networks as analogous to unknown resistance in electric circuits. This resistance distance, well known in graph theory, turns out to have nice mathematical properties which allow the precise reconstruction of networks. Specifically, we show that the resistance distance for a weighted 1-nested network is Kalmanson, and that the unique associated circular split network fully represents the splits of the original phylogenetic network (or circuit). In fact, this full representation corresponds to a face of the balanced minimal evolution polytope for level-1 networks. Thus the unweighted class of the original network can be reconstructed either by the greedy algorithm neighbor-net or by linear programming over a balanced minimal evolution polytope. We begin the study of 2-nested networks with both minimum path and resistance distance, and include some counting results for 2-nested networks. ",Phylogenetic networks as circuits with resistance distance " In atoms, spin-orbit coupling (SOC) cannot raise the angular momentum above a maximum value or lower it below a minimum. Here we show that this need not be the case in materials built from nanoscale structures, including multi-nuclear coordination complexes, materials with decorated lattices, or atoms on surfaces. 
In such cyclic molecules the electronic spin couples to currents running around the molecule. For odd-fold symmetric molecules (e.g., odd-membered rings) the SOC is highly analogous to the atomic case, but for even-fold symmetric molecules every angular momentum state can be both raised and lowered. These differences arise because for odd-fold symmetric molecules the maximum and minimum molecular orbital angular momentum states are time-reversal conjugates, whereas for even-fold symmetric molecules they are aliases of the same single state. We show, from first-principles calculations, that in suitable molecules this molecular SOC is large compared to the energy differences between frontier molecular orbitals. Finally, we show that, when electronic correlations are strong, molecular SOC can cause highly anisotropic exchange interactions, and we discuss how this can lead to effective spin models with compass Hamiltonians. ",Spin-orbit coupling and strong electronic correlations in cyclic molecules " We study the problem of transfer learning, observing that previous efforts to understand its information-theoretic limits do not fully exploit the geometric structure of the source and target domains. In contrast, our study first illustrates the benefits of incorporating a natural geometric structure within a linear regression model, which corresponds to the generalized eigenvalue problem formed by the Gram matrices of both domains. We next establish a finite-sample minimax lower bound, propose a refined model interpolation estimator that enjoys a matching upper bound, and then extend our framework to multiple source domains and generalized linear models. Surprisingly, as long as information is available on the distance between the source and target parameters, negative transfer does not occur. Simulation studies show that our proposed interpolation estimator outperforms state-of-the-art transfer learning methods in both moderate- and high-dimensional settings. 
",A Class of Geometric Structures in Transfer Learning: Minimax Bounds and Optimality " Well-known principles of induction include monotone induction and different sorts of non-monotone induction such as inflationary induction, induction over well-founded sets and iterated induction. In this work, we define a logic formalizing induction over well-founded sets and monotone and iterated induction. Just as the principle of positive induction has been formalized in FO(LFP), and the principle of inflationary induction has been formalized in FO(IFP), this paper formalizes the principle of iterated induction in a new logic for Non-Monotone Inductive Definitions (ID-logic). The semantics of the logic is strongly influenced by the well-founded semantics of logic programming. Our main result concerns the modularity properties of inductive definitions in ID-logic. Specifically, we formulate conditions under which a simultaneous definition $\D$ of several relations is logically equivalent to a conjunction of smaller definitions $\D_1 \land ... \land \D_n$ with disjoint sets of defined predicates. The difficulty of the result comes from the fact that predicates $P_i$ and $P_j$ defined in $\D_i$ and $\D_j$, respectively, may be mutually connected by simultaneous induction. Since logic programming and abductive logic programming under well-founded semantics are proper fragments of our logic, our modularity results are applicable there as well. ",A Logic for Non-Monotone Inductive Definitions " We consider a spatial multi-type branching model in which individuals migrate in geographic space according to random walks and reproduce according to a state-dependent branching mechanism which can be sub-, super- or critical depending on the local intensity of individuals of the different types. 
The model is a Lotka-Volterra type model with a spatial component and is related to two models studied in \cite{BlathEtheridgeMeredith2007} as well as to earlier work in \cite{Etheridge2004} and in \cite{NeuhauserPacala1999}. Our main focus is on the diffusion limit of small mass, locally many individuals and rapid reproduction. This system differs from spatial critical branching systems since it is not density preserving and the densities for large times do not depend on the initial distribution but mainly on the carrying capacities. We prove the existence of the infinite particle model and the system of interacting diffusions as solutions of martingale problems or systems of stochastic equations. In the exchangeable case, in which the parameters are not type-dependent, we show uniqueness of the solutions. For that purpose we establish a new exponential duality. ",Multi-type spatial branching models for local self-regulation I: Construction and an exponential duality " Integrating renewable energy sources into the power grid is becoming increasingly important as the world moves towards a more sustainable energy future in line with SDG 7. However, the intermittent nature of renewable energy sources can make it challenging to manage the power grid and ensure a stable supply of electricity, which is crucial for achieving SDG 9. In this paper, we propose a deep learning-based approach for predicting energy demand in a smart power grid, which can improve the integration of renewable energy sources by providing accurate predictions of energy demand. Our approach aligns with SDG 13 on climate action, enabling more efficient management of renewable energy resources. We use long short-term memory networks, well-suited for time series data, to capture complex patterns and dependencies in energy demand data. 
The proposed approach is evaluated using four historical short-term energy demand datasets from different energy distribution companies, including American Electric Power, Commonwealth Edison, Dayton Power and Light, and Pennsylvania-New Jersey-Maryland Interconnection. The proposed model is also compared with three other state-of-the-art forecasting algorithms: Facebook Prophet, Support Vector Regression, and Random Forest Regression. The experimental results show that the proposed REDf model can accurately predict energy demand with a mean absolute error of 1.4%, indicating its potential to enhance the stability and efficiency of the power grid and contribute to achieving SDGs 7, 9, and 13. The proposed model also has the potential to effectively manage the integration of renewable energy sources. ","Predicting Short Term Energy Demand in Smart Grid: A Deep Learning Approach for Integrating Renewable Energy Sources in Line with SDGs 7, 9, and 13" " Ultracold atoms in optical lattices are a flexible and effective platform for quantum precision measurement, and the lifetime of high-band atoms is an essential parameter for the performance of quantum sensors. In this work, we investigate the relationship between the lattice depth and the lifetime of D-band atoms in a triangular optical lattice and show that there is an optimal lattice depth for the maximum lifetime. After loading the Bose-Einstein condensate into the D-band of the optical lattice by the shortcut method, we observe the atomic distribution in quasi-momentum space for different evolution times, and measure the atomic lifetime at the D-band for different lattice depths. The lifetime is maximized at an optimal lattice depth, where the overlaps between the wave function of the D-band and other bands (mainly the S-band) are minimized. Additionally, we discuss the influence of atomic temperature on the lifetime. These experimental results are in agreement with our numerical simulations. 
This work paves the way to improving the coherence properties of optical lattices and has implications for the development of quantum precision measurement, quantum communication, and quantum computing. ",Optimal lattice depth on lifetime of D-band ultracold atoms in a triangular optical lattice " We present a theory of conductance and noise in generic mesoscopic conductors connected in parallel, and we demonstrate that the additivity of conductance and of shot noise arises as a sole property of the junctions connecting the two (or more) conductors in parallel. Consequences on the functionality of devices based on the Aharonov-Bohm effect are also drawn. ",Theory of conductance and noise additivity in parallel mesoscopic conductors " In this paper, we show a connection between a certain online low-congestion routing problem and online prediction of graph labeling. More specifically, we prove that if there exists a routing scheme that guarantees a congestion of $\alpha$ on any edge, there exists an online prediction algorithm with mistake bound $\alpha$ times the cut size, which is the size of the cut induced by the label partitioning of graph vertices. With the previously known bound of $O(\log n)$ for $\alpha$ for the routing problem on trees with $n$ vertices, we obtain an improved prediction algorithm for graphs with high effective resistance. In contrast to previous approaches that move the graph problem into problems in vector space using the graph Laplacian and rely on the analysis of the perceptron algorithm, our proofs are purely combinatorial. Furthermore, our approach directly generalizes to the case where labels are not binary. 
",Low congestion online routing and an improved mistake bound for online prediction of graph labeling " There are many data sources available that report related variables of interest that are also referenced over geographic regions and time; however, there are relatively few general statistical methods that one can readily use that incorporate these multivariate-spatio-temporal dependencies. As such, we introduce the multivariate-spatio-temporal mixed effects model (MSTM) to analyze areal data with multivariate-spatio-temporal dependencies. The proposed MSTM extends the notion of Moran's I basis functions to the multivariate-spatio-temporal setting. This extension leads to several methodological contributions including extremely effective dimension reduction, a dynamic linear model for multivariate-spatio-temporal areal processes, and the reduction of a high-dimensional parameter space using a novel parameter model. Several examples are used to demonstrate that the MSTM provides an extremely viable solution to many important problems found in different and distinct corners of the spatio-temporal statistics literature including: modeling nonseparable and nonstationary covariances, combining data from multiple repeated surveys, and analyzing massive multivariate-spatio-temporal datasets. ",Mixed Effects Modeling for Areal Data that Exhibit Multivariate-Spatio-Temporal Dependencies " Matrix rank and inertia optimization problems are a class of discontinuous optimization problems in which the decision variables are matrices running over certain matrix sets, while the ranks and inertias of the variable matrices are taken as integer-valued objective functions. 
In this paper, we establish a group of explicit formulas for calculating the maximal and minimal values of the rank and inertia objective functions of the Hermitian matrix expression $A_1 - B_1XB_1^{*}$ subject to the common Hermitian solution of a pair of consistent matrix equations $B_2XB^{*}_2 = A_2$ and $B_3XB_3^{*} = A_3$, and the Hermitian solution of the consistent matrix equation $B_4X= A_4$, respectively. Many consequences are obtained; in particular, necessary and sufficient conditions are established for the triple matrix equations $B_1XB^{*}_1 =A_1$, $B_2XB^{*}_2 = A_2$ and $B_3XB^{*}_3 = A_3$ to have a common Hermitian solution, as well as necessary and sufficient conditions for the two matrix equations $B_1XB^{*}_1 =A_1$ and $B_4X = A_4$ to have a common Hermitian solution. ",Formulas for calculating the extremal ranks and inertias of a matrix-valued function subject to matrix equation restrictions " Polar codes achieve outstanding error correction performance when using successive cancellation list (SCL) decoding with cyclic redundancy check. A larger list size brings better decoding performance and is essential for practical applications such as 5G communication networks. However, the decoding speed of SCL decreases with increased list size. Adaptive SCL (A-SCL) decoding can greatly enhance the decoding speed, but the decoding latency for each codeword is different, so A-SCL is not a good choice for hardware-based applications. In this paper, a hardware-friendly two-staged adaptive SCL (TA-SCL) decoding algorithm is proposed such that a constant input data rate is supported even if the list size for each codeword is different. A mathematical model based on a Markov chain is derived to explore the bounds of its decoding performance. Simulation results show that the throughput of TA-SCL is tripled for good channel conditions with negligible performance degradation and hardware overhead. 
",A Two-staged Adaptive Successive Cancellation List Decoding for Polar Codes " Recently, the popularity and wide use of last-generation video conferencing technologies created exponential growth in their market size. Such technology allows participants in different geographic regions to have a virtual face-to-face meeting. Additionally, it enables users to employ a virtual background to conceal their own environment due to privacy concerns or to reduce distractions, particularly in professional settings. Nevertheless, in scenarios where the users should not hide their actual locations, they may mislead other participants by claiming their virtual background as a real one. Therefore, it is crucial to develop tools and strategies to detect the authenticity of the considered virtual background. In this paper, we present a detection strategy to distinguish between real and virtual video conferencing user backgrounds. We demonstrate that our detector is robust against two attack scenarios. The first scenario considers the case where the detector is unaware of the attacks, and in the second scenario, we make the detector aware of the adversarial attacks, which we refer to as Adversarial Multimedia Forensics (i.e., the forensically-edited frames are included in the training set). Given the lack of publicly available datasets of virtual and real backgrounds for video conferencing, we created our own dataset and made it publicly available [1]. Then, we demonstrate the robustness of our detector against different adversarial attacks that the adversary considers. Ultimately, our detector performs well against the CRSPAM1372 [2] features, and against post-processing operations such as geometric transformations with different quality factors that the attacker may choose. Moreover, our performance results show that we can identify a real from a virtual background with an accuracy of 99.80%. 
",Real or Virtual: A Video Conferencing Background Manipulation-Detection System " Supernova remnant (SNR) blast shells can reach the flow speed $v_s = 0.1 c$ and shocks form at their front. Instabilities driven by shock-reflected ion beams heat the plasma in the foreshock, which may inject particles into diffusive acceleration. The ion beams can have the speed $v_b \approx v_s$. For $v_b \ll v_s$ the Buneman or upper-hybrid instabilities dominate, while for $v_b \gg v_s$ the filamentation and mixed modes grow faster. Here we examine the waves relevant for $v_b \approx v_s$ and how they interact nonlinearly with the particles. The collision of two plasma clouds at the speed $v_s$, which convect with them magnetic fields oriented perpendicular to their flow velocity vector, is modelled with particle-in-cell (PIC) simulations. One simulation models equally dense clouds and the other one uses a density ratio of 2. Both simulations show upper-hybrid waves that are planar over large spatial intervals and that accelerate electrons to $\sim$ 10 keV. The symmetric collision yields only short oscillatory wave pulses, while the asymmetric collision also produces large-scale electric fields, probably through a magnetic pressure gradient. The large-scale fields destroy the electron phase space holes and they accelerate the ions, which facilitates the formation of a precursor shock. ",Two-dimensional PIC simulations of ion-beam instabilities in Supernova-driven plasma flows " Delta lenses are a kind of morphism between categories which are used to model bidirectional transformations between systems. Classical state-based lenses, also known as very well-behaved lenses, are both algebras for a monad and coalgebras for a comonad. Delta lenses generalise state-based lenses, and while delta lenses have been characterised as certain algebras for a semi-monad, it is natural to ask if they also arise as coalgebras. 
This short paper establishes that delta lenses are coalgebras for a comonad, by showing that the forgetful functor from the category of delta lenses over a base, to the category of cofunctors over a base, is comonadic. The proof utilises a diagrammatic approach to delta lenses, and clarifies several results in the literature concerning the relationship between delta lenses and cofunctors. Interestingly, while this work does not generalise the corresponding result for state-based lenses, it does provide new avenues for exploring lenses as coalgebras. ",Delta lenses as coalgebras for a comonad " We look for a non-Gaussian signal in the WMAP 5-year temperature anisotropy maps by performing a needlet-based data analysis. We use the foreground-reduced maps obtained by the WMAP team through the optimal combination of the W, V and Q channels, and perform realistic non-Gaussian simulations in order to constrain the non-linear coupling parameter $\fnl$. We apply a third-order estimator of the needlet coefficients skewness and compute the $\chi^2$ statistics of its distribution. We obtain $-80<\fnl<120$ at 95% confidence level, which is consistent with a Gaussian distribution and comparable to previous constraints on the non-linear coupling. We then develop an estimator of $\fnl$ based on the same simulations and we find consistent constraints on primordial non-Gaussianity. We finally compute the three-point correlation function in needlet space: the constraints on $\fnl$ improve to $-50<\fnl<110$ at 95% confidence level. ",Constraints on Primordial Non-Gaussianity from a Needlet Analysis of the WMAP-5 Data " Attention networks have successfully boosted accuracy in various vision problems. Previous works lay emphasis on designing a new self-attention module and follow the traditional paradigm that individually plugs the modules into each layer of a network. However, such a paradigm inevitably increases the extra parameter cost with the growth of the number of layers. 
From the dynamical system perspective of the residual neural network, we find that the feature maps from the layers of the same stage are homogeneous, which inspires us to propose a novel and simple framework, called the dense and implicit attention (DIA) unit, that shares a single attention module throughout different network layers. With our framework, the parameter cost is independent of the number of layers and we further improve the accuracy of existing popular self-attention modules with significant parameter reduction without any elaborated model crafting. Extensive experiments on benchmark datasets show that the DIA is capable of emphasizing layer-wise feature interrelation and thus leads to significant improvement in various vision tasks, including image classification, object detection, and medical applications. Furthermore, the effectiveness of the DIA unit is demonstrated by novel experiments where we destabilize the model training by (1) removing the skip connection of the residual neural network, (2) removing the batch normalization of the model, and (3) removing all data augmentation during training. In these cases, we verify that DIA has a strong regularization ability to stabilize the training, i.e., the dense and implicit connections formed by our method can effectively recover and enhance the information communication across layers and the value of the gradient, thus alleviating the training instability. ",Layer-wise Shared Attention Network on Dynamical System Perspective " Wireless sensor networks (WSNs) suffer from the hot spot problem, where the sensor nodes closest to the base station need to relay more packets than the nodes farther away from the base station. Thus, the lifetime of the sensor network depends on these closest nodes. Clustering methods are used to extend the lifetime of a wireless sensor network. 
However, current clustering algorithms usually utilize two techniques: selecting cluster heads with more residual energy, and rotating cluster heads periodically to distribute the energy consumption among nodes in each cluster and lengthen the network lifetime. Most of the algorithms use random selection for selecting the cluster heads. Here, we propose a novel trajectory clustering technique for selecting cluster heads in WSNs. Our algorithm selects the cluster heads based on traffic and rotates them periodically. It provides the first trajectory-based clustering technique for selecting cluster heads, mitigating the hot spot problem and prolonging the network lifetime. ",A Novel Trajectory Clustering technique for selecting cluster heads in Wireless Sensor Networks " Matter effects modify the mixing and the effective masses of neutrinos in a way which depends on the neutrino mass hierarchy. Consequently, for normal and inverted hierarchies the oscillations and flavor conversion results are different. Sensitivity to the mass hierarchy appears whenever the matter effects on the 1-3 mixing and mass splitting become substantial. This happens in supernovae in a wide energy range and in the matter of the Earth. The Earth's density profile is a multi-layer medium where resonance and parametric enhancements of oscillations occur. The enhancement is realized in neutrino (antineutrino) channels for the normal (inverted) mass hierarchy. Multi-megaton scale under-ice (water) atmospheric neutrino detectors with low energy threshold can establish the mass hierarchy with $(3 - 10) \sigma$ confidence level in a few years. The main challenges of these experiments are discussed and various ways to improve sensitivity are outlined. In particular, inelasticity measurements will allow the significance of the hierarchy identification to be increased by $20 - 50 \%$. 
",Neutrino mass hierarchy and matter effects " We describe a new technique for heterodyne spectroscopy, which we call Least-Squares Frequency Switching, or LSFS. This technique avoids the need for a traditional reference spectrum, which--when combined with the on-source spectrum--introduces both noise and systematic artifacts such as ``baseline wiggles''. In contrast, LSFS derives the spectrum directly, and in addition the instrumental gain profile. The resulting spectrum retains nearly the full theoretical sensitivity and introduces no systematic artifacts. Here we discuss mathematical details of the technique and use numerical experiments to explore optimum observing schemas. We outline a modification suitable for computationally difficult cases as the number of spectral channels grows beyond several thousand. We illustrate the method with three real-life examples. In one of practical interest, we created a large contiguous bandwidth by aligning three smaller bandwidths end-to-end; radio astronomers are often faced with the need for a larger contiguous bandwidth than is provided by the available correlator. ",A New Technique for Heterodyne Spectroscopy: Least-Squares Frequency Switching (LSFS) We present several conjectures on multiple q-zeta values and on the role they play in certain problems of enumerative geometry. ,Hilbert schemes and multiple q-zeta values " Mixtures of experts probabilistically divide the input space into regions, where the assumptions of each expert, or conditional model, need only hold locally. Combined with Gaussian process (GP) experts, this results in a powerful and highly flexible model. We focus on alternative mixtures of GP experts, which model the joint distribution of the inputs and targets explicitly. We highlight issues of this approach in multi-dimensional input spaces, namely, poor scalability and the need for an unnecessarily large number of experts, degrading the predictive performance and increasing uncertainty. 
We construct a novel model to address these issues through a nested partitioning scheme that automatically infers the number of components at both levels. Multiple response types are accommodated through a generalised GP framework, while multiple input types are included through a factorised exponential family structure. We show the effectiveness of our approach in estimating a parsimonious probabilistic description of both synthetic data of increasing dimension and an Alzheimer's challenge dataset. ",Enriched Mixtures of Gaussian Process Experts " Marketplaces for distributing software products and services have been gaining increasing popularity. GitHub, which is best known for its version control functionality through Git, launched its own marketplace in 2017. GitHub Marketplace hosts third-party apps and actions to automate workflows in software teams. Currently, this marketplace hosts 440 Apps and 7,878 Actions across 32 different categories. Overall, 419 third-party developers released their apps on this platform, which 111 distinct customers adopted. The popularity and accessibility of GitHub projects have made this platform and the projects hosted on it one of the most frequent subjects for experimentation in software engineering research. A simple Google Scholar search shows that 24,100 research papers have discussed GitHub within the software engineering field since 2017, but none have looked into the marketplace. The GitHub Marketplace provides a unique source of information on the tools used by practitioners in the open source software (OSS) ecosystem for automating their projects' workflows. In this study, we (i) mine and provide a descriptive overview of the GitHub Marketplace, (ii) perform a systematic mapping of research studies in automation for open source software, and (iii) compare the state of the art with the state of the practice on the automation tools. 
We conclude the paper by discussing the potential of GitHub Marketplace for knowledge mobilization and collaboration within the field. This is the first such study of the GitHub Marketplace. ",GitHub Marketplace for Practitioners and Researchers to Date: A Systematic Analysis of the Knowledge Mobilization Gap in Open Source Software Automation " We demonstrate edge-emitting exciton-polariton (polariton) lasing from 5 to 300 K and amplification of non-radiative guided polariton modes within ZnO waveguides. The mode dispersion below and above the lasing threshold is directly measured using gratings present on top of the sample, fully demonstrating the polaritonic nature of the lasing modes. The threshold is found to be similar to that of radiative polaritons in planar ZnO microcavities. These results open broad perspectives for guided polaritonics by allowing an easier and more straightforward implementation of polariton integrated circuits exploiting fast propagating polaritons. ",Edge-emitting polariton laser and amplifier based on a ZnO waveguide " The ability to generalize experimental results from randomized control trials (RCTs) across locations is crucial for informing policy decisions in targeted regions. Such generalization is often hindered by the lack of identifiability due to unmeasured effect modifiers that compromise direct transport of treatment effect estimates from one location to another. We build upon sensitivity analysis in observational studies and propose an optimization procedure that allows us to obtain bounds on the treatment effects in targeted regions. Furthermore, we construct more informative bounds by balancing on the moments of covariates. In simulation experiments, we show that the covariate balancing approach is promising for obtaining sharper identification intervals. 
",Covariate Balancing Sensitivity Analysis for Extrapolating Randomized Trials across Locations " We present a mainly analytical study of the entanglement spectrum of Bernal-stacked graphene bilayers in the presence of trigonal warping in the energy spectrum. Upon tracing out one layer, the entanglement spectrum shows qualitative geometric differences from the energy spectrum of a graphene monolayer. However, topological quantities such as Berry phase type contributions to Chern numbers agree. The latter analysis involves not only the eigenvalues of the entanglement Hamiltonian but also its eigenvectors. We also discuss the entanglement spectra resulting from tracing out other sublattices. As a technical basis of our analysis we provide closed analytical expressions for the full eigensystem of bilayer graphene in the entire Brillouin zone with a trigonally warped spectrum. ",Trigonal Warping in Bilayer Graphene: Energy versus Entanglement Spectrum " This paper discusses the adaptive sampling problem in a nonholonomic mobile robotic sensor network for efficiently monitoring a spatial field. It is proposed to employ a Gaussian process to model a spatial phenomenon and predict it at unmeasured positions, which enables the sampling optimization problem to be formulated using the log determinant of a predicted covariance matrix at the next sampling locations. The control, movement and nonholonomic dynamics constraints of the mobile sensors are also considered in the adaptive sampling optimization problem. In order to tackle the nonlinearity and nonconvexity of the objective function in the optimization problem, we first exploit the linearized alternating direction method of multipliers (L-ADMM), which can effectively simplify the objective function, though it is computationally expensive since a nonconvex problem needs to be solved exactly in each iteration. 
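The log-determinant sampling criterion mentioned in the adaptive sampling abstract can be illustrated with a toy Gaussian process computation (a minimal sketch, not the paper's implementation; the RBF kernel, its length scale, the noise level and the 1-D candidate locations are all illustrative assumptions):

```python
import numpy as np

def rbf_kernel(a, b, length=1.0):
    # Squared-exponential kernel between two sets of 1-D locations.
    d = a[:, None] - b[None, :]
    return np.exp(-0.5 * (d / length) ** 2)

def logdet_posterior_cov(measured, candidates, noise=1e-2):
    # Log-determinant of the GP posterior covariance at candidate
    # sampling locations, conditioned on already-measured locations.
    # Smaller values mean less remaining predictive uncertainty.
    K_mm = rbf_kernel(measured, measured) + noise * np.eye(len(measured))
    K_cm = rbf_kernel(candidates, measured)
    K_cc = rbf_kernel(candidates, candidates)
    post = K_cc - K_cm @ np.linalg.solve(K_mm, K_cm.T)
    _, logdet = np.linalg.slogdet(post + noise * np.eye(len(candidates)))
    return logdet

measured = np.array([0.0, 1.0, 2.0])
near = np.array([0.1, 1.1])  # candidates close to existing samples
far = np.array([5.0, 7.0])   # candidates in unexplored territory
# Sampling far from previous measurements leaves more predictive
# uncertainty, hence a larger log-determinant objective value.
assert logdet_posterior_cov(measured, far) > logdet_posterior_cov(measured, near)
```

An informative sampling strategy would therefore steer the sensors toward locations where this log-determinant is largest, subject to the motion constraints discussed in the abstract.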
We then propose a novel approach called the successive convexified ADMM (SC-ADMM) that sequentially convexifies the nonlinear dynamic constraints so that the original optimization problem can be split into convex subproblems. It is noted that both the L-ADMM algorithm and our SC-ADMM approach can solve the sampling optimization problem in either a centralized or a distributed manner. We validated the proposed approaches in 1000 experiments in a synthetic environment with a real-world dataset, where the obtained results suggest that both the L-ADMM and SC-ADMM techniques can provide good accuracy for the monitoring purpose. However, our proposed SC-ADMM approach computationally outperforms the L-ADMM counterpart, demonstrating its better practicality. ",ADMM-based Adaptive Sampling Strategy for Nonholonomic Mobile Robotic Sensor Networks We investigate the existence and especially the linear stability of single and multiple-charge quantized vortex states of nonlinear Schroedinger equations in the presence of a periodic and a parabolic potential in two spatial dimensions. The study is motivated by the examination of pancake-shaped Bose-Einstein condensates in the presence of magnetic and optical confinement. A two-parameter space of the condensate's chemical potential versus the periodic potential's strength is scanned for both single- and double-quantized vortex states located at a local minimum or a local maximum of the lattice. Triply charged vortices are also briefly discussed. Single-charged vortices are found to be stable for cosinusoidal potentials and unstable for sinusoidal ones above a critical strength. Higher charge vortices are more unstable for both types of potentials and their dynamical evolution leads to breakup into single-charged vortices. 
,Stability of Quantized Vortices in a Bose-Einstein Condensate Confined in an Optical Lattice " In this article, we introduce the concept of samplets by transferring the construction of Tausch-White wavelets to the realm of data. This way we obtain a multilevel representation of discrete data which directly enables data compression, detection of singularities and adaptivity. Applying samplets to represent kernel matrices, as they arise in kernel based learning or Gaussian process regression, we end up with quasi-sparse matrices. By thresholding small entries, these matrices are compressible to O(N log N) relevant entries, where N is the number of data points. This feature allows for the use of fill-in reducing reorderings to obtain a sparse factorization of the compressed matrices. Besides the comprehensive introduction to samplets and their properties, we present extensive numerical studies to benchmark the approach. Our results demonstrate that samplets mark a considerable step in the direction of making large data sets accessible for analysis. ",Samplets: A new paradigm for data compression " The recent increase in popularity of volumetric representations for scene reconstruction and novel view synthesis has put renewed focus on animating volumetric content at high visual quality and in real-time. While implicit deformation methods based on learned functions can produce impressive results, they are `black boxes' to artists and content creators, they require large amounts of training data to generalise meaningfully, and they do not produce realistic extrapolations outside the training data. In this work we solve these issues by introducing a volume deformation method which is real-time, easy to edit with off-the-shelf software and can extrapolate convincingly. To demonstrate the versatility of our method, we apply it in two scenarios: physics-based object deformation and telepresence where avatars are controlled using blendshapes. 
We also perform thorough experiments showing that our method compares favourably to both volumetric approaches combined with implicit deformation and methods based on mesh deformation. ","VolTeMorph: Realtime, Controllable and Generalisable Animation of Volumetric Representations" " We derive the nonlinear sigma model as a peculiar dimensional reduction of Yang-Mills theory. In this framework, pions are reformulated as higher-dimensional gluons arranged in a kinematic configuration that only probes cubic interactions. This procedure yields a purely cubic action for the nonlinear sigma model which exhibits a symmetry enforcing color-kinematics duality. Remarkably, the associated kinematic algebra originates directly from the Poincare algebra in higher dimensions. Applying the same construction to gravity yields a new quartic action for Born-Infeld theory and, applied once more, a cubic action for the special Galileon theory. Since the nonlinear sigma model and special Galileon are subtly encoded in the cubic sectors of Yang-Mills theory and gravity, respectively, their double copy relationship is automatic. ",Pions as Gluons in Higher Dimensions " The Wide-field Infrared Survey Explorer Preliminary Data Release Source Catalog contains over 257 million objects. We describe the method used to flag variable source candidates in the Catalog. Using a method based on the chi-square of single-exposure flux measurements, we generated a variability flag for each object, and have identified almost 460,000 candidate sources that exhibit significant flux variability with greater than \sim 7{\sigma} confidence. We discuss the flagging method in detail and describe its benefits and limitations. We also present results from the flagging method, including example light curves of several types of variable sources including Algol-type eclipsing binaries, RR Lyr, W UMa, and a blazar candidate. 
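The chi-square flagging idea, testing whether single-exposure fluxes are consistent with a constant source, can be sketched as follows (a toy illustration, not the WISE pipeline; the Gaussian approximation used for the 7-sigma threshold and the simple error model are assumptions):

```python
import numpy as np

def variability_chi2(flux, flux_err):
    # Chi-square of single-exposure fluxes against a constant source,
    # modeled by the inverse-variance weighted mean flux.
    w = 1.0 / flux_err**2
    mean = np.sum(w * flux) / np.sum(w)
    return np.sum(((flux - mean) / flux_err) ** 2)

def is_variable(flux, flux_err, nsigma=7.0):
    # Gaussian approximation to the chi-square tail: mean = dof,
    # std = sqrt(2 * dof). Flag sources exceeding nsigma above the mean.
    dof = len(flux) - 1
    threshold = dof + nsigma * np.sqrt(2.0 * dof)
    return variability_chi2(flux, flux_err) > threshold

rng = np.random.default_rng(0)
err = np.full(20, 0.05)
steady = 1.0 + rng.normal(0.0, 0.05, 20)  # constant source + noise
eclipsing = steady.copy()
eclipsing[::5] -= 0.8                     # periodic deep dips, Algol-like
assert not is_variable(steady, err)
assert is_variable(eclipsing, err)
```

A steady source scatters only within its errors, so its chi-square stays near the number of degrees of freedom, while genuine variability inflates it far past the threshold.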
",Variability Flagging in the WISE Preliminary Data Release " In this paper we present the identification of two periodic X-ray signals coming from the direction of the Small Magellanic Cloud (SMC). On detection with the Rossi X-ray Timing Explorer (RXTE), the 175.4s and 85.4s pulsations were considered to originate from new Be/X-ray binary (BeXRB) pulsars with unknown locations. Using rapid follow-up INTEGRAL and XMM-Newton observations, we show the first pulsar (designated SXP175) to be coincident with a candidate high-mass X-ray binary (HMXB) in the northern bar region of the SMC undergoing a small Type II outburst. The orbital period (87d) and spectral class (B0-B0.5IIIe) of this system are determined and presented here for the first time. The second pulsar is shown not to be new at all, but is consistent with being SXP91.1 - a pulsar discovered at the very beginning of the 13 year long RXTE key monitoring programme of the SMC. Whilst it is theoretically possible for accreting neutron stars to change spin period so dramatically over such a short time, the X-ray and optical data available for this source suggest this spin-up is continuous during long phases of X-ray quiescence, where accretion driven spin-up of the neutron star should be minimal. ",Contrasting behaviour from two Be/X-ray binary pulsars: insights into differing neutron star accretion modes " In this paper, we evaluate and analyze the impact of different network loads and varying numbers of nodes on distance vector and link state routing algorithms. We select three well-known proactive protocols: Destination Sequenced Distance Vector (DSDV) operates on distance vector routing, while Fisheye State Routing (FSR) and Optimized Link State Routing (OLSR) protocols are based on link state routing. Further, we evaluate and compare the effects of changing the routing strategies of these algorithms on protocol performance. We also enhance the selected protocols to achieve high performance. 
We take throughput, End-to-End Delay (E2ED) and Normalized Routing Load (NRL) as performance metrics for evaluation and comparison of the chosen protocols, both in their default and enhanced versions. Based upon extensive simulations in NS-2, we compare and discuss performance trade-offs of the protocols, i.e., how a protocol achieves high packet delivery by paying some cost in the form of increased E2ED and/or routing overhead. FSR, owing to its scope routing technique, performs well at high data rates, while OLSR is more scalable in denser networks due to limited retransmissions through Multi-Point Relays (MPRs). ",Evaluating Wireless Proactive Routing Protocols under Scalability and Traffic Constraints " Geometrical shock dynamics, also called CCW theory, yields approximate equations for shock propagation in which only the conditions at the shock appear explicitly; the post-shock flow is presumed approximately uniform and enters implicitly via a Riemann invariant. The nonrelativistic theory, formulated by G. B. Whitham and others, matches many experimental results surprisingly well. Motivated by astrophysical applications, we adapt the theory to ultra-relativistic shocks advancing into an ideal fluid whose pressure is negligible ahead of the shock, but one third of its proper energy density behind the shock. Exact results are recovered for some self-similar cylindrical and spherical shocks with power-law pre-shock density profiles. Comparison is made with numerical solutions of the full hydrodynamic equations. We review relativistic vorticity and circulation. In an ultrarelativistic ideal fluid, circulation can be defined so that it changes only at shocks, notwithstanding entropy gradients in smooth parts of the flow. ",Ultra-relativistic geometrical shock dynamics and vorticity " Using exact-diagonalization of small clusters and Dyson equation embedding techniques, the conductance $G$ of linear arrays of quantum dots is investigated. 
The Hubbard interaction induces Kondo peaks at low temperatures for an odd number of dots. Remarkably, the Kondo peak is split in half by a deep minimum, and the conductance vanishes at one value of the gate voltage. Tentative explanations for this unusual effect are proposed, including an interference process between two channels contributing to $G$, with one more and one less particle than the exactly-solved cluster ground-state. The Hubbard interaction and fermionic statistics of electrons also appear to be important to understand this phenomenon. Although most of the calculations used a particle-hole symmetric Hamiltonian and formalism, results also presented here show that the conductance dip exists even when this symmetry is broken. The conductance cancellation effect obtained using numerical techniques is potentially interesting, and other many-body techniques should be used to confirm its existence. ",Unexpected Conductance Dip in the Kondo Regime of Linear Arrays of Quantum Dots " We consider an anisotropic two-dimensional diffusion of a charged molecule (particle) through a large biological channel under an external voltage. The channel is modeled as a cylinder with three structure parameters: radius, length, and surface density of negative charges located at the channel interior-lining. These charges induce inside the channel a potential that plays a key role in controlling the particle current through the channel. It was shown that to facilitate the transmembrane particle movement the channel should be reasonably self-optimized so that its potential coincides with the resonant one, resulting in a large particle current across the channel. The observed facilitation appears to be an intrinsic property of biological channels, regardless of the external voltage or the particle concentration gradient. 
This facilitation is very selective in the sense that a channel of definite structure parameters can facilitate the transmembrane movement of only particles of proper valence at corresponding temperatures. Calculations also show that the modeled channel is non-Ohmic, with an ion conductance that exhibits a resonance at the same channel potential as that identified in the current. ",Self-optimized biological channels in facilitating the transmembrane movement of charged molecules " The ribosome is a molecular machine that polymerizes a protein where the sequence of the amino acid residues, the monomers of the protein, is dictated by the sequence of codons (triplets of nucleotides) on a messenger RNA (mRNA) that serves as the template. The ribosome is a molecular motor that utilizes the template mRNA strand also as the track. Thus, in each step the ribosome moves forward by one codon and, simultaneously, elongates the protein by one amino acid. We present a theoretical model that captures most of the main steps in the mechano-chemical cycle of a ribosome. The stochastic movement of the ribosome consists of an alternating sequence of pause and translocation; the sum of the durations of a pause and the following translocation is the time of dwell of the ribosome at the corresponding codon. We derive the analytical expression for the distribution of the dwell times of a ribosome in our model. Wherever experimental data are available, our theoretical predictions are consistent with those results. We suggest appropriate experiments to test the new predictions of our model, particularly the effects of the quality control mechanism of the ribosome and of ribosome crowding on the mRNA track. 
",Distribution of dwell times of a ribosome: effects of infidelity, kinetic proofreading and ribosome crowding" " In this paper, we consider a hierarchical control based DC microgrid (DCmG) equipped with unknown input observer (UIO) based detectors, where the potential false data injection (FDI) attacks and the distributed countermeasure are investigated. First, we find that the vulnerability of the UIO-based detector originates from the lack of knowledge of the true unknown inputs. Zero trace stealthy (ZTS) attacks can be launched by secretly faking the unknown inputs, under which the detection residual will not be altered, and the impact on the DCmG in terms of voltage balancing and current sharing is theoretically analyzed. Then, to mitigate the ZTS attack, we propose an automatic and timely countermeasure based on the average point of common coupling (PCC) voltage obtained from the dynamic average consensus (DAC) estimator. The integrity of the communicated data utilized in DAC estimators is guaranteed via UIO-based detectors, where the DAC parameters are perturbed in a fixed period to be concealed from attackers. Finally, the detection and mitigation performance of the proposed countermeasure is rigorously investigated, and extensive simulations are conducted in Simulink/PLECS to validate the theoretical results. ",False Data Injection Attacks and the Distributed Countermeasure in DC Microgrids " We introduce the concept of non-uniform input-to-state stability for networks. It combines the uniform global stability with the uniform attractivity of any subnetwork, while it allows for non-uniform convergence of all components. For an infinite network consisting of input-to-state stable subsystems that do not necessarily have a uniform KL-bound on the transient behaviour, we show: if the gain operator satisfies the uniform small-gain condition, then the whole network is non-uniformly input-to-state stable and all its finite subnetworks are input-to-state stable. 
",Non-uniform ISS small-gain theorem for infinite networks " The objective of this paper is to clarify the relationships between the quantum D-module and equivariant Floer theory. Equivariant Floer theory was introduced by Givental in his paper ``Homological Geometry''. He conjectured that the quantum D-module of a symplectic manifold is isomorphic to the equivariant Floer cohomology for the universal cover of the free loop space. First, motivated by the work of Guest, we formulate the notion of ``abstract quantum D-module'' which generalizes the D-module defined by the small quantum cohomology algebra. Second, we define the equivariant Floer cohomology of toric complete intersections rigorously as a D-module, using Givental's model. This is shown to satisfy the axioms of an abstract quantum D-module. By Givental's mirror theorem, it follows that the equivariant Floer cohomology defined here is isomorphic to the quantum cohomology D-module. ",Quantum D-modules and equivariant Floer theory for free loop spaces " We consider unsupervised domain adaptation (UDA), where labeled data from a source domain (e.g., photographs) and unlabeled data from a target domain (e.g., sketches) are used to learn a classifier for the target domain. Conventional UDA methods (e.g., domain adversarial training) learn domain-invariant features to improve generalization to the target domain. In this paper, we show that contrastive pre-training, which learns features on unlabeled source and target data and then fine-tunes on labeled source data, is competitive with strong UDA methods. However, we find that contrastive pre-training does not learn domain-invariant features, diverging from conventional UDA intuitions. We show theoretically that contrastive pre-training can learn features that vary substantially across domains but still generalize to the target domain, by disentangling domain and class information. Our results suggest that domain invariance is not necessary for UDA. 
We empirically validate our theory on benchmark vision datasets. ","Connect, Not Collapse: Explaining Contrastive Learning for Unsupervised Domain Adaptation" " Non-invasive imaging plays a crucial role in diagnosing and studying eye diseases. However, existing photoacoustic ophthalmoscopy (PAOM) techniques in mice have limitations due to handling restrictions, suboptimal optical properties, limited availability of light sources and permissible light fluence at the retina. This study introduces an innovative approach that utilizes Rose Bengal, a contrast agent, to enhance PAOM contrast. This enables visualization of deeper structures like the choroidal microvasculature and sclera in the mouse eye using visible light. The integration of near-infrared-II optical coherence tomography (NIR-II OCT) provides additional tissue contrast and insights into potential NIR-II PAOM capabilities. To optimize imaging, we developed a cost-effective 3D printable mouse eye phantom and a fully 3D printable tip/tilt mouse platform. This solution elevates PAOM to a user-friendly technology, which can be used to address pressing research questions concerning several ocular diseases such as myopia, glaucoma and/or age-related macular degeneration in the future. ",Multimodal imaging of the mouse eye using visible light photoacoustic ophthalmoscopy and near-infrared-II optical coherence tomography " New experimental data on the behaviour of the single-particle two-dimensional correlation functions R versus Q (Q is the number of nucleons emitted from nuclei) and Ap (Ap is the mass of projectile nuclei) are presented in this paper. The interactions of protons, d, 4He and 12C nuclei with carbon nuclei (at a momentum of 4.2 A GeV/c) are considered. The values of R are obtained separately for pi minus mesons and protons. In so doing, the values of R are normalized so that -1=0$. 
It is proved that for any $k>0$, the problem admits global classical solutions, whenever $\chi\in\big(0,-\frac{k-1}{2}+\frac{1}{2}\sqrt{(k-1)^2+\frac{8k}{n}}\big)$. The global solutions are moreover globally bounded if $n\le 8$. This shows in a precise way how the size of the diffusion constant $k$ of the chemical $v$ affects the behavior of solutions. ",Global boundedness of solutions in a parabolic-parabolic chemotaxis system with singular sensitivity " Optimization-based tracking methods have been widely successful by integrating a target model prediction module, providing effective global reasoning by minimizing an objective function. While this inductive bias integrates valuable domain knowledge, it limits the expressivity of the tracking network. In this work, we therefore propose a tracker architecture employing a Transformer-based model prediction module. Transformers capture global relations with little inductive bias, allowing them to learn the prediction of more powerful target models. We further extend the model predictor to estimate a second set of weights that are applied for accurate bounding box regression. The resulting tracker relies on training and on test frame information in order to predict all weights transductively. We train the proposed tracker end-to-end and validate its performance by conducting comprehensive experiments on multiple tracking datasets. Our tracker sets a new state of the art on three benchmarks, achieving an AUC of 68.5% on the challenging LaSOT dataset. ",Transforming Model Prediction for Tracking " Many existing approaches for unsupervised domain adaptation (UDA) focus on adapting under only data distribution shift and offer limited success under additional cross-domain label distribution shift. Recent work based on self-training using target pseudo-labels has shown promise, but on challenging shifts pseudo-labels may be highly unreliable, and using them for self-training may cause error accumulation and domain misalignment. 
We propose Selective Entropy Optimization via Committee Consistency (SENTRY), a UDA algorithm that judges the reliability of a target instance based on its predictive consistency under a committee of random image transformations. Our algorithm then selectively minimizes predictive entropy to increase confidence on highly consistent target instances, while maximizing predictive entropy to reduce confidence on highly inconsistent ones. In combination with pseudo-label based approximate target class balancing, our approach leads to significant improvements over the state-of-the-art on 27/31 domain shifts from standard UDA benchmarks as well as benchmarks designed to stress-test adaptation under label distribution shift. ",SENTRY: Selective Entropy Optimization via Committee Consistency for Unsupervised Domain Adaptation " Classical multi-armed bandit problems use the expected value of an arm as a metric to evaluate its goodness. However, the expected value is a risk-neutral metric. In many applications like finance, one is interested in balancing the expected return of an arm (or portfolio) with the risk associated with that return. In this paper, we consider the problem of selecting the arm that optimizes a linear combination of the expected reward and the associated Conditional Value at Risk (CVaR) in a fixed budget best-arm identification framework. We allow the reward distributions to be unbounded or even heavy-tailed. For this problem, our goal is to devise algorithms that are entirely distribution oblivious, i.e., the algorithm is not aware of any information on the reward distributions, including bounds on the moments/tails, or the suboptimality gaps across arms. In this paper, we provide a class of such algorithms with provable upper bounds on the probability of incorrect identification. 
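The mean-CVaR arm score described in the bandits abstract above can be illustrated with the standard empirical tail-average CVaR (an illustrative sketch only; the paper's novel estimator for unbounded rewards is different, and the Gaussian/Student-t arms, the level alpha, the weight c and the random seed are all assumptions):

```python
import numpy as np

def empirical_cvar(samples, alpha=0.95):
    # Empirical CVaR_alpha of the loss (= negative reward): the average
    # of the worst (1 - alpha) fraction of outcomes.
    losses = np.sort(-np.asarray(samples))
    k = max(1, int(np.ceil((1 - alpha) * len(losses))))
    return losses[-k:].mean()

def risk_aware_score(samples, c=1.0, alpha=0.95):
    # Linear combination of expected reward and CVaR of the loss;
    # the best arm is the one maximizing this score.
    s = np.asarray(samples)
    return s.mean() - c * empirical_cvar(s, alpha)

rng = np.random.default_rng(1)
safe = rng.normal(1.0, 0.1, 10_000)              # modest mean, light tail
risky = rng.standard_t(df=2, size=10_000) + 1.2  # higher mean, heavy tail
# The heavy-tailed arm carries far more tail risk ...
assert empirical_cvar(risky) > empirical_cvar(safe)
# ... so the risk-aware score prefers the safe arm.
assert risk_aware_score(safe) > risk_aware_score(risky)
```

The example shows why balancing the mean with CVaR matters for heavy-tailed rewards: an arm that wins on expectation alone can still lose badly once its tail risk is priced in.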
In the process, we develop a novel estimator for the CVaR of unbounded (including heavy-tailed) random variables and prove a concentration inequality for the same, which could be of independent interest. We also compare the error bounds for our distribution oblivious algorithms with those corresponding to standard non-oblivious algorithms. Finally, numerical experiments reveal that our algorithms perform competitively when compared with non-oblivious algorithms, suggesting that distribution obliviousness can be realised in practice without incurring a significant loss of performance. ","Distribution oblivious, risk-aware algorithms for multi-armed bandits with unbounded rewards" " The cerebellum is the part of the brain that occupies only 10% of the brain volume, but it contains about 80% of the total number of brain neurons. A new model of cerebellar function is developed that sets cerebellar circuits in the context of multibody dynamics model computations, an important step in controlling balance and movement coordination, functions performed by the two oldest parts of the cerebellum. The model gives a new functional interpretation for the granule cell-Golgi cell circuit, including distinct functions for the upper and lower Golgi cell dendritic trees, and resolves the issue of sharing granule cells between Purkinje cells. It assigns a new function to basket cells, and to stellate cells according to their position in the molecular layer. The new model enables easy and direct integration of sensory information from the vestibular system and cutaneous mechanoreceptors, for balance, movement and interaction with the environment. The model also explains the convergence of Purkinje cells on the deep cerebellar nuclei. ",The Cerebellum: New Computational Model that Reveals its Primary Function to Calculate Multibody Dynamics Conform to Lagrange-Euler Formulation " We study the effect of density fluctuations induced by turbulence on the HI/H$_2$ structure in photodissociation regions (PDRs) both analytically and numerically. 
We perform magnetohydrodynamic numerical simulations for both subsonic and supersonic turbulent gas, and chemical HI/H$_2$ balance calculations. We derive atomic-to-molecular density profiles and the HI column density probability density function (PDF) assuming chemical equilibrium. We find that while the HI/H$_2$ density profiles are strongly perturbed in turbulent gas, the mean HI column density is well approximated by the uniform-density analytic formula of Sternberg et al. (2014). The PDF width depends on (a) the radiation intensity to mean density ratio, (b) the sonic Mach number and (c) the turbulence decorrelation scale, or driving scale. We derive an analytic model for the HI PDF and demonstrate how our model, combined with 21 cm observations, can be used to constrain the Mach number and driving scale of turbulent gas. As an example, we apply our model to observations of HI in the Perseus molecular cloud. We show that a narrow observed HI PDF may imply small scale decorrelation, pointing to the potential importance of subcloud-scale turbulence driving. ",The HI-to-H2 Transition in a Turbulent Medium " In the present work we systematically study the half--lives of proton radioactivity for $51 \leq Z \leq 83$ nuclei based on the Gamow--like model with a screened electrostatic barrier. In this model there are two parameters, arising from treating the screened electrostatic effect of the Coulomb potential with the Hulthen potential, i.e., the effective nuclear radius parameter r_0 and the screening parameter a. The calculated results can well reproduce the experimental data. In addition, we extend this model to predict, within a factor of 2.94, the proton radioactivity half--lives of 16 nuclei in the same region whose proton radioactivity is energetically allowed or observed but not yet quantified. Meanwhile, the proton radioactivity half-lives are also studied with a type of universal decay law. 
The results indicate that the calculated half--lives are linearly dependent on the Coulomb parameter for the same orbital angular momentum. ",Systematic study of proton radioactivity based on Gamow--like model with a screened electrostatic barrier Coherent coupling of two qubits mediated by a nonlinear resonator is studied. It is shown that the amount of entanglement accessible in the evolution depends both on the strength of nonlinearity in the Hamiltonian of the resonator and on the initial preparation of the system. The created entanglement survives in the presence of decoherence. ,Entanglement of qubits via a nonlinear resonator " Two faint X-ray pulsars, AX J1749.2-2725 and AX J1749.1-2733, located in the direction of the Galactic Center, were studied in detail using data from the INTEGRAL, XMM-Newton and Chandra observatories in X-rays, the SOFI/NTT instrument in the infrared and the RTT150 telescope in the optical. The X-ray positions of both sources were determined with an uncertainty better than ~1 arcsec, which allowed us to identify their infrared counterparts. From the subsequent analysis of infrared and optical data we conclude that the counterparts of both pulsars are likely massive stars of B0-B3 classes located behind the Galactic Center at distances of 12-20 kpc, depending on the type, probably in the farther parts of the Galactic spiral arms. In addition, we investigated the extinction law towards the Galactic bulge and found that it is significantly different from the standard one. ",AX J1749.1-2733 and AX J1749.2-2725 - the close pair of X-ray pulsars behind the Galactic Center: an optical identification " In this work, a novel model of the random geometric graph (RGG), namely the isotropic random geometric graph (IRGG), has been developed and its topological properties in two dimensions have been studied in detail. 
The defining characteristics of RGG and IRGG are the same --- two nodes are connected by an edge if their distance is less than a fixed value, called the connection radius. However, IRGGs have two major differences from regular RGGs. Firstly, the shape of their boundary, which is circular; this brings very little change in the final results but gives a significant advantage in analytical calculations of the network properties. Secondly, it opens up the possibility of an empty concentric region inside the network. The empty region contains no nodes but allows the communicating edges between the nodes to pass through it. This second difference causes significant alterations in physically relevant network properties such as average degree, connectivity, clustering coefficient and average shortest path. Analytical expressions for most of these features have been provided. These results agree well with those obtained from simulations. Apart from the applicability of the model due to its symmetry and simplicity, the scope of incorporating a penetrable cavity makes it suitable for potential applications in wireless communication networks that often have a node-free region. ",Isotropic random geometric networks in two dimensions with a penetrable cavity " We compute the real-space power spectrum and the redshift-space distortions of galaxies in the 2dF 100k galaxy redshift survey using pseudo-Karhunen-Loeve eigenmodes and the stochastic bias formalism. Our results agree well with those published by the 2dFGRS team, and have the added advantage of producing easy-to-interpret uncorrelated minimum-variance measurements of the galaxy-galaxy, galaxy-velocity and velocity-velocity power spectra in 27 k-bands, with narrow and well-behaved window functions in the range 0.01h/Mpc < k < 0.8h/Mpc.
We find no significant detection of baryonic wiggles, although our results are consistent with a standard flat Omega_Lambda=0.7 ``concordance'' model and previous tantalizing hints of baryonic oscillations. We measure the galaxy-matter correlation coefficient r > 0.4 and the redshift-distortion parameter beta=0.49+/-0.16 for r=1 (beta=0.47+/-0.16 without finger-of-god compression). Since this is an apparent-magnitude limited sample, luminosity-dependent bias may cause a slight red-tilt in the power spectrum. A battery of systematic error tests indicates that the survey is not only impressive in size, but also unusually clean, free of systematic errors at the level to which our tests are sensitive. Our measurements and window functions are available at http://www.hep.upenn.edu/~max/2df.html together with the survey mask, radial selection function and uniform subsample of the survey that we have constructed. ",The power spectrum of galaxies in the 2dF 100k redshift survey Several algorithms for tracking and for primary and secondary vertex reconstruction have been developed by the ATLAS collaboration following different approaches. This has allowed a thorough cross-check of the performance of the algorithms and of the reconstruction software. The results of the most recent studies on this topic are discussed and compared.
",On ramifications of Artin-Schreier extensions of surfaces over algebraically closed fields of positive characteristic III " Certain facial parts are salient (unique) in appearance, which substantially contribute to the holistic recognition of a subject. Occlusion of these salient parts deteriorates the performance of face recognition algorithms. In this paper, we propose a generative model to reconstruct the missing parts of the face which are under occlusion. The proposed generative model (SD-GAN) reconstructs a face preserving the illumination variation and identity of the face. A novel adversarial training algorithm has been designed for a bimodal mutually exclusive Generative Adversarial Network (GAN) model, for faster convergence. A novel adversarial ""structural"" loss function is also proposed, comprising of two components: a holistic and a local loss, characterized by SSIM and patch-wise MSE. Ablation studies on real and synthetically occluded face datasets reveal that our proposed technique outperforms the competing methods by a considerable margin, even for boosting the performance of Face Recognition. ",SD-GAN: Structural and Denoising GAN reveals facial parts under occlusion " We consider the renormalizable $SO(5)/SO(4)$ $\sigma$-model, in which the Higgs particle has a pseudo-Nambu-Goldstone boson character, and explore what the minimal field extension required to implement the Peccei-Quinn symmetry (PQ) is, within the partial compositeness scenario. It turns out that the minimal model does not require the enlargement of the exotic fermionic sector, but only the addition of a singlet scalar: it is sufficient that the exotic fermions involved in partial compositeness and the singlet scalar become charged under Peccei-Quinn transformations. We explore the phenomenological predictions for photonic signals in axion searches for all models discussed. 
Because of the constraints imposed on the exotic fermion sector by the Standard Model fermion masses, the expected range of allowed axion-photon couplings turns out to be generically narrowed with respect to that of standard invisible axion models, impacting the experimental quest. ",The Axion and the Goldstone Higgs " The effective low-energy late-time description of many body systems near thermal equilibrium provided by classical hydrodynamics in terms of dissipative transport phenomena receives important corrections once the effects of stochastic fluctuations are taken into account. One such physical effect is the occurrence of long-time power law tails in correlation functions of conserved currents. In the hydrodynamic regime $\vec{k} \rightarrow 0$ this amounts to non-analytic dependence of the correlation functions on the frequency $\omega$. In this article, we consider a relativistic fluid with a conserved global $U(1)$ charge in the presence of a strong background magnetic field, and compute the long-time tails in correlation functions of the stress tensor. The presence of the magnetic field renders the system anisotropic. In the absence of the magnetic field, there are three out-of-equilibrium transport parameters that arise at the first order in the hydrodynamic derivative expansion, all of which are dissipative. In the presence of a background magnetic field, there are ten independent out-of-equilibrium transport parameters at the first order, three of which are non-dissipative and the rest are dissipative. We provide the most general linearized equations about a given state of thermal equilibrium involving the various transport parameters in the presence of a magnetic field, and use them to compute the long-time tails for the fluid. 
",Hydrodynamic fluctuations and long-time tails in a fluid on an anisotropic background " Distributed ledger technologies replace central counterparties with time-consuming consensus protocols to record the transfer of ownership. This settlement latency slows down cross-market trading and exposes arbitrageurs to price risk. We theoretically derive arbitrage bounds induced by settlement latency. Using Bitcoin orderbook and network data, we estimate average arbitrage bounds of 121 basis points, explaining 91% of the cross-market price differences, and demonstrate that asset flows chase arbitrage opportunities. Controlling for inventory holdings as a measure of trust in exchanges does not affect our main results. Blockchain-based settlement without trusted intermediation thus introduces a non-trivial friction that impedes arbitrage activity. ",Building Trust Takes Time: Limits to Arbitrage in Blockchain-Based Markets This note presents criteria in terms of Bernoulli numbers for a number to be simultaneously a Wilson prime and a Lerch prime. ,A characterization of Wilson-Lerch primes We consider the standard Abelian sandpile process on the Bethe lattice. We show the existence of the thermodynamic limit for the finite volume stationary measures and the existence of a unique infinite volume Markov process exhibiting features of self-organized criticality. ,The Abelian Sandpile Model on an Infinite Tree " Converting source or unit test code to English has been shown to improve the maintainability, understandability, and analysis of software and tests. Code summarizers identify important statements in the source/tests and convert them to easily understood English sentences using static analysis and NLP techniques. However, current test summarization approaches handle only a subset of the variation and customization allowed in the JUnit assert API (a critical component of test cases) which may affect the accuracy of conversions. 
In this paper, we present our work towards improving JUnit test summarization with a detailed process for converting a total of 45 unique JUnit assertions to English, including 37 previously-unhandled variations of the assertThat method. This process has also been implemented and released as the AssertConvert tool. Initial evaluations have shown that this tool generates English conversions that accurately represent a wide variety of assertion statements which could be used for code summarization or other NLP analyses. ",A Fine-Grained Approach for Automated Conversion of JUnit Assertions to English " Automatic live commenting aims to provide real-time comments on videos for viewers. It encourages users engagement on online video sites, and is also a good benchmark for video-to-text generation. Recent work on this task adopts encoder-decoder models to generate comments. However, these methods do not model the interaction between videos and comments explicitly, so they tend to generate popular comments that are often irrelevant to the videos. In this work, we aim to improve the relevance between live comments and videos by modeling the cross-modal interactions among different modalities. To this end, we propose a multimodal matching transformer to capture the relationships among comments, vision, and audio. The proposed model is based on the transformer framework and can iteratively learn the attention-aware representations for each modality. We evaluate the model on a publicly available live commenting dataset. Experiments show that the multimodal matching transformer model outperforms the state-of-the-art methods. ",Multimodal Matching Transformer for Live Commenting " The meniscal tissue is a layered material with varying properties influenced by collagen content and arrangement. Understanding the relationship between structure and properties is crucial for disease management, treatment development, and biomaterial design. 
The internal layer of the meniscus is softer and more deformable than the outer layers, thanks to interconnected collagen channels that guide fluid flow. To investigate these relationships, we propose a novel approach that combines Computational Fluid Dynamics (CFD) with Image Analysis (CFD-IA). We analyze fluid flow in the internal architecture of the human meniscus across a range of inlet velocities (0.1mm/s to 1.6m/s) using high-resolution 3D micro-computed tomography scans. Statistical correlations are observed between architectural parameters (tortuosity, connectivity, porosity, pore size) and fluid flow parameters (Re number distribution, permeability). Some channels exhibit Re values of 1400 at an inlet velocity of 1.6m/s, and a transition from Darcy's regime to a non-Darcian regime occurs around an inlet velocity of 0.02m/s. Location-dependent permeability ranges from 20-32 Darcy. Regression modelling reveals a strong correlation between fluid velocity and tortuosity at high inlet velocities, as well as with channel diameter at low inlet velocities. At higher inlet velocities, flow paths deviate more from the preferential direction, resulting in a decrease in the concentration parameter by an average of 0.4. This research provides valuable insights into the fluid flow behaviour within the meniscus and its structural influences. ",On the characteristics of natural hydraulic dampers: An image-based approach to study the fluid flow behaviour inside the human meniscal tissue " We investigate gravitational lensing in the Palatini approach to the f(R) extended theories of gravity. Starting from an exact solution of the f(R) field equations, which corresponds to the Schwarzschild-de Sitter metric and, on the basis of recent studies on this metric, we focus on some lensing observables, in order to evaluate the effects of the non linearity of the gravity Lagrangian. 
We give estimates for some astrophysical events, and show that these effects are tiny for galactic lenses, but become interesting for extragalactic ones. ",Gravitational Lensing and f(R) theories in the Palatini approach " Information exchange over networks can be affected by various forms of delay. This causes challenges for using the network by a multi-agent system to solve a distributed optimisation problem. Distributed optimisation schemes, however, typically do not assume network models that are representative for real-world communication networks, since communication links are most of the time abstracted as lossless. Our objective is therefore to formulate a representative network model and provide practically verifiable network conditions that ensure convergence of distributed algorithms in the presence of interference and possibly unbounded delay. Our network is modelled by a sequence of directed-graphs, where to each network link we associate a process for the instantaneous signal-to-interference-plus-noise ratio. We then formulate practical conditions that can be verified locally and show that the age of information (AoI) associated with data communicated over the network is in $\mathcal{O}(\sqrt{n})$. Under these conditions we show that a penalty-based gradient descent algorithm can be used to solve a rich class of stochastic, constrained, distributed optimisation problems. The strength of our result lies in the bridge between practical verifiable network conditions and an abstract optimisation theory. We illustrate numerically that our algorithm converges in an extreme scenario where the average AoI diverges. ",Practical sufficient conditions for convergence of distributed optimisation algorithms over communication networks with interference " We review three broadly geometrodynamical---and in part, Machian or relational---projects, from the perspective of spacetime functionalism. 
We show how all three are examples of functionalist reduction of the type that was advocated by D. Lewis, and nowadays goes by the label 'the Canberra Plan'. The projects are: (1) the recovery of geometrodynamics by Hojman et al. (1976); (2) the programme of Schuller and collaborators (Schuller 2011; Dull, Schuller et al. 2018) to deduce a metric from the physics of matter fields; (3) the deduction of the ADM Hamiltonian by Gomes and Shyam (2016). We end by drawing a positive corollary about shape dynamics: namely, it has a good rationale for the Hamiltonian it postulates. ",Geometrodynamics as Functionalism about Time " A toric degeneration in algebraic geometry is a process where a given projective variety is being degenerated into a toric one. Then one can obtain information about the original variety via analyzing the toric one, which is a much easier object to study. Harada and Kaveh described how one incorporates a symplectic structure into this process, providing a very useful tool for solving certain problems in symplectic geometry. Below we present applications of this method to questions about the Gromov width, and cohomological rigidity problems. ",Toric degenerations in symplectic geometry " We detect a novel radiative cascade from a neutral semiconductor quantum dot. The cascade initiates from a metastable biexciton state in which the holes form a spin-triplet configuration, Pauli-blockaded from relaxation to the spin-singlet ground state. The triplet biexciton has two photon-phonon-photon decay paths. Unlike in the singlet-ground state biexciton radiative cascade, in which the two photons are co-linearly polarized, in the triplet biexciton cascade they are crosslinearly polarized. We measured the two-photon polarization density matrix and show that the phonon emitted when the intermediate exciton relaxes from excited to ground state, preserves the exciton's spin. 
The phonon, thus, does not carry with it any which-path information other than its energy. Nevertheless, entanglement distillation by spectral filtering was found to be rather ineffective for this cascade. This deficiency results from the opposite sign of the anisotropic electron-hole exchange interaction in the excited exciton relative to that in the ground exciton. ",Radiative cascade from quantum dot metastable spin-blockaded biexciton " This paper presents a solution to the joint large-scale route planning and task assignment problem for Autonomous Underwater Vehicles (AUVs). Given a set of constraints (e.g., time) and a set of task priority values, the goal is to find the optimal route for an underwater mission that maximizes the sum of the priorities and minimizes the total risk percentage while meeting the given constraints. Making use of the heuristic nature of genetic and swarm intelligence algorithms in solving NP-hard graph problems, Particle Swarm Optimization (PSO) and Genetic Algorithm (GA) are employed to find the optimum solution, where each individual in the population is a candidate solution (route). To evaluate the robustness of the proposed methods, the performance of the PSO and GA algorithms is examined and compared over a number of Monte Carlo runs. Simulation results suggest that the routes generated by both algorithms are feasible and reliable enough, and applicable to underwater motion planning. However, the GA-based route planner produces superior results compared to those obtained from the PSO-based route planner.
",Optimal Route Planning with Prioritized Task Scheduling for AUV Missions " Datacenter applications demand both low latency and high throughput; while interactive applications (e.g., Web Search) demand low tail latency for their short messages due to their partition-aggregate software architecture, many data-intensive applications (e.g., Map-Reduce) require high throughput for long flows as they move vast amounts of data across the network. Recent proposals improve latency of short flows and throughput of long flows by addressing the shortcomings of existing packet scheduling and congestion control algorithms, respectively. We make the key observation that long tails in the Flow Completion Times (FCT) of short flows result from packets that suffer congestion at more than one switch along their paths in the network. Our proposal, Slytherin, specifically targets packets that suffered from congestion at multiple points and prioritizes them in the network. Slytherin leverages ECN mechanism which is widely used in existing datacenters to identify such tail packets and dynamically prioritizes them using existing priority queues. As compared to existing state-of-the-art packet scheduling proposals, Slytherin achieves 18.6% lower 99th percentile flow completion times for short flows without any loss of throughput. Further, Slytherin drastically reduces 99th percentile queue length in switches by a factor of about 2x on average. ","Slytherin: Dynamic, Network-assisted Prioritization of Tail Packets in Datacenter Networks" " Traffic congestion is a major challenge in modern urban settings. The industry-wide development of autonomous and automated vehicles (AVs) motivates the question of how can AVs contribute to congestion reduction. Past research has shown that in small scale mixed traffic scenarios with both AVs and human-driven vehicles, a small fraction of AVs executing a controlled multiagent driving policy can mitigate congestion. 
In this paper, we scale up existing approaches and develop new multiagent driving policies for AVs in scenarios with greater complexity. We start by showing that a congestion metric used by past research is manipulable in open road network scenarios where vehicles dynamically join and leave the road. We then propose using a different metric that is robust to manipulation and reflects open network traffic efficiency. Next, we propose a modular transfer reinforcement learning approach, and use it to scale up a multiagent driving policy to outperform human-like traffic and existing approaches in a simulated realistic scenario, which is an order of magnitude larger than past scenarios (hundreds instead of tens of vehicles). Additionally, our modular transfer learning approach saves up to 80% of the training time in our experiments, by focusing its data collection on key locations in the network. Finally, we show for the first time a distributed multiagent policy that improves congestion over human-driven traffic. The distributed approach is more realistic and practical, as it relies solely on existing sensing and actuation capabilities, and does not require adding new communication infrastructure. ",Scalable Multiagent Driving Policies For Reducing Traffic Congestion " The most consistently useful simple model for the study of odd deformed nuclei, the particle-rotor model (strong coupling limit of the core-particle coupling model) has nevertheless been beset by a long-standing problem: It is necessary in many cases to introduce an ad hoc parameter that reduces the size of the Coriolis interaction coupling the collective and single-particle motions. Of the numerous suggestions put forward for the origin of this supplementary interaction, none of those actually tested by calculations has been accepted as the solution of the problem. 
In this paper we seek a solution of the difficulty within the framework of a general formalism that starts from the spherical shell model and is capable of treating an arbitrary linear combination of multipole and pairing forces. With the restriction of the interaction to the familiar sum of a quadrupole multipole force and a monopole pairing force, we have previously studied a semi-microscopic version of the formalism whose framework is nevertheless more comprehensive than any previously applied to the problem. We obtained solutions for low-lying bands of several strongly deformed odd rare earth nuclei and found good agreement with experiment, except for an exaggerated staggering of levels for K=1/2 bands, which can be understood as a manifestation of the Coriolis attenuation problem. We argue that within the formalism utilized, the only way to improve the physics is to add interactions to the model Hamiltonian. We verify that by adding a magnetic dipole interaction of essentially fixed strength, we can fit the K=1/2 bands without destroying the agreement with other bands. In addition we show that our solution also fits 163Er, a classic test case of Coriolis attenuation that we had not previously studied. ",Possible solution of the Coriolis attenuation problem " We propose a classical model Hamiltonian with a ground state presenting a spin ice structure. We analyze the introduction of metastable excitations on this ground state, showing the emergence of pairs of magnetic monopoles. The interaction between monopoles and dipoles in the system is studied. As a consequence, we obtain an effective nonlocal interaction between monopoles and dipoles from a local classical spin model. ",Continuous local model for two-dimensional spin ice " The radiosotope $^{44}$Ti is produced through $\alpha$-rich freezeout and explosive helium burning in type Ia supernovae (SNe Ia). 
In this paper, we discuss how the detection of $^{44}$Ti, either through late-time light curves of SNe Ia, or directly via gamma rays, can uniquely constrain the origin of SNe Ia. In particular, building upon recent advances in the hydrodynamical simulation of helium-ignited double white dwarf binaries, we demonstrate that the detection of $^{44}$Ti in a nearby SN Ia or in a young galactic supernova remnant (SNR) can discriminate between the double-detonation and double-degenerate channels of sub-Chandrasekhar (sub-$M_{\rm Ch}$) and near-Chandrasekhar (near-$M_{\rm Ch}$) SNe Ia. In addition, we predict that the late-time light curves of calcium-rich transients are entirely dominated by $^{44}$Ti. ",Using $^{44}$Ti Emission to Differentiate Between Thermonuclear Supernova Progenitors " Let g=g_{0} \oplus g_{1} be a classical Lie superalgebra and F be the category of finite dimensional g-supermodules which are semisimple over g_{0}. In this paper we investigate the homological properties of the category F. In particular we prove that F is self-injective in the sense that all projective supermodules are injective. We also show that all supermodules in F admit a projective resolution with polynomial rate of growth and, hence, one can study complexity in F. If g is a Type I Lie superalgebra we introduce support varieties which detect projectivity and are related to the associated varieties of Duflo and Serganova. If in addition g has a (strong) duality then we prove that the conditions of being tilting or projective are equivalent. ",Complexity and module varieties for classical Lie superalgebras " We compute the Riemannian volume on the moduli space of flat connections on a nonorientable 2-manifold, for a natural class of metrics. We also show that Witten's volume formula for these moduli spaces may be derived using Haar measure, and we give a new proof of Witten's volume formula for the moduli space of flat connections on an orientable surface using Haar measure. 
",The volume of the moduli space of flat connections on a nonorientable 2-manifold " The Virasoro groups are a family of central extensions of $\mathrm{Diff}^+(S^1)$, the group of orientation-preserving diffeomorphisms of $S^1$, by the circle group $\mathbb T$. We give a novel, geometric construction of these central extensions using ""off-diagonal"" differential lifts of the first Pontryagin class, thus affirmatively answering a question of Freed-Hopkins. ",Constructing the Virasoro groups using differential cohomology " We use particle-in-cell (PIC) simulations to study the effects of variations of the incoming 400 GeV proton bunch parameters on the amplitude and phase of the wakefields resulting from a seeded self-modulation (SSM) process. We find that these effects are largest during the growth of the SSM, i.e. over the first five to six meters of plasma with an electron density of $7 \times10^{14}$ cm$^{-3}$. However, for variations of any single parameter by $\pm$5%, effects after the SSM saturation point are small. In particular, the phase variations correspond to much less than a quarter wakefield period, making deterministic injection of electrons (or positrons) into the accelerating and focusing phase of the wakefields in principle possible. We use the wakefields from the simulations and a simple test electron model to estimate the same effects on the maximum final energies of electrons injected along the plasma, which are found to be below the initial variations of $\pm$5%. This analysis includes the dephasing of the electrons with respect to the wakefields that is expected during the growth of the SSM. Based on a PIC simulation, we also determine the injection position along the bunch and along the plasma leading to the largest energy gain. 
For the parameters taken here (ratio of peak beam density to plasma density $n_{b0}/n_0 \approx 0.003$), we find that the optimum position along the proton bunch is at $\xi \approx -1.5 \; \sigma_{zb}$, and that the optimal range for injection along the plasma (for a highest final energy of $\sim$1.6 GeV after 10 m) is 5-6 m. ",Influence of proton bunch parameters on a proton-driven plasma wakefield acceleration experiment " We formulate and close the boundary state bootstrap for factorizing K-matrices in AdS/CFT. We found that there are no boundary degrees of freedom in the boundary bound states, merely the boundary parameters are shifted. We use this family of boundary bound states to describe the D3-D5 system for higher dimensional matrix product states and provide their asymptotic overlap formulas. In doing so we generalize the nesting for overlaps of matrix product states and Bethe states. ",Boundary state bootstrap and asymptotic overlaps in AdS/dCFT " Using the recent measurement of the $\Xi ^0 \to \Lambda \gamma $ asymmetry as an input, we reanalyse nonleptonic and weak radiative hyperon decays in a single symmetry-based framework. In this framework the old S:P problem of nonleptonic decays is automatically resolved when the most important features of weak radiative decays are taken into account. Experimental data require that symmetry between the two types of hyperon decays be imposed at the level of currents, not fields. Previously established connections between hyperon decays and nuclear parity violation imply that the conflict, originally suggested by weak radiative decays, has to surface somewhere. ",Connecting nonleptonic and weak radiative hyperon decays " The Flying Sidekick Traveling Salesman Problem (FSTSP) considers a delivery system composed by a truck and a drone. The drone launches from the truck with a single package to deliver to a customer. 
Each drone must return to the truck to recharge batteries, pick up another package, and launch again to a new customer location. This work proposes a novel Mixed Integer Programming (MIP) formulation and a heuristic approach to address the problem. The proposed MIP formulation yields better linear relaxation bounds than previously proposed formulations for all instances, and was capable of optimally solving several unsolved instances from the literature. A hybrid heuristic based on the General Variable Neighborhood Search metaheuristic combining Tabu Search concepts is employed to obtain high-quality solutions for large-size instances. The efficiency of the algorithm was evaluated on 1415 benchmark instances from the literature, and over 80% of the best known solutions were improved. ",Exact and Heuristic Approaches to Drone Delivery Problems " A two amino acid (hydrophobic and polar) scheme is used to perform the design on target conformations corresponding to the native states of twenty single chain proteins. Strikingly, the percentage of successful identification of the nature of the residues benchmarked against naturally occurring proteins and their homologues is around 75 % independent of the complexity of the design procedure. Typically, the lowest success rate occurs for residues such as alanine that have a high secondary structure functionality. Using a simple lattice model, we argue that one possible shortcoming of the model studied may involve the coarse-graining of the twenty kinds of amino acids into just two effective types. ",Inverse design of proteins with hydrophobic and polar amino acids " We consider the leading and sub-leading twist $T$-odd and even contributions to the $\cos 2\phi$ azimuthal asymmetry in unpolarized dilepton production in Drell-Yan Scattering.
We estimate the contributions' effects at $500 {\rm GeV}$, $ 50 {\rm GeV}$, and $25 {\rm GeV}$ energies in the framework of the parton model using a quark diquark-spectator model of the nucleon to approximate the soft contributions. ",Novel Azimuthal Asymmetries in Drell Yan and Semi-inclusive Deep Inelastic Scattering " The I4U consortium was established to facilitate a joint entry to NIST speaker recognition evaluations (SRE). The latest edition of such joint submission was in SRE 2018, in which the I4U submission was among the best-performing systems. SRE'18 also marks the 10-year anniversary of I4U consortium into NIST SRE series of evaluation. The primary objective of the current paper is to summarize the results and lessons learned based on the twelve sub-systems and their fusion submitted to SRE'18. It is also our intention to present a shared view on the advancements, progresses, and major paradigm shifts that we have witnessed as an SRE participant in the past decade from SRE'08 to SRE'18. In this regard, we have seen, among others, a paradigm shift from supervector representation to deep speaker embedding, and a switch of research challenge from channel compensation to domain adaptation. ",I4U Submission to NIST SRE 2018: Leveraging from a Decade of Shared Experiences " We present optical spectropolarimetry of the tidal disruption event (TDE) AT 2019qiz on days $+0$ and $+29$ relative to maximum brightness. Continuum polarization, which informs the shape of the electron-scattering surface, was found to be consistent with 0 per cent at peak brightness. On day $+29$, the continuum polarization rose to $\sim 1$ per cent, making this the first reported spectropolarimetric evolution of a TDE. These findings are incompatible with a naked eccentric disc that lacks significant mass outflow. 
Instead, the spectropolarimetry paints a picture wherein, at maximum brightness, high-frequency emission from the accretion disc is reprocessed into the optical band by a nearly spherical, optically thick, electron-scattering photosphere located far away from the black hole. We estimate the radius of the scattering photosphere to be $\sim 100\rm\, au$ at maximum brightness -- significantly larger than the tidal radius ($\sim 1\rm\, au$) and the thermalisation radius ($\sim 30\rm\, au$) where the optical continuum is formed. A month later, as the fallback rate drops and the scattering photosphere recedes, the continuum polarization increases, revealing a moderately aspherical interior. We also see evidence for smaller-scale density variations in the scattering photosphere, inferred from the scatter of the data in the Stokes $q-u$ plane. On day $+29$, the H$\alpha$ emission-line peak is depolarized to $\sim 0.3$ per cent (compared to $\sim 1$ per cent continuum polarization), and displays a gradual rise toward the line's redder wavelengths. This observation indicates the H$\alpha$ line formed near the electron-scattering radius. ",Spectropolarimetry of the tidal disruption event AT 2019qiz: a quasispherical reprocessing layer " Based on the results of computational simulations, the research addresses a broad range of electronic and optical properties which are typical for two most stable compositions of the yttrium oxyhydride, Y4H10O and YHO. Emphasis was placed on characteristics of thin films of different structural phases. Macroscopic optical properties were deduced and analyzed within the conventional scheme that utilizes the knowledge of refractive index, absorption, transmittance and reflectance spectra. 
Our major goal was two-fold: First, to provide modeling and description of optical spectra for various single- and bi-phase oxyhydride compositions, and second, to conduct comparative analysis that would be powerful enough to explain the features of the experimentally measured transmittance spectra. In the context of nonlinear optics, for the P-43m noncentrosymmetric cubic structure of Y4H10O we evaluated a frequency profile of the second-order susceptibility ${\chi}^{(2)}(2\omega)$ and showed that the bulk Y4H10O may exhibit a rather considerable optical nonlinearity. ",Optical properties of yttrium oxyhydrides: A comparative analysis with experiment " We revisit the relation between the asymmetries $A_{FB}$ and $A_{FB}^\ell$ in $t \bar t$ production at the Tevatron, using as new physics benchmark a colour octet. We find that $A_{FB}^\ell$ receives large contributions from the interference between $\lambda = \pm 1/2$ top helicity states, which has been ignored in some of the previous literature on the subject. The omission of these contributions results in a severe underestimation of the asymmetry, around $1/2$ and $1/50$ of the true value for right-handed and left-handed top couplings to the octet, respectively. Interference effects are closely related to a $\mathcal{O}(1)$ transverse top polarisation, as yet not considered in this context. ","Quantum coherence, top transverse polarisation and the Tevatron asymmetry $A_{FB}^\ell$" " This paper advocates proximal Markov Chain Monte Carlo (ProxMCMC) as a flexible and general Bayesian inference framework for constrained or regularized estimation. Originally introduced in the Bayesian imaging literature, ProxMCMC employs the Moreau-Yosida envelope for a smooth approximation of the total-variation regularization term, fixes nuisance and regularization strength parameters as constants, and relies on the Langevin algorithm for the posterior sampling. 
We extend ProxMCMC to the full Bayesian framework with modeling and data adaptive estimation of all parameters including the regularization strength parameter. More efficient sampling algorithms such as the Hamiltonian Monte Carlo are employed to scale ProxMCMC to high-dimensional problems. Analogous to the proximal algorithms in optimization, ProxMCMC offers a versatile and modularized procedure for the inference of constrained and non-smooth problems. The power of ProxMCMC is illustrated on various statistical estimation and machine learning tasks. The inference in these problems is traditionally considered difficult from both frequentist and Bayesian perspectives. ",Proximal MCMC for Bayesian Inference of Constrained and Regularized Estimation " Multiple Artificial Intelligence (AI) methods have been proposed over recent years to create controllers to play multiple video games of different nature and complexity without revealing the specific mechanics of each of these games to the AI methods. In recent years, Evolutionary Algorithms (EAs) employing rolling horizon mechanisms have achieved extraordinary results in these type of problems. However, some limitations are present in Rolling Horizon EAs making it a grand challenge of AI. These limitations include the wasteful mechanism of creating a population and evolving it over a fraction of a second to propose an action to be executed by the game agent. Another limitation is to use a scalar value (fitness value) to direct evolutionary search instead of accounting for a mechanism that informs us how a particular agent behaves during the rolling horizon simulation. In this work, we address both of these issues. We introduce the use of a statistical tree that tackles the latter limitation. Furthermore, we tackle the former limitation by employing a mechanism that allows us to seed part of the population using Monte Carlo Tree Search, a method that has dominated multiple General Video Game AI competitions. 
We show how the proposed novel mechanism, called Statistical Tree-based Population Seeding, achieves better results compared to vanilla Rolling Horizon EAs in a set of 20 games, including 10 stochastic and 10 deterministic games. ",Statistical Tree-based Population Seeding for Rolling Horizon EAs in General Video Game Playing Study of gravitational-radiation induced merging rates of relativistic binary stars (double neutron stars; neutron star + black hole; double black holes) shows that the first-generation gravitational wave interferometers with an rms-sensitivity of $10^{-21}$ at frequency 100 Hz can detect 10-700 black hole and only $\sim 1$ neutron star coalescences in a 1-year integration time in a wide range of stellar evolution parameters. It is notable that modern concepts of stellar evolution predict that the first detection of gravitational waves will independently discover black holes. ,Black holes and gravitational waves: simultaneous discovery by initial laser interferometers " We studied the effects of the incommensurate structural modulations on the ladder subsystem of the $Sr_{14-x}Ca_{x}Cu_{24}O_{41}$ family of compounds using ab-initio explicitly-correlated calculations. From these calculations we derived a $t-J$ model as a function of the fourth crystallographic coordinate $\tau$ describing the incommensurate modulations. It was found that in the highly calcium-doped system, the on-site orbital energies are strongly modulated along the ladder legs. On the contrary, the two sites of the ladder rungs are iso-energetic and the holes are thus expected to be delocalized on the rungs. Chain-ladder interactions were also evaluated and found to be negligible. The ladder superconductivity model for these systems is discussed in the light of the present results.
",Influence of the structural modulations and the chain-ladder interaction in the $Sr_{14-x}Ca_{x}Cu_{24}O_{41}$ compounds " This note establishes smooth approximation from above for J-plurisubharmonic functions on an almost complex manifold (X,J). The following theorem is proved. Suppose X is J-pseudoconvex, i.e., X admits a smooth strictly J-plurisubharmonic exhaustion function. Let u be an (upper semi-continuous) J-plurisubharmonic function on X. Then there exists a sequence {u_j} of smooth, strictly J-plurisubharmonic functions point-wise decreasing down to u. On any almost complex manifold (X,J) each point has a fundamental neighborhood system of J-pseudoconvex domains, and so the theorem above establishes local smooth approximation on X. This result was proved in complex dimension 2 by the third author, who also showed that the result would hold in general dimensions if a parallel result for continuous approximation were known. This paper establishes the required step by solving the obstacle problem. ",Smooth Approximation of Plurisubharmonic Functions on Almost Complex Manifolds " The skill of seasonal precipitation forecasts is assessed worldwide -grid point by grid point- for the forty-year period 1961-2000. To this aim, the ENSEMBLES multi-model hindcast is considered. Although predictability varies with region, season and lead-time, results indicate that 1) significant skill is mainly located in the tropics -20 to 40% of the total land areas-, 2) overall, SON (MAM) is the most (least) skillful season and 3) predictability does not decrease noticeably from one to four months lead-time -this is so especially in northern South America and the Malay Archipelago, which seem to be the most skillful regions of the world-. An analysis of teleconnections revealed that most of the skillful zones exhibit significant teleconnections with El Ni\~no.
Furthermore, models are shown to reproduce similar teleconnection patterns to those observed, especially in SON -with spatial correlations of around 0.6 in the tropics-. Moreover, these correlations are systematically higher for the skillful areas. Our results indicate that the skill found might be determined to a great extent by the models' ability to properly reproduce the observed El Ni\~no teleconnections, i.e., the better a model simulates the El Ni\~no teleconnections, the higher its performance is. ",Global Forty-Years Validation of Seasonal Precipitation Forecasts: Assessing El Ni\~no-Driven Skill I review the current status of lattice QCD results. I concentrate on new analytical developments and on numerical results relevant to phenomenology. ,Recent Developments in Lattice QCD " The hybrid photodetector (HPD) R9792U-40 has very high peak quantum efficiency ($>50$% at 500 nm), excellent charge resolution and very low after-pulsing probability (500 times less than that of currently used photomultipliers (PMTs)). These features will improve the sensitivity, the energy resolution and the energy threshold of the MAGIC telescope. On the other hand, its high photocathode voltage (-8 to -6 kV), relatively short photocathode lifetime, and relatively large temperature dependence of the gain need to be taken care of. In February 2010, 6 HPDs were installed in a corner of the MAGIC-II camera for a field test. Here we report the results of the field test and our future plans. ",Field test of the hybrid photodetector R9792U-40 on the MAGIC camera " Sphere decoding (SD) is a low complexity maximum likelihood (ML) detection algorithm, which has been adapted for different linear channels in digital communications. The complexity of the SD has been shown to be exponential in some cases, and polynomial in others and under certain assumptions. 
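As an editorial aside to the sphere decoding abstract above: the radius-pruned depth-first tree search at the heart of SD can be sketched as follows. This is a generic textbook-style sketch, not the paper's reduced-complexity variant; all names, dimensions, and the alphabet are illustrative.

```python
import numpy as np

def sphere_decode(H, y, alphabet):
    """Depth-first sphere decoder for min_x ||y - H x||^2 with x drawn from a
    finite alphabet. QR-factorizing H makes the metric accumulate level by
    level, so any branch whose partial distance already exceeds the best full
    solution found so far (the shrinking radius) is pruned."""
    n = H.shape[1]
    Q, R = np.linalg.qr(H)          # H = QR, R upper triangular
    z = Q.T @ y                     # ||y - Hx||^2 == ||z - Rx||^2
    best = {"x": None, "d": np.inf}
    x = np.zeros(n)

    def search(k, dist):
        if dist >= best["d"]:       # prune: node lies outside current radius
            return
        if k < 0:                   # leaf: a full candidate was enumerated
            best["x"], best["d"] = x.copy(), dist
            return
        for s in alphabet:          # expand children of this tree node
            x[k] = s
            e = z[k] - R[k, k:] @ x[k:]
            search(k - 1, dist + e * e)

    search(n - 1, 0.0)
    return best["x"], best["d"]
```

Because the radius shrinks whenever a better leaf is found, the number of visited nodes, one of the two complexity measures discussed in the abstract, depends strongly on the noise level and on how quickly a good leaf is reached.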
The sphere radius and the number of nodes visited throughout the tree traversal search are the decisive factors for the complexity of the algorithm. The radius problem has been addressed and treated widely in the literature. In this paper, we propose a new structure for SD, which drastically reduces the overall complexity. The complexity is measured in terms of the floating point operations per second (FLOPS) and the number of nodes visited throughout the algorithm tree search. This reduction in the complexity is due to the ability of decoding the real and imaginary parts of each jointly detected symbol independently of each other, making use of the new lattice representation. We further show by simulations that the new approach achieves 80% reduction in the overall complexity compared to the conventional SD for a 2x2 system, and almost 50% reduction for the 4x4 and 6x6 cases, thus relaxing the requirements for hardware implementation. ",Reduced Complexity Sphere Decoding for Square QAM via a New Lattice Representation " Sharing economy has become a socio-economic trend in transportation and housing sectors. It develops business models leveraging underutilized resources. Like those sectors, power grid is also becoming smarter with many flexible resources, and researchers are investigating the impact of sharing resources here as well that can help to reduce cost and extract value. In this work, we investigate sharing of energy storage devices among individual households in a cooperative fashion. Coalitional game theory is used to model the scenario where utility company imposes time-of-use (ToU) price and net metering billing mechanism. The resulting game has a non-empty core and we can develop a cost allocation mechanism with easy to compute analytical formula. Allocation is fair and cost effective for every household. We design the price for peer to peer network (P2P) and an algorithm for sharing that keeps the grand coalition always stable. 
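As a hypothetical numerical aside to the storage-sharing abstract above: the cost advantage of an allocated battery share under time-of-use (ToU) prices can be illustrated with a deliberately simplified model. The function, its parameters, and the numbers are invented for illustration; the paper's coalitional-game allocation and net-metering billing are not modeled here.

```python
def tou_cost(load_kwh, price_per_kwh, battery_kwh, peak_hours):
    """Electricity cost of serving an hourly load under ToU prices,
    discharging an allocated battery share first during peak hours.
    Simplifications: no charging cost, no losses, no export credit."""
    cost, soc = 0.0, battery_kwh
    for hour, (load, price) in enumerate(zip(load_kwh, price_per_kwh)):
        if hour in peak_hours and soc > 0.0:
            discharge = min(load, soc)   # serve peak load from the battery
            soc -= discharge
            load -= discharge
        cost += load * price             # buy the remainder from the grid
    return cost
```

With loads [1, 2, 3] kWh, prices [0.1, 0.1, 0.3] $/kWh and hour 2 on peak, a household with no storage pays 1.2, while an allocated 2 kWh share cuts the bill to 0.6, which is the kind of saving the cost allocation mechanism divides among coalition members.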
Thus, sharing the electricity stored in these devices among consumers can be effective in this set-up. Our mechanism is implemented in a community of 80 households in Texas using real demand and solar irradiance data, and the results show significant cost savings for our method. ",Peer-to-Peer Sharing of Energy Storage Systems under Net Metering and Time-of-Use Pricing " All-digital basestation (BS) architectures for millimeter-wave (mmWave) massive multi-user multiple-input multiple-output (MU-MIMO), which equip each radio-frequency chain with dedicated data converters, have advantages in spectral efficiency, flexibility, and baseband-processing simplicity over hybrid analog-digital solutions. For all-digital architectures to be competitive with hybrid solutions in terms of power consumption, novel signal-processing methods and baseband architectures are necessary. In this paper, we demonstrate that adapting the resolution of the analog-to-digital converters (ADCs) and spatial equalizer of an all-digital system to the communication scenario (e.g., the number of users, modulation scheme, and propagation conditions) enables orders-of-magnitude power savings for realistic mmWave channels. For example, for a 256-BS-antenna 16-user system supporting 1 GHz bandwidth, a traditional baseline architecture designed for a 64-user worst-case scenario would consume 23 W in 28 nm CMOS for the ADC array and the spatial equalizer, whereas a resolution-adaptive architecture is able to reduce the power consumption by 6.7x. ",Resolution-Adaptive All-Digital Spatial Equalization for mmWave Massive MU-MIMO " The size and structure of spatial molecular and atomic clustering can significantly impact material properties and is therefore important to accurately quantify. Ripley's K-function (K(r)), a measure of spatial correlation, can be used to perform such quantification when the material system of interest can be represented as a marked point pattern.
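To make the Ripley's K-function mentioned above concrete, here is a minimal estimator for a 3D point pattern. This is the naive form without edge correction (practical analyses, including presumably the paper's, apply corrections near the window boundary), and the function name and normalization convention are one common choice among several.

```python
import numpy as np

def ripley_k(points, r, volume):
    """Naive estimate of Ripley's K at radius r for a 3D point pattern
    observed in a window of the given volume: the mean number of further
    points within distance r of a typical point, normalized by intensity.
    No edge correction is applied."""
    pts = np.asarray(points, dtype=float)
    n = len(pts)
    d = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
    close = (d <= r) & ~np.eye(n, dtype=bool)   # ordered pairs, no self-pairs
    return volume * close.sum() / (n * (n - 1))
```

Under complete spatial randomness K(r) is approximately (4/3)*pi*r**3, so an excess of the empirical K over that curve signals clustering at scale r; deviations like these are the kind of K(r)-derived metrics a model can learn cluster size and density from.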
This work demonstrates how machine learning models based on K(r)-derived metrics can accurately estimate cluster size and intra-cluster density in simulated three dimensional (3D) point patterns containing spherical clusters of varying size; over 90% of model estimates for cluster size and intra-cluster density fall within 11% and 18% error of the true values, respectively. These K(r)-based size and density estimates are then applied to an experimental APT reconstruction to characterize MgZn clusters in a 7000 series aluminum alloy. We find that the estimates are more accurate, consistent, and robust to user interaction than estimates from the popular maximum separation algorithm. Using K(r) and machine learning to measure clustering is an accurate and repeatable way to quantify this important material attribute. ",Using Ripley's K-function to Characterize Clustering In 3-Dimensional Point Patterns With a Case Study in Atom Probe Tomography " There is no single canonical polynomial-time version of the Axiom of Choice (AC); several statements of AC that are equivalent in Zermelo-Fraenkel (ZF) set theory are already inequivalent from a constructive point of view, and are similarly inequivalent from a complexity-theoretic point of view. In this paper we show that many classical formulations of AC, when restricted to polynomial time in natural ways, are equivalent to standard complexity-theoretic hypotheses, including several that were of interest to Selman. This provides a unified view of these hypotheses, and we hope provides additional motivation for studying some of the lesser-known hypotheses that appear here. Additionally, because several classical forms of AC are formulated in terms of cardinals, we develop a theory of polynomial-time cardinality. Nerode & Remmel (Contemp. Math. 106, 1990 and Springer Lec. Notes Math. 1432, 1990) developed a related theory, but restricted to unary sets. Downey (Math. 
Reviews MR1071525) suggested that such a theory over larger alphabets could have interesting connections to more standard complexity questions, and we illustrate some of those connections here. The connections between AC, cardinality, and complexity questions also allow us to highlight some of Selman's work. We hope this paper is more of a beginning than an end, introducing new concepts and raising many new questions, ripe for further research. ",Polynomial-Time Axioms of Choice and Polynomial-Time Cardinality " A study of the linearised gravitational field (spin 2 zero-rest-mass field) on a Minkowski background close to spatial infinity is done. To this purpose, a certain representation of spatial infinity in which it is depicted as a cylinder is used. A first analysis shows that the solutions generically develop a particular type of logarithmic divergence at the sets where spatial infinity touches null infinity. A regularity condition on the initial data can be deduced from the analysis of some transport equations on the cylinder at spatial infinity. It is given in terms of the linearised version of the Cotton tensor and symmetrised higher order derivatives, and it ensures that the solutions of the transport equations extend analytically to the sets where spatial infinity touches null infinity. It is later shown that this regularity condition together with the requirement of some particular degree of tangential smoothness ensures logarithm-free expansions of the time development of the linearised gravitational field close to spatial and null infinities. ",Polyhomogeneous expansions close to null and spatial infinity " We present a high order time-domain nodal discontinuous Galerkin method for wave problems on hybrid meshes consisting of both wedge and tetrahedral elements. We allow for vertically mapped wedges which can be deformed along the extruded coordinate, and present a simple method for producing quasi-uniform wedge meshes for layered domains. 
We show that standard mass lumping techniques result in a loss of energy stability on meshes of vertically mapped wedges, and propose an alternative which is both energy stable and efficient. High order convergence is demonstrated, and comparisons are made with existing low-storage methods on wedges. Finally, the computational performance of the method on Graphics Processing Units is evaluated. ",Reduced storage nodal discontinuous Galerkin methods on semi-structured prismatic meshes " In this paper, we study Serre's condition $(S_n)$ for tensor products of modules over a commutative noetherian local ring. The paper aims to show the following. Let $M$ and $N$ be finitely generated module over a commutative noetherian local ring $R$, either of which is $(n+1)$-Tor-rigid. If the tensor product $M \otimes_R N$ satisfies $(S_{n+1})$, then under some assumptions $\mathrm{Tor}_{i}^R(M, N) = 0$ for all $i \ge 1$. The key role is played by $(n+1)$-Tor-rigidity of modules. As applications, we will show that the result recovers several known results. ",Serre's condition for tensor products and $n$-Tor-rigidity of modules " In this contribution, based on the discussion at a round table of the XIII Quark Confinement and the Hadron Spectrum - Confinement conference, we review the main properties of the QCD axion and, more generally, of axion-like particles and their relevance in astrophysics and cosmology. In the last section we describe the experimental concepts to search for the QCD axion and axion-like particles (ALPs). ",Round Table on Axions and Axion-like Particles " We demonstrate the existence of robust bulk extended states in the disordered Kane-Mele model with vertical and horizontal Zeeman fields, in the presence of a large Rashba coupling. The phase diagrams are mapped out by using level statistics analysis and computations of the localization length and spin-Chern numbers $C_\pm$. $C_\pm$ are protected by the finite energy and spin mobility gaps. 
The latter is shown to stay open for arbitrarily large vertical Zeeman fields, or for horizontal Zeeman fields below a critical strength or at moderate disorder. In such cases, a change of $C_\pm$ is necessarily accompanied by the closing of the mobility gap at the Fermi level. The numerical simulations reveal sharp changes in the quantized values of $C_\pm$ when crossing the regions of bulk extended states, indicating that the topological nature of the extended states is indeed linked to the spin-Chern numbers. For large horizontal Zeeman fields, the spin-gap closes at strong disorder prompting a change in the quantized spin-Chern numbers without a closing of the energy mobility gap. ",Topologically Protected Extended States in Disordered Quantum Spin-Hall Systems without Time-Reversal Symmetry " Online misinformation poses a global risk with significant real-world consequences. To combat misinformation, current research relies on professionals like journalists and fact-checkers for annotating and debunking misinformation, and develops automated machine learning methods for detecting misinformation. Complementary to these approaches, recent research has increasingly concentrated on utilizing the power of ordinary social media users, a.k.a. ""crowd"", who act as eyes-on-the-ground proactively questioning and countering misinformation. Notably, recent studies show that 96% of counter-misinformation responses originate from them. Acknowledging their prominent role, we present the first systematic and comprehensive survey of research papers that actively leverage the crowds to combat misinformation. We first identify 88 papers related to crowd-based efforts, following a meticulous annotation process adhering to the PRISMA framework. We then present key statistics related to misinformation, counter-misinformation, and crowd input in different formats and topics. 
Upon holistic analysis of the papers, we introduce a novel taxonomy of the roles played by the crowds: (i) annotators who actively identify misinformation; (ii) evaluators who assess counter-misinformation effectiveness; (iii) creators who create counter-misinformation. This taxonomy explores the crowd's capabilities in misinformation detection, identifies prerequisites for effective counter-misinformation, and analyzes crowd-generated counter-misinformation. Then, we delve into (i) distinguishing individual, collaborative, and machine-assisted labeling for annotators; (ii) analyzing the effectiveness of counter-misinformation through surveys, interviews, and in-lab experiments for evaluators; and (iii) characterizing creation patterns and creator profiles for creators. Finally, we outline potential future research in this field. ","A Survey on the Role of Crowds in Combating Online Misinformation: Annotators, Evaluators, and Creators" " In this chapter we examine reduced order techniques for geometrically parametrized heat exchange systems, Poisson, and flows based on Stokes, steady and unsteady incompressible Navier-Stokes and Cahn-Hilliard problems. The full order finite element methods, employed in an embedded and/or immersed geometry framework, are the Shifted Boundary (SBM) and the Cut elements (CutFEM) methodologies, with applications mainly focused in fluids. We start by introducing Nitsche's method, for both SBM/CutFEM and parametrized physical problems as well as the high fidelity approximation. We continue with the full order parameterized Nitsche shifted boundary variational weak formulation, and the reduced order modeling ideas based on a Proper Orthogonal Decomposition Galerkin method and geometrical parametrization, quoting the main differences and advantages with respect to a reference domain approach used for classical finite element methods, while stability issues may be overcome by employing supremizer enrichment methodologies.
Numerical experiments verify the efficiency of the introduced ``hello world'' problems, considering reduced order results in several cases of one-, two-, three- and four-dimensional geometrical parametrization. We investigate execution times, and we illustrate transport methods and improvements. A list of important references related to unfitted methods and reduced order modeling is given in [11, 8, 9, 10, 7, 6, 12]. ","Reduced Basis, Embedded Methods and Parametrized Levelset Geometry" " Computation offloading in multi-access edge computing (MEC) is an effective paradigm for enabling resource-intensive smart applications. However, when the wireless channel utilized for offloading computing activities is hostile, the proper advantages of MEC may not be completely realized. Intelligent reflecting surface (IRS) is a new technology that has recently attracted significant interest; it can optimize the wireless transmission environment in a programmable way, improving the connectivity between user equipment (UE) and base station (BS). In this paper, the performance of the MEC architecture is analyzed for both IRS-assisted and IRS-free communication scenarios in the context of urban micro cellular deployments. The results show that the deployment of IRS can reduce the spectrum and energy consumption significantly. ",IRS for Multi-Access Edge Computing in 6G Networks " Modern social and biomedical scientific publications require the reporting of covariate balance tables with not only covariate means by treatment group but also the associated $p$-values from significance tests of their differences. The practical need to avoid small $p$-values renders balance checks and rerandomization by hypothesis-testing standards an attractive tool for improving covariate balance in randomized experiments.
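A minimal sketch of what the p-value-based rerandomization described above can look like in practice: redraw the treatment assignment until every covariate's two-sample balance test clears a p-value threshold. The function names, the normal-approximation z-test, and the 0.15 threshold are illustrative assumptions, not the paper's specification.

```python
import math
import random

def balance_p(x, g):
    """Two-sample z-test p-value (large-sample normal approximation) for
    covariate x between groups g == 1 and g == 0."""
    a = [xi for xi, gi in zip(x, g) if gi]
    b = [xi for xi, gi in zip(x, g) if not gi]
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    va = sum((v - ma) ** 2 for v in a) / (len(a) - 1)
    vb = sum((v - mb) ** 2 for v in b) / (len(b) - 1)
    z = (ma - mb) / math.sqrt(va / len(a) + vb / len(b))
    return 2.0 * (1.0 - 0.5 * (1.0 + math.erf(abs(z) / math.sqrt(2.0))))

def rerandomize(covariates, n_treat, alpha=0.15, max_tries=10000, seed=0):
    """Redraw a complete randomization until every covariate's balance
    p-value exceeds alpha (a ReP-style acceptance rule)."""
    rng = random.Random(seed)
    n = len(covariates[0])
    units = list(range(n))
    for _ in range(max_tries):
        rng.shuffle(units)
        g = [0] * n
        for i in units[:n_treat]:
            g[i] = 1
        if all(balance_p(x, g) > alpha for x in covariates):
            return g
    raise RuntimeError("no acceptable assignment found")
```

The accepted assignment is, by construction, balanced by the chosen testing standard; the abstract's point is that the downstream analysis should account for this acceptance step rather than treat the design as a fresh complete randomization.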
Despite the intuitiveness of such practice and its arguably already widespread use in reality, the existing literature knows little about its implications for subsequent inference, subjecting many effectively rerandomized experiments to possibly inefficient analyses. To fill this gap, we examine a variety of potentially useful schemes for rerandomization based on $p$-values (ReP) from covariate balance tests, and demonstrate their impact on subsequent inference. Specifically, we focus on three estimators of the average treatment effect from the unadjusted, additive, and fully interacted linear regressions of the outcome on treatment, respectively, and derive their respective asymptotic sampling properties under ReP. The main findings are twofold. First, the estimator from the fully interacted regression is asymptotically the most efficient under all ReP schemes examined, and permits convenient regression-assisted inference identical to that under complete randomization. Second, ReP improves not only covariate balance but also the efficiency of the estimators from the unadjusted and additive regressions asymptotically. The standard regression analysis, in consequence, is still valid but can be overly conservative. ",No star is good news: A unified look at rerandomization based on $p$-values from covariate balance tests " We present a classification method with incremental capabilities based on the Optimum-Path Forest classifier (OPF). The OPF considers instances as nodes of a fully-connected training graph, where arc weights represent distances between feature vectors. Our algorithm includes new instances in an OPF in linear time, while keeping similar accuracies when compared with the original quadratic-time model. ",An incremental linear-time learning algorithm for the Optimum-Path Forest classifier " To an arbitrary Lie superalgebra $L$ we associate its Jordan double ${\mathcal Jor}(L)$, which is a Jordan superalgebra.
This notion was introduced earlier by the second author; here we study further applications of this construction. First, we show that the Gelfand-Kirillov dimension of a Jordan superalgebra can be an arbitrary number in $\{0\}\cup [1,+\infty]$. Thus, unlike associative and Jordan algebras, there is no analogue of Bergman's gap $(1,2)$ for the Gelfand-Kirillov dimension of Jordan superalgebras. Second, using the Lie superalgebra $\mathbf R$ constructed before, we construct a Jordan superalgebra $\mathbf J={\mathcal Jor}({\mathbf R})$ that is nil finely $\mathbb Z^3$-graded, in contrast with non-existence of such examples (roughly speaking, analogues of the Grigorchuk and Gupta-Sidki groups) of Lie algebras in characteristic zero and Jordan algebras in characteristic not 2. Also, $\mathbf J$ is just infinite but not hereditary just infinite. A similar Jordan superalgebra of slow polynomial growth was constructed before. The virtue of the present example is that it is of linear growth, of finite width 4, namely, its $\mathbb N$-gradation by degree in the generators has components of dimensions $\{0,2,3,4\}$, and the sequence of these dimensions is non-periodic. Third, we review constructions of Poisson and Jordan superalgebras starting with another example of a Lie superalgebra. We discuss the notion of self-similarity for Lie, associative, Poisson, and Jordan superalgebras. We also discuss the notion of a wreath product in the case of Jordan superalgebras. ",On Jordan doubles of slow growth of Lie superalgebras " In this letter, the problem of radiation in a fiber geometry interacting with a two-level atom is mapped onto the anisotropic Kondo model. Thermodynamical and dynamical properties are then computed exploiting the integrability of this latter system. We compute some correlation functions, decay rates and Lamb shifts. In turn this leads to an analysis of the classical limit of the anisotropic Kondo model.
",The Maxwell-Bloch Theory in Quantum Optics and the Kondo Model " We present a new, dynamical way to study powers (that is, repetitions) in Sturmian words based on results from Diophantine approximation theory. As a result, we provide an alternative and shorter proof of a result by Damanik and Lenz characterizing powers in Sturmian words [Powers in Sturmian sequences, Eur. J. Combin. 24 (2003), 377--390]. Further, as a consequence, we obtain a previously known formula for the fractional index of a Sturmian word based on the continued fraction expansion of its slope. ",Characterization of repetitions in Sturmian words: A new proof " We study the entanglement game, which is a version of cops and robbers, on sparse graphs. While the minimum degree of a graph G is a lower bound for the number of cops needed to catch a robber in G, we show that the required number of cops can be much larger, even for graphs with small maximum degree. In particular, we show that there are 3-regular graphs where a linear number of cops are needed. ",Even flying cops should think ahead " We propose and study a constrained version of the Exceptional Supersymmetric Standard Model (E6SSM), which we call the cE6SSM, based on a universal high energy scalar mass m_0, trilinear scalar coupling A_0 and gaugino mass M_{1/2}. We derive the Renormalisation Group (RG) Equations for the cE6SSM, including the extra U(1)_{N} gauge factor and the low energy matter content involving three 27 representations of E6. We perform a numerical RG analysis for the cE6SSM, imposing the usual low energy experimental constraints and successful Electro-Weak Symmetry Breaking (EWSB). Our analysis reveals that the sparticle spectrum of the cE6SSM involves a light gluino, two light neutralinos and a light chargino. Furthermore, although the squarks, sleptons and Z' boson are typically heavy, the exotic quarks and squarks can also be relatively light. 
We finally specify a set of benchmark points which correspond to particle spectra, production modes and decay patterns peculiar to the cE6SSM, altogether leading to spectacular new physics signals at the Large Hadron Collider (LHC). ",The Constrained Exceptional Supersymmetric Standard Model " Any non-trivial concurrent system warrants synchronisation, regardless of the concurrency model. Actor-based concurrency serialises all computations in an actor through asynchronous message passing. In contrast, lock-based concurrency serialises some computations by following a lock--unlock protocol for accessing certain data. Both systems require sound reasoning about pointers and aliasing to exclude data-races. If actor isolation is broken, so is the single-thread-of-control abstraction. Similarly for locks, if a datum is accessible outside of the scope of the lock, the datum is not governed by the lock. In this paper we discuss how to balance aliasing and synchronisation. In previous work, we defined a type system that guarantees data-race freedom of actor-based concurrency and lock-based concurrency. This paper extends this work by the introduction of two programming constructs: one for decoupling isolation and synchronisation, and one for constructing higher-level atomicity guarantees from lower-level synchronisation. We focus predominantly on actors, and in particular the Encore programming language, but our ultimate goal is to define our constructs in such a way that they can be used both with locks and actors, given that combinations of both models occur frequently in actual systems. We discuss the design space, provide several formalisations of different semantics and discuss their properties, and connect them to case studies showing how our proposed constructs can be useful. We also report on an ongoing implementation of our proposed constructs in Encore. 
","Bestow and Atomic: Concurrent Programming using Isolation, Delegation and Grouping" " The interference in wireless networks is temporally correlated, since the node or user locations are correlated over time and the interfering transmitters are a subset of these nodes. For a wireless network where (potential) interferers form a Poisson point process and use ALOHA for channel access, we calculate the joint success and outage probabilities of n transmissions over a reference link. The results are based on the diversity polynomial, which captures the temporal interference correlation. The joint outage probability is used to determine the diversity gain (as the SIR goes to infinity), and it turns out that there is no diversity gain in simple retransmission schemes, even with independent Rayleigh fading over all links. We also determine the complete joint SIR distribution for two transmissions and the distribution of the local delay, which is the time until a repeated transmission over the reference link succeeds. ",Diversity Polynomials for the Analysis of Temporal Correlations in Wireless Networks " I present a comprehensive study of the MBM12 young association (MBM12A). By combining infrared (IR) photometry from the Two-Micron All-Sky Survey (2MASS) with new optical imaging and spectroscopy, I have performed a census of the MBM12A membership that is complete to 0.03 Msun (H~15) for a 1.75deg X 1.4deg field encompassing the MBM12 cloud. I find five new members with masses of 0.1-0.4 Msun and a few additional candidates that have not been observed spectroscopically. From an analysis of optical and IR photometry for stars in the direction of MBM12, I identify M dwarfs in the foreground and background of the cloud. 
By comparing the magnitudes of these stars to those of local field dwarfs, I arrive at a distance modulus of 7.2+/-0.5 (275 pc) to the MBM12 cloud; it is not the nearest molecular cloud and is not inside the local bubble of hot ionized gas as had been implied by previous distance estimates of 50-100 pc. I have also used Li strengths and H-R diagrams to constrain the absolute and relative ages of MBM12A and other young populations; these data indicate ages of 2 +3/-1 Myr for MBM12A and 10 Myr for the TW Hya and Eta Cha associations. MBM12A may be a slightly evolved version of the aggregates of young stars within the Taurus dark clouds (~1 Myr) near the age of the IC 348 cluster (~2 Myr). ",On the MBM12 Young Association " The $\phi$ scalar boson is a new mediator proposed to describe the direct light-by-light scattering recently observed in ultra-peripheral collisions (UPCs) with the ATLAS detector, in which two photons interact directly to give two photons in the final state. The proposed Lagrangian provides a description of this otherwise forbidden process. The description presented in this paper comprises the interaction terms, the coupling to the Higgs boson that generates the mass of this new resonance, and a simulation using proton-proton collisions in order to observe this process in other nominal LHC collisions, since so far the process has only been observed in heavy-ion collisions. ","$\phi$-Lagrangian, a new scalar mediator for light-by-light scattering process" " We present compact analytic results for the one-loop amplitude for the process $0 \rightarrow q \bar{q} \ell \bar\ell \ell^\prime \bar{\ell}^\prime g$, relevant for both the production of a pair of $Z$ and $W$-bosons in association with a jet. We focus on the gauge-invariant contribution mediated by a loop of quarks. We explicitly include all effects of the loop-quark mass $m$, appropriate for the production of a pair of $Z$-bosons. 
In the limit $m \to 0$, our results are also applicable to the production of $W$-boson pairs, mediated by a loop of massless quarks. Implemented in a numerical code, the expressions evaluate quickly. The calculation uses novel advancements in spinor-helicity simplification techniques, for the first time applied beyond five-point massless kinematics. We make use of primary decompositions from algebraic geometry, which now involve non-radical ideals, and $p$-adic numbers from number theory. We show how to infer whether numerator polynomials belong to symbolic powers of non-radical ideals through numerical evaluations. ",Vector boson pair production at one loop: analytic results for the process $q \bar{q} \ell \bar\ell \ell^\prime \bar{\ell}^\prime g$ " We carried out observations of pulsar PSR B1919+21 at 324 MHz to study the distribution of interstellar plasma in the direction of this pulsar. We used the RadioAstron (RA) space radiotelescope together with two ground telescopes: Westerbork (WB) and Green Bank (GB). The maximum baseline projection for the space-ground interferometer was about 60000 km. We show that interstellar scintillation of this pulsar consists of two components: diffractive scintillations from inhomogeneities in a layer of turbulent plasma at a distance $z_{1} = 440$ pc from the observer, or from scattering material homogeneously distributed out to the pulsar; and weak scintillations from a screen located near the observer at $z_{2} = 0.14 \pm 0.05$ pc. Furthermore, in the direction of the pulsar we detected a prism that deflects radiation, leading to a shift of the observed source position. We show that the influence of the ionosphere can be ignored for the space-ground baseline. Analysis of the spatial coherence function for the space-ground baseline (RA-GB) yielded the scattering angle in the observer plane: $\theta_{scat}$ = 0.7 mas. 
An analysis of the time-frequency correlation function for weak scintillations yielded the angle of refraction in the direction of the pulsar: $\theta_{ref, 0}$ = 110 mas and the distance to the prism $z_{prism} \le 2$ pc. ",Interstellar scintillations of PSR B1919+21: space-ground interferometry " In this paper, we study the asymptotic behavior of the supremum distribution of some classes of iterated stochastic processes $\{X(Y(t)) : t \in [0, \infty)\}$, where $\{X(t) : t \in \mathbb{R} \}$ is a centered Gaussian process and $\{Y(t): t \in [0, \infty)\}$ is a stochastic process, independent of $\{X(t)\}$, with a.s. continuous sample paths. In particular, the asymptotic behavior of $\mathbb{P}\left(\sup_{s \in [0,T]} X(Y(s)) > u\right)$ as $u \to \infty$, where $T > 0$, as well as $\lim_{u\to\infty} \mathbb{P}\left(\sup_{s \in [0, h(u)]} X(Y(s)) > u\right)$, for some suitably chosen function $h(u)$, are analyzed. As an illustration, we study the asymptotic behavior of the supremum distribution of the iterated fractional Brownian motion process. ",On the asymptotics of supremum distribution for some iterated processes " Lesion segmentation in medical imaging has been an important topic in clinical research. Researchers have proposed various detection and segmentation algorithms to address this task. Recently, deep learning-based approaches have significantly improved the performance over conventional methods. However, most state-of-the-art deep learning methods require the manual design of multiple network components and training strategies. In this paper, we propose a new automated machine learning algorithm, T-AutoML, which not only searches for the best neural architecture, but also finds the best combination of hyper-parameters and data augmentation strategies simultaneously. The proposed method utilizes the modern transformer model, which is introduced to adapt to the dynamic length of the search space embedding and can significantly improve the ability of the search. 
We validate T-AutoML on several large-scale public lesion segmentation datasets and achieve state-of-the-art performance. ",T-AutoML: Automated Machine Learning for Lesion Segmentation using Transformers in 3D Medical Imaging " Using numerical simulations and scaling theory, we study the dynamics of the world-wide Web from the growth rules recently proposed in Ref. [1] with appropriate parameters. We demonstrate that the emergence of power-law behavior of the out- and in-degree distributions in the Web involves the occurrence of temporal fractal structures, which are manifested in the scale-free growth of the local connectivity and in first-return time statistics. We also show how the scale-free behavior occurs in the statistics of random walks on the Web, where the walkers use information on the local graph connectivity. ",Temporal fractal structures: Origin of power-laws in the world-wide Web " The framework of inner product norm preserving relaxation Runge-Kutta methods (David I. Ketcheson, \emph{Relaxation Runge-Kutta Methods: Conservation and Stability for Inner-Product Norms}, SIAM Journal on Numerical Analysis, 2019) is extended to general convex quantities. Conservation, dissipation, or other solution properties with respect to any convex functional are enforced by the addition of a {\em relaxation parameter} that multiplies the Runge-Kutta update at each step. Moreover, other desirable stability (such as strong stability preservation) and efficiency (such as low storage requirements) properties are preserved. The technique can be applied to both explicit and implicit Runge-Kutta methods and requires only a small modification to existing implementations. The computational cost at each step is the solution of one additional scalar algebraic equation for which a good initial guess is available. 
The effectiveness of this approach is proved analytically and demonstrated in several numerical examples, including applications to high-order entropy-conservative and entropy-stable semi-discretizations on unstructured grids for the compressible Euler and Navier-Stokes equations. ",Relaxation Runge-Kutta Methods: Fully-Discrete Explicit Entropy-Stable Schemes for the Compressible Euler and Navier-Stokes Equations " The sudden widespread menace created by the present global pandemic COVID-19 has had an unprecedented effect on our lives. Mankind is experiencing immense fear and dependence on social media like never before. Fear inevitably leads to panic, speculations, and the spread of misinformation. Many governments have taken measures to curb the spread of such misinformation for public well-being. Besides global measures, to have effective outreach, systems for demographically local languages have an important role to play in this effort. Towards this, we propose an approach to detect fake news about COVID-19 early on from social media, such as tweets, for multiple Indic languages besides English. In addition, we also create an annotated dataset of Hindi and Bengali tweets for fake news detection. We propose a BERT-based model augmented with additional relevant features extracted from Twitter to identify fake tweets. To expand our approach to multiple Indic languages, we resort to an mBERT-based model which is fine-tuned over the created dataset in Hindi and Bengali. We also propose a zero-shot learning approach to alleviate the data scarcity issue for such low-resource languages. Through rigorous experiments, we show that our approach reaches around 89% F-Score in fake tweet detection, which supersedes the state-of-the-art (SOTA) results. Moreover, we establish the first benchmark for two Indic languages, Hindi and Bengali. Using our annotated data, our model achieves about 79% F-Score in Hindi and 81% F-Score for Bengali tweets. 
Our zero-shot model achieves about 81% F-Score in Hindi and 78% F-Score for Bengali tweets without any annotated data, which clearly indicates the efficacy of our approach. ",No Rumours Please! A Multi-Indic-Lingual Approach for COVID Fake-Tweet Detection " Accurate photometric and kinematic modelling of disc galaxies requires the inclusion of radiative transfer models. Due to the complexity of the radiative transfer equation (RTE), sophisticated techniques are required. Various techniques have been employed for the attenuation in disc galaxies, but a quantitative comparison of them is difficult, because of the differing assumptions, approximations and accuracy requirements which are adopted in the literature. In this paper, we present an unbiased comparison of four methods to solve the RTE, in terms of accuracy, efficiency and flexibility. We apply them all to one problem that can serve as a first approximation of large portions of disc galaxies: a one-dimensional plane-parallel geometry, with both absorption and multiple scattering taken into account, with arbitrary vertical distributions of stars and dust and an arbitrary angular redistribution of the scattering. We find that the spherical harmonics method is by far the most efficient way to solve the RTE, whereas both Monte Carlo simulations and the iteration method, which are straightforward to extend to more complex geometries, have a cost which is about 170 times larger. ",Radiative transfer in disc galaxies I - A comparison of four methods to solve the transfer equation in plane-parallel geometry " Planetary magnetic fields are generated by motions of electrically conducting fluids in their interiors. The dynamo problem has thus received much attention in spherical geometries, even though planetary bodies are non-spherical. 
To go beyond the spherical assumption, we develop an algorithm that exploits a fully spectral description of the magnetic field in triaxial ellipsoids to solve the induction equation with local boundary conditions (i.e. pseudo-vacuum or perfectly conducting boundaries). We use the method to compute the free-decay magnetic modes and to solve the kinematic dynamo problem for prescribed flows. The new method is thoroughly compared with analytical solutions and standard finite-element computations, which are also used to model an insulating exterior. We obtain dynamo magnetic fields at low magnetic Reynolds numbers in ellipsoids, which could be used as simple benchmarks for future dynamo studies in such geometries. We finally discuss how the magnetic boundary conditions can modify the dynamo onset, showing that a perfectly conducting boundary can strongly weaken dynamo action, whereas pseudo-vacuum and insulating boundaries often give similar results. ",Kinematic dynamos in triaxial ellipsoids " We report observations of dust continuum emission at 1.2 mm toward the star forming region NGC 6334 made with the SEST SIMBA bolometer array. The observations cover an area of $\sim 2$ square degrees with approximately uniform noise. We detected 181 clumps spanning almost three orders of magnitude in mass (3\Msun$-6\times10^3$ \Msun) and with sizes in the range 0.1--1.0 pc. We find that the clump mass function $dN/d\log M$ is well fit by a power law in mass with exponent -0.6 (or equivalently $dN/dM \propto M^{-1.6}$). The derived exponent is similar to those obtained from molecular line emission surveys and is significantly different from that of the stellar initial mass function. We investigated changes in the mass spectrum by changing the assumptions on the temperature distribution of the clumps and on the contribution of free-free emission to the 1.2 mm emission, and found little change in the exponent. 
The Cumulative Mass Distribution Function is also analyzed, giving consistent results in a mass range excluding the high-mass end where a power-law fit is no longer valid. The masses and sizes of the clumps observed in NGC 6334 indicate that they are not direct progenitors of stars and that the process of fragmentation determines the distribution of masses later on or occurs at smaller spatial scales. The spatial distribution of the clumps in NGC 6334 reveals clustering which is strikingly similar to that exhibited by young stars in other star forming regions. A power law fit to the surface density of companions gives $\Sigma\propto \theta^{-0.62}$. ",Massive Clumps in the NGC 6334 Star Forming Region " Incorporating artificial intelligence and machine learning (AI/ML) methods within the 5G wireless standard promises autonomous network behavior and ultra-low-latency reconfiguration. However, the effort so far has purely focused on learning from radio frequency (RF) signals. Future standards and next-generation (nextG) networks beyond 5G will have two significant evolutions over the state-of-the-art 5G implementations: (i) a massive number of antenna elements, scaling up to hundreds to thousands in number, and (ii) the inclusion of AI/ML in the critical path of the network reconfiguration process that can access sensor feeds from a variety of RF and non-RF sources. While the former allows unprecedented flexibility in 'beamforming', where signals combine constructively at a target receiver, the latter provides the network with enhanced situational awareness not captured by a single and isolated data modality. This survey presents a thorough analysis of the different approaches used for beamforming today, focusing on mmWave bands, and then proceeds to make a compelling case for considering non-RF sensor data from multiple modalities, such as LiDAR, Radar, and GPS, for increasing beamforming directional accuracy and reducing processing time. 
This so-called multimodal beamforming will require deep learning-based fusion techniques, which will serve to augment the current RF-only and classical signal processing methods that do not scale well for massive antenna arrays. The survey describes relevant deep learning architectures for multimodal beamforming, identifies computational challenges, examines the role of edge computing in this process, describes dataset generation tools, and finally lists open challenges that the community should tackle to realize this transformative vision of the future of beamforming. ",Going Beyond RF: How AI-enabled Multimodal Beamforming will Shape the NextG Standard " In a recent paper [arXiv:1001.0785], Verlinde has shown that Newtonian gravity appears as an entropy force. In this paper we show how gravity appears as an entropy force in Einstein's equation of the gravitational field in a general spherically symmetric spacetime. We mainly focus on the trapping horizon of the spacetime. We find that when matter fields are absent, the change of entropy associated with the trapping horizon indeed can be identified with an entropy force. When matter fields are present, we see that the heat flux of matter fields also leads to a change of entropy. Applying arguments made by Verlinde and Smolin, respectively, to the trapping horizon, we find that the entropy force is given by the surface gravity of the horizon. The cases in the untrapped region of the spacetime are also discussed. ",Notes on Entropy Force in General Spherically Symmetric Spacetimes " We present rest-frame optical spectra of 60 faint ($R_{AB}\sim 27$; $L\sim0.1 L_*$) Ly$\alpha$-selected galaxies (LAEs) at $z\approx2.56$. The average LAE is consistent with the extreme low-metallicity end of the continuum-selected galaxy distribution at $z\approx2-3$. 
In particular, the LAEs have extremely high [OIII] $\lambda$5008/H$\beta$ ratios (log([OIII]/H$\beta$) $\sim$ 0.8) and low [NII] $\lambda$6585/H$\alpha$ ratios (log([NII]/H$\alpha$) $<-1.15$). Using the [OIII] $\lambda$4364 auroral line, we find that the star-forming regions in faint LAEs are characterized by high electron temperatures ($T_e\approx1.8\times10^4$K), low oxygen abundances (12 + log(O/H) $\approx$ 8.04, $Z_{neb}\approx0.22Z_\odot$), and high excitations with respect to more luminous galaxies. Our faintest LAEs have line ratios consistent with even lower metallicities, including six with 12 + log(O/H) $\approx$ 6.9$-$7.4 ($Z_{neb}\approx0.02-0.05Z_\odot$). We interpret these observations in light of new models of stellar evolution (including binary interactions). We find that strong, hard ionizing continua are required to reproduce our observed line ratios, suggesting that faint galaxies are efficient producers of ionizing photons and important analogs of reionization-era galaxies. Furthermore, we investigate physical trends accompanying Ly$\alpha$ emission across the largest current sample of combined Ly$\alpha$ and rest-optical galaxy spectroscopy, including 60 faint LAEs and 368 more luminous galaxies at similar redshifts. We find that Ly$\alpha$ emission is strongly correlated with nebular excitation and ionization and weakly correlated with dust attenuation, suggesting that metallicity plays a strong role in determining the observed properties of these galaxies by modulating their stellar spectra, nebular excitation, and dust content. ",The Rest-Frame Optical Spectroscopic Properties of Ly$\alpha$-Emitters at $z\sim2.5$: The Physical Origins of Strong Ly$\alpha$ Emission " We have investigated spin-wave excitations in a four-sublattice (4SL) magnetic ground state of a frustrated magnet CuFeO2, in which `electromagnon' (electric-field-active magnon) excitation has been discovered by recent terahertz time-domain spectroscopy [Seki et al. Phys. Rev. 
Lett. 105 097207 (2010)]. In a previous study, we identified two spin-wave branches in the 4SL phase by means of inelastic neutron scattering measurements under applied uniaxial pressure. [T. Nakajima et al. J. Phys. Soc. Jpn. 80 014714 (2011)] In the present study, we have performed high-energy-resolution inelastic neutron scattering measurements in the 4SL phase, resolving fine structures of the lower-energy spin-wave branch near the zone center. Taking account of the spin-driven lattice distortions in the 4SL phase, we have developed a model Hamiltonian to describe the spin-wave excitations. The determined Hamiltonian parameters have successfully reproduced the spin-wave dispersion relations and intensity maps obtained in the inelastic neutron scattering measurements. The results of the spin-wave analysis have also revealed physical pictures of the magnon and electromagnon modes in the 4SL phase, suggesting that the collinear and noncollinear characters of the two spin-wave modes are the keys to understanding the dynamical coupling between the spins and electric dipole moments in this system. ",Magnons and electromagnons in a spin-lattice-coupled frustrated magnet CuFeO2 as seen via inelastic neutron scattering " We determined the atmospheric parameters and abundance pattern for a sample of metal-rich barium stars. We used high-resolution optical spectroscopy. Atmospheric parameters and abundances were determined using the local thermodynamic equilibrium atmosphere models of Kurucz and the spectral analysis code MOOG. We show that the stars have enhancement factors, [s/Fe], from 0.25 to 1.16. Their abundance pattern of Na, Al, alpha-elements, and iron-group elements, as well as their kinematical properties, is similar to the characteristics of the other metal-rich and super metal-rich stars already analyzed. We conclude that metal-rich barium stars do not belong to the bulge population. 
We also show that metal-rich barium stars are useful targets for probing the s-process enrichment in high-metallicity environments. ",Chemical abundances and kinematics of a sample of metal-rich barium stars The Tevatron collider has provided the CDF and D0 experiments with large datasets as input to a rich program of searches for physics beyond the standard model. The results presented here are a partial survey of recent searches conducted by the two collaborations using up to 6 /fb of data. ,Searches for new physics at the Tevatron " A search for QCD instanton (I) induced events in deep-inelastic scattering (DIS) at HERA is presented in the kinematic range of low x and low Q^2. After cutting on three characteristic variables for I-induced events, yielding a maximum suppression of standard DIS background to the 0.1% level while still preserving 10% of the I-induced events, 549 data events are found while 363^{+22}_{-26} (CDM) and 435^{+36}_{-22} (MEPS) standard DIS events are expected. More events than expected by the standard DIS Monte Carlo models are found in the data. However, the systematic uncertainty between the two different models is of the order of the expected signal, so that a discovery of instantons cannot be claimed. An outlook is given on the prospects of searching for QCD instanton events using a discriminant based on range searching in the kinematical region Q^2\gtrsim100\GeV^2 where the I-theory makes safer predictions and the QCD Monte Carlos are expected to better describe the inclusive data. ",A Search for Instantons at HERA " T- and S-duality rules among the gauge potentials in type II supergravities are studied. In particular, by following the approach of arXiv:1909.01335, we determine the T- and S-duality rules for certain mixed-symmetry potentials, which couple to supersymmetric branes with tension $T\propto g_s^{-n}$ ($n\leq 4$). 
Although the T-duality rules are rather intricate, we find a certain redefinition of potentials which considerably simplifies the duality rules. After the redefinition, potentials are identified with components of the T-duality-covariant potentials, which have been predicted by the $E_{11}$ conjecture. We also discuss the field strengths of the mixed-symmetry potentials. ",Duality rules for more mixed-symmetry potentials " In some range of interlayer distances, the ground state of the two-dimensional electron gas at filling factor nu =4N+1 with N=0,1,2,... is a coherent stripe phase in the Hartree-Fock approximation. This phase has one-dimensional coherent channels that support charged excitations in the form of pseudospin solitons. In this work, we compute the transport gap of the coherent striped phase due to the creation of soliton-antisoliton pairs using a supercell microscopic unrestricted Hartree-Fock approach. We study this gap as a function of interlayer distance and tunneling amplitude. Our calculations confirm that the soliton-antisoliton excitation energy is lower than the corresponding Hartree-Fock electron-hole pair energy. We compare our results with estimates of the transport gap obtained from a field-theoretic model valid in the limit of slowly varying pseudospin textures. ",Solitonic Excitations in Linearly Coherent Channels of Bilayer Quantum Hall Stripes " Three topics about the application of quenched chiral perturbation theory to matter fields are studied. It is proved that the hairpin axial current couplings in quenched chiral perturbation theories do not contribute to the quenched chiral singularities for one chiral loop renormalization of matter field properties. The modification of mass corrections in the chiral limit due to nonzero mass splittings is studied, and selection rules for hadron decays in quenched QCD are obtained. 
",Aspects of Quenched Chiral Perturbation Theory for Matter Fields " In the early universe or in some regions of supernovae, the neutrino refractive index is dominated by the neutrinos themselves. Several previous studies have found numerically that these self-interactions have the effect of coupling different neutrino modes in such a way as to synchronize the flavor oscillations which otherwise would depend on the energy of a given mode. We provide a simple explanation for this baffling phenomenon in analogy to a system of magnetic dipoles which are coupled by their self-interactions to form one large magnetic dipole which then precesses coherently in a weak external magnetic field. In this picture the synchronized neutrino oscillations are perfectly analogous to the weak-field Zeeman effect in atoms. ",Physics of Synchronized Neutrino Oscillations Caused by Self-Interactions " We investigate a model of large extra dimensions where the internal space has the geometry of a hyperbolic disc. Compared with the ADD model, this model provides a more satisfactory solution to the hierarchy problem between the electroweak scale and the Planck scale, and it also avoids constraints from astrophysics. In general, a novel feature of this model is that the physical results depend on the position of the brane in the internal space, and in particular, the signal almost disappears completely if the brane is positioned at the center of the disc. Since there is no known analytic form of the Kaluza-Klein spectrum for our choice of geometry, we obtain a spectrum based on a combination of approximations and numerical computations. We study the possible signatures of our model for hadron colliders, especially the LHC, where the most important processes are the production of a graviton together with a hadronic jet or a photon. We find that the signals are similar to those of the ADD model, regarding both qualitative behavior and strength. 
For the case of hadronic jet production, it is possible to obtain relatively strong signals, while for the case of photon production, this is much more difficult. ",Searches for hyperbolic extra dimensions at the LHC " The aim of this paper is to analyse algorithms for constructing presentations of graph braid groups from the point of view of anyonic quantum statistics on graphs. In the first part of this paper, we provide a comprehensive review of an algorithm for constructing so-called minimal Morse presentations of graph braid groups that relies on discrete Morse theory. Next, we introduce the notion of a physical presentation of a graph braid group as a presentation whose generators have a direct interpretation as particle exchanges. We show how to derive a physical presentation of a graph braid group from its minimal Morse presentation. In the second part of the paper, we study unitary representations of graph braid groups that are constructed from their presentations. We point out that algebraic objects called moduli spaces of flat bundles encode all unitary representations of graph braid groups. For $2$-connected graphs, we conclude the stabilisation of moduli spaces of flat bundles over graph configuration spaces for large numbers of particles. Moreover, we set out a framework for studying locally abelian anyons on graphs whose non-abelian properties are only encoded in non-abelian topological phases assigned to cycles of the considered graph. ",Non-abelian anyons on graphs from presentations of graph braid groups " This paper presents a physical model for Single Carrier-Frequency Division Multiple Access (SC-FDMA). We specifically show that by using multirate signal processing we derive a general time domain description of Localised SC-FDMA systems relying on circular convolution. This general model has the advantage of encompassing different implementations with flexible rates as well as additional frequency precoding such as spectral shaping. 
Based on this time-domain model, we study the Power Spectral Density (PSD) and the Signal to Interference and Noise Ratio (SINR). Different implementations of SC-FDMA are investigated, and analytical expressions of both the PSD and the SINR are compared to simulation results. ",An efficient time domain representation for Single-Carrier Frequency Division Multiple Access " The phenomenon of gravitational particle production can take place for quantum fields in curved spacetime. The abundance and energy spectrum of gravitationally produced particles are typically calculated by solving the field's mode equations on a time-dependent background metric. For purposes of studying dark matter production in an inflationary cosmology, these mode equations are often solved numerically, which is computationally intensive, especially for the rapidly-oscillating high-momentum modes. However, these same modes are amenable to analytic evaluation via the Exact Wentzel-Kramers-Brillouin (EWKB) method, where gravitational particle production is a manifestation of the Stokes phenomenon. These analytic techniques have been used in the past to study gravitational particle production for spin-0 bosons. We extend the earlier work to study gravitational production of spin-1/2 and spin-3/2 fermions. We derive an analytic expression for the connection matrix (valid to all orders in perturbations) that relates Bogoliubov coefficients across a Stokes line connecting a merged pair of simple turning points. By comparing the analytic approximation with a direct numerical integration of the mode equations, we demonstrate an excellent agreement and highlight the utility of the Stokes phenomenon formalism applied to fermions. We discuss the implications for an analytic understanding of catastrophic particle production due to vanishing sound speed, which can occur for a spin-3/2 Rarita-Schwinger field. 
",An analytic evaluation of gravitational particle production of fermions via Stokes phenomenon " We present results for two colliding black holes (BHs), with angular momentum, spin, and unequal mass. For the first time, gravitational waveforms are computed for a grazing collision from a full 3D numerical evolution. The collision can be followed through the merger to form a single BH, and through part of the ringdown period of the final BH. The apparent horizon is tracked and studied, and physical parameters, such as the mass of the final BH, are computed. The total energy radiated in gravitational waves is shown to be consistent with the total mass of the spacetime and the final BH mass. The implication of these simulations for gravitational wave astronomy is discussed. ",The 3D Grazing Collision of Two Black Holes " A number of parametric and nonparametric methods for estimating cognitive diagnosis models (CDMs) have been developed and applied in a wide range of contexts. However, in the literature, a wide chasm exists between these two families of methods, and their relationship to each other is not well understood. In this paper, we propose a unified estimation framework to bridge the divide between parametric and nonparametric methods in cognitive diagnosis to better understand their relationship. We also develop iterative joint estimation algorithms and establish consistency properties within the proposed framework. Lastly, we present comprehensive simulation results to compare different methods, and provide practical recommendations on the appropriate use of the proposed framework in various CDM contexts. ",Bridging Parametric and Nonparametric Methods in Cognitive Diagnosis " Generative Adversarial Networks (GANs) are a useful type of neural network for various applications, including generative models and feature extraction. 
Various types of GANs are being researched with different insights, resulting in a diverse family of GANs with better performance in each generation. This review focuses on various GANs categorized by their common traits. ",On the Performance of Generative Adversarial Network (GAN) Variants: A Clinical Data Study " We study low-temperature properties of metallic magnets, considering itinerant-electron-mediated ferromagnetism. Applying Monte Carlo simulations to the extended double exchange model, we discuss the reorientation phase transition and the anisotropy field for metallic magnets. ",Magnetic properties in the metallic magnets with large anisotropy " The critical exponents of the metal--insulator transition in disordered systems have been the subject of much published work containing often contradictory results. Values ranging between $\frac{1}{2}$ and $2$ can be found even in the recent literature. In this paper, the results of a long-term study of the transition are presented. The data have been calculated with sufficient accuracy (0.2\%) that the calculated exponent can be quoted as $s=\nu=1.54 \pm 0.08$ with confidence. The reasons for the previous scatter of results are discussed. ",Critical Exponents for the Metal--Insulator Transition in Disordered Systems " We use purely topological tools to construct several infinite families of hyperbolic links in the 3-sphere that satisfy the Turaev-Viro invariant volume conjecture posed by Chen and Yang. To show that our links satisfy the volume conjecture, we prove that each has complement homeomorphic to the complement of a fundamental shadow link. These are links in connected sums of copies of $S^2 \times S^1$ for which the conjecture is known due to Belletti, Detcherry, Kalfagianni, and Yang. Our methods also verify the conjecture for several hyperbolic links with crossing number less than twelve. 
In addition, we show that every link in the $3$-sphere is a sublink of a link that satisfies the conjecture. As an application of our results, we extend the class of known examples that satisfy the AMU conjecture on quantum representations of surface mapping class groups. For example, we give explicit elements in the mapping class group of a genus $g$ surface with four boundary components for any $g$. For this, we use techniques developed by Detcherry and Kalfagianni which relate the Turaev-Viro invariant volume conjecture to the AMU conjecture. ",Fundamental shadow links realized as links in $S^3$ " We present numerical simulations modeling the orbital evolution of very wide binaries, pairs of stars separated by over ~1000 AU. Due to perturbations from other passing stars and the Milky Way's tide, the orbits of very wide binary stars occasionally become extremely eccentric, which forces close encounters between the companion stars (Kaib et al. 2013). We show that this process causes a stellar collision between very wide binary companion stars once every 1000-7500 years on average in the Milky Way. One of the main uncertainties in this collision rate is the amount of energy dissipated by dynamic tides during close (but not collisional) periastron passages. This dissipation presents a dynamical barrier to stellar collisions and can instead transform very wide binaries into close or contact binaries. However, for any plausible tidal dissipation model, very wide binary stars are an unrealized, and potentially the dominant, source of stellar collisions in our Galaxy. Such collisions should occur throughout the thin disk of the Milky Way. Stellar collisions within very wide binaries should yield a small population of single, Li-depleted, rapidly rotating massive stars. ",Very Wide Binary Stars as the Primary Source of Stellar Collisions in the Galaxy " We have identified a population of passive spiral galaxies from photometry and integral field spectroscopy. 
We selected z<0.035 spiral galaxies that have WISE colours consistent with little mid-infrared emission from warm dust. Matched aperture photometry of 51 spiral galaxies in the ultraviolet, optical and mid-infrared shows that these galaxies have colours consistent with passive galaxies. Six galaxies form a spectroscopic pilot study and were observed using the Wide-Field Spectrograph (WiFeS) to check for signs of nebular emission from star formation. We see no evidence of the substantial nebular emission found in previous red spiral samples. These six galaxies possess absorption-line spectra with 4000\AA\ breaks consistent with an average luminosity-weighted age of 2.3 Gyr. Our photometric and IFU spectroscopic observations confirm the existence of a population of local passive spiral galaxies, implying that transformation into early-type morphologies is not required for the quenching of star formation. ",A Photometrically and Spectroscopically Confirmed Population of Passive Spiral Galaxies " Photon loss is destructive to the performance of quantum photonic devices, and therefore suppressing the effects of photon loss is paramount to photonic quantum technologies. We present two schemes to mitigate the effects of photon loss for a Gaussian Boson Sampling device, in particular, to improve the estimation of the sampling probabilities. Instead of using error correction codes, which are expensive in terms of their hardware resource overhead, our schemes require only a small amount of hardware modification or even no modification. Our loss-suppression techniques rely either on collecting additional measurement data or on classical post-processing once the measurement data is obtained. We show that with a moderate cost of classical post-processing, the effects of photon loss can be significantly suppressed for a certain amount of loss. The proposed schemes are thus a key enabler for applications of near-term photonic quantum devices. 
",Error mitigation on a near-term quantum photonic device " Recently, Morrison and Updike showed that many dissipative systems are naturally described as possessing a Riemann curvature-like bracket, which, similar to the Poisson bracket, generates the dissipative equations of motion once suitable generators are chosen. In this paper, we use geometry to construct and explore the dynamics of these new brackets. Specifically, we consider the dynamics of a heavy top with dissipation imposed by a Euclidean contravariant curvature. We find that the equations of motion, despite their rather formal motivation, naturally generalize the energy-conserving dissipation considered by Materassi and Morrison. In particular, with suitable initial conditions, we find that the geometrically motivated equations of motion cause the top to relax to rotation about a principal axis. ",Metriplectic Heavy Top: An Example of Geometrical Dissipation " The local chromatic number of a graph was introduced by Erdos et al. in 1986. It lies between the chromatic and fractional chromatic numbers. This motivates the study of the local chromatic number of graphs for which these quantities are far apart. Such graphs include Kneser graphs, their vertex color-critical subgraphs, the Schrijver (or stable Kneser) graphs; Mycielski graphs, and their generalizations; and Borsuk graphs. We give more or less tight bounds for the local chromatic number of many of these graphs. We use an old topological result of Ky Fan which generalizes the Borsuk-Ulam theorem. It implies the existence of a multicolored copy of the balanced complete bipartite graph on t points in every proper coloring of many graphs whose chromatic number t is determined via a topological argument. (This was in particular noted for Kneser graphs by Ky Fan.) This yields a lower bound of t/2+1 for the local chromatic number of these graphs. We show this bound to be tight or almost tight in many cases. 
As another consequence of the above, we prove that the graphs considered here have equal circular and ordinary chromatic numbers if the latter is even. This partially proves a conjecture of Johnson, Holroyd, and Stahl and was independently attained by F. Meunier. We also show that odd chromatic Schrijver graphs behave differently: their circular chromatic number can be arbitrarily close to the other extreme. ",Local chromatic number, Ky Fan's theorem, and circular colorings" " We consider the \emph{Budgeted} version of the classical \emph{Connected Dominating Set} problem (BCDS). Given a graph $G$ and a budget $k$, we seek a connected subset of at most $k$ vertices maximizing the number of dominated vertices in $G$. We improve over the previous $(1-1/e)/13$ approximation in [Khuller, Purohit, and Sarpatwar,\ \emph{SODA 2014}] by introducing a new method for performing tree decompositions in the analysis of the last part of the algorithm. This new approach provides a $(1-1/e)/12$ approximation guarantee. By generalizing the analysis of the first part of the algorithm, we are able to modify it appropriately and obtain a further improvement to $(1-e^{-7/8})/11$. On the other hand, we prove a $(1-1/e+\epsilon)$ inapproximability bound, for any $\epsilon > 0$. We also examine the \emph{edge-vertex domination} variant, where an edge dominates its endpoints and all vertices neighboring them. In \emph{Budgeted Edge-Vertex Domination} (BEVD), we are given a graph $G$ and a budget $k$, and we seek a (not necessarily connected) subset of $k$ edges such that the number of dominated vertices in $G$ is maximized. We prove there exists a $(1-1/e)$-approximation algorithm. Also, for any $\epsilon > 0$, we present a $(1-1/e+\epsilon)$-inapproximability result by a gap-preserving reduction from the \emph{maximum coverage} problem. Finally, we examine the ""dual"" \emph{Partial Edge-Vertex Domination} (PEVD) problem, where a graph $G$ and a quota $n'$ are given. 
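To make the edge-vertex domination objective concrete, here is a minimal Python sketch of the textbook greedy for maximum coverage applied to the BEVD setting, where an edge (u, v) dominates u, v and their neighbours. The function name and graph representation are illustrative assumptions; this is not necessarily the algorithm analysed in the abstract above.

```python
def greedy_bevd(edges, k):
    """Greedy sketch for Budgeted Edge-Vertex Domination: choose at most k edges;
    an edge (u, v) dominates u, v and all their neighbours.  This is the standard
    (1 - 1/e) greedy for maximum coverage, shown for illustration only."""
    # build adjacency from the edge list
    adj = {}
    for u, v in edges:
        adj.setdefault(u, set()).add(v)
        adj.setdefault(v, set()).add(u)

    def dominated(e):
        u, v = e
        return {u, v} | adj[u] | adj[v]

    covered, chosen = set(), []
    for _ in range(k):
        # pick the edge adding the most newly dominated vertices
        best = max(edges, key=lambda e: len(dominated(e) - covered))
        if not dominated(best) - covered:
            break  # no edge dominates anything new
        chosen.append(best)
        covered |= dominated(best)
    return chosen, covered
```

On a star-like graph, a single well-chosen edge can already dominate every vertex, which is why the greedy's first pick captures most of the coverage.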
The goal is to select a minimum-size set of edges to dominate at least $n'$ vertices in $G$. In this case, we present an $H(n')$-approximation algorithm by a reduction to the \emph{partial cover} problem. ",Improved Budgeted Connected Domination and Budgeted Edge-Vertex Domination " Natural language processing (NLP) has been applied to various fields including text classification and sentiment analysis. In the shared task of sentiment analysis of code-mixed tweets, which is a part of the SemEval-2020 competition~\cite{patwa2020sentimix}, we preprocess the datasets by replacing emoji, deleting uncommon characters, and so on, and then fine-tune Bidirectional Encoder Representations from Transformers (BERT) to achieve the best performance. After exhausting our top-3 submissions, our team MeisterMorxrc achieves an averaged F1 score of 0.730 in this task, and our CodaLab username is MeisterMorxrc. ",MeisterMorxrc at SemEval-2020 Task 9: Fine-Tune Bert and Multitask Learning for Sentiment Analysis of Code-Mixed Tweets " In this paper, we establish global $W^{2,p}$ estimates for solutions to the linearized Monge-Amp\`ere equations under natural assumptions on the domain, Monge-Amp\`ere measures and boundary data. Our estimates are affine invariant analogues of the global $W^{2,p}$ estimates of Winter for fully nonlinear, uniformly elliptic equations, and also linearized counterparts of Savin's global $W^{2,p}$ estimates for the Monge-Amp\`ere equations. ",Global $W^{2,p}$ estimates for solutions to the linearized Monge--Amp\`ere equations" " We present the characterization of metric spaces that are micro-, macro- or bi-uniformly equivalent to the extended Cantor set $\{\sum_{i=-n}^\infty\frac{2x_i}{3^i}:n\in\IN ,\;(x_i)_{i\in\IZ}\in\{0,1\}^\IZ\}\subset\IR$, which is bi-uniformly equivalent to the Cantor bi-cube $2^{<\IZ}=\{(x_i)_{i\in\IZ}\in \{0,1\}^\IZ:\exists n\;\forall i\ge n\;x_i=0\}$ endowed with the metric $d((x_i),(y_i))=\max_{i\in\IZ}2^i|x_i-y_i|$. 
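To make the bi-cube metric concrete, here is a minimal Python sketch evaluating $d((x_i),(y_i))=\max_{i\in\IZ}2^i|x_i-y_i|$; representing a sequence by the set of indices where its bit equals 1 (finite support) is an assumption made purely for illustration.

```python
def bicube_dist(x, y):
    """Metric on the Cantor bi-cube 2^{<Z}: d(x, y) = max over i of 2^i |x_i - y_i|.
    A sequence is represented by the set of indices i with x_i = 1; finite support
    is assumed here for illustration."""
    diff = set(x) ^ set(y)  # indices where the two sequences disagree
    return 0.0 if not diff else 2.0 ** max(diff)
```

Since the distance is the largest power of 2 at a disagreeing index, the strong triangle (ultrametric) inequality d(x, z) <= max(d(x, y), d(y, z)) holds, matching the ultrametric structure discussed in the abstract.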
These characterizations imply that any two (uncountable) proper isometrically homogeneous ultrametric spaces are coarsely (and bi-uniformly) equivalent. This implies that any two countable locally finite groups endowed with proper left-invariant metrics are coarsely equivalent. For the proof of these results, we develop a technique of towers which may be of independent interest. ",Characterizing the Cantor bi-cube in asymptotic categories " We report a calculation of all two-loop QED corrections with closed fermion loops for the n=1 and n=2 states of H-like ions and for a wide range of the nuclear charge numbers Z=1-100. The calculation is performed to all orders in the binding-strength parameter (Z \alpha), with the exception that in a few cases the free-loop approximation is employed in the treatment of the fermion loops. Detailed comparison is made with previous (Z \alpha)-expansion calculations and the higher-order remainder term to order \alpha^2 (Z \alpha)^6 is identified. ",Two-loop QED corrections with closed fermion loops " The electronic structure of the enigmatic iron-based superconductor FeSe has puzzled researchers since spectroscopic probes failed to observe the expected electron pocket at the $Y$ point in the 1-Fe Brillouin zone. It has been speculated that this pocket, essential for an understanding of the superconducting state, is either absent or incoherent. Here, we perform a theoretical study of the preferred nematic order originating from nearest-neighbor Coulomb interactions in an electronic model relevant for FeSe. We find that at low temperatures the dominating nematic components are of inter-orbital $d_{xz}-d_{xy}$ and $d_{yz}-d_{xy}$ character, with spontaneously broken amplitudes for these two components. 
This inter-orbital nematic order naturally leads to distinct hybridization gaps at the $X$ and $Y$ points of the 1-Fe Brillouin zone, and may thereby produce highly anisotropic Fermi surfaces with only a single electron pocket at one of these momentum-space locations. The associated superconducting gap structure, obtained from spin-fluctuation-mediated pairing with the generated low-energy electronic band structure, agrees well with that measured experimentally. Finally, from a comparison of the computed spin susceptibility to available neutron scattering data, we discuss the necessity of additional self-energy effects, and explore the role of orbital-dependent quasiparticle weights as a minimal means to include them. ",Inter-orbital nematicity and the origin of a single electron Fermi pocket in FeSe " Sources of event-by-event elliptic flow fluctuations in relativistic heavy-ion collisions are investigated in a multiphase transport model. Besides the well-known initial eccentricity fluctuations, several other sources of dynamical fluctuations are identified. The first is fluctuations in initial parton configurations at a given eccentricity. The second is quantum fluctuations in parton interactions during system evolution. The third is fluctuations caused by hadronization and final-state hadronic scatterings. The magnitudes of these fluctuations are investigated relative to eccentricity fluctuations and average flow magnitude. The fluctuations from the latter two sources are found to be negative. The results may have important implications for the interpretation of elliptic flow data. ",Eccentricity is not the only source of elliptic flow fluctuations " We study exotic mesons with double charm and bottom flavor, whose quark configuration is \bar{Q}\bar{Q}qq. This quark configuration has no annihilation process of quark and antiquark, and hence represents a genuinely exotic state. 
We take a hadronic picture by considering the molecular states composed of a pair of heavy mesons, such as DD, DD* and D*D* for charm flavor, and BB, BB* and B*B* for bottom flavor. The interactions between heavy mesons are derived from the heavy quark effective theory. All molecular states are classified by I(J^P) quantum numbers, and are systematically studied up to the total angular momentum J \leq 2. By solving the coupled-channel Schrodinger equations, we find bound and/or resonant states of various quantum numbers, driven by the strong tensor force of one-pion exchange. ",Exotic mesons with double charm and bottom flavor " The international migration of researchers is an important dimension of scientific mobility, and has been the subject of considerable policy debate. However, tracking the migration life courses of researchers is challenging due to data limitations. In this study, we use Scopus bibliometric data on eight million publications from 1.1 million researchers who have published at least once with an affiliation address from Germany in 1996-2020. We construct the partial life histories of published researchers in this period and explore both their out-migration and the subsequent return of a subset of this group: the returnees. Our analyses shed light on the career stages and gender disparities between researchers who remain in Germany, those who emigrate, and those who eventually return. We find that the return migration streams are even more gender imbalanced, which points to the need for additional efforts to encourage female researchers to come back to Germany. We document a slightly declining trend in return migration among more recent cohorts of researchers who left Germany, which, for most disciplines, was associated with a decrease in the German collaborative ties of these researchers. 
Moreover, we find that the gender disparities for the most gender-imbalanced disciplines are unlikely to be mitigated by return migration, given the gender compositions of the cohorts of researchers who have left Germany and of those who have returned. This analysis uncovers new dimensions of migration among scholars by investigating the return migration of published researchers, which is critical for the development of science policy. ",Return migration of German-affiliated researchers: Analyzing departure and return by gender, cohort, and discipline using Scopus bibliometric data 1996-2020" " Recently, Tao and Mo proposed an accurate meta-generalized gradient approximation for the exchange-correlation energy. The exchange part is derived from the density matrix expansion, while the correlation part is obtained by improving the TPSS correlation in the low-density limit. To better understand this exchange functional, in this work, we combine the TM exchange with the original TPSS correlation, which we call TMTPSS, and make a systematic assessment of molecular properties. The test sets include the 223 G3/99 enthalpies of formation, 58 electron affinities, 8 proton affinities, 96 bond lengths, 82 harmonic frequencies, and 10 hydrogen-bonded molecular complexes. Our calculations show that the TMTPSS functional is competitive with or even more accurate than the TM functional for some properties. In particular, it is the most accurate nonempirical semilocal density functional for the enthalpies of formation and harmonic vibrational frequencies, suggesting the robustness of the TM exchange. ",Performance of a nonempirical exchange functional from the density matrix expansion: comparative study with different correlation " Given a locally nilpotent derivation $\delta$ on an affine algebra $B$ over a field $k$ of characteristic zero, we consider a finitely generated $B$-module $M$ which admits a locally nilpotent module derivation $\delta_M$ (see Definition 1.1 below). 
Let $A=\Ker \delta$ and $M_0=\Ker \delta_M$. We ask if $M_0$ is a finitely generated $A$-module. In general, there exist counterexamples which are closely related to the fourteenth problem of Hilbert. We also look for some sufficient conditions for finite generation. ",Locally nilpotent module derivations and the fourteenth problem of Hilbert " We have analyzed a nonsingular model with a variable cosmological term following the Carvalho {\it et al}. ansatz. The model was shown to approximate to the model of Freese {\it et al}. in one direction and to the \""{O}zer-Taha model in the other. We have then included the effect of viscosity in this cosmology, as this effect has not been considered before. The analysis showed that this viscous effect could be important, with a present contribution to the cosmic pressure at most of the order of that of radiation. The model puts a stronger upper bound on the baryonic matter than that required by the standard model. Variable gravitational and cosmological constants were then introduced in a scenario which conserves the energy and momentum in the presence of bulk viscosity. The result of the analysis reveals that various models could be viscous. A noteworthy result is that some nonsingular closed models evolve asymptotically into a singular viscous one. The considered models solve many of the standard-model problems. Though the introduction of bulk viscosity results in the creation of particles, this scenario conserves energy and momentum. As in the standard model, the entropy remains constant. We have not explained the generation of bulk viscosity, but some workers attribute this to neutrinos. Though the role of viscosity today is minute, it could, nevertheless, have had an important contribution at early times. We have shown that these models encompass many of the old and recently proposed models, in particular, Brans-Dicke, Dirac, Freese {\it et al}., Berman, Abdel Rahman and Kalligas {\it et al}. models. 
Hence we claim that the introduction of bulk viscosity enriches the adopted cosmology. ",Cosmological Models With Variable G and Lambda and Bulk Viscosity " In this work, we present a reconfigurable data glove design to capture different modes of human hand-object interactions, which are critical in training embodied artificial intelligence (AI) agents for fine manipulation tasks. To achieve various downstream tasks with distinct features, our reconfigurable data glove operates in three modes sharing a unified backbone design that reconstructs hand gestures in real time. In the tactile-sensing mode, the glove system aggregates manipulation force via customized force sensors made from a soft and thin piezoresistive material; this design minimizes interference during complex hand movements. The virtual reality (VR) mode enables real-time interaction in a physically plausible fashion: A caging-based approach is devised to determine stable grasps by detecting collision events. Leveraging a state-of-the-art finite element method (FEM), the simulation mode collects data on fine-grained 4D manipulation events comprising hand and object motions in 3D space and how the object's physical properties (e.g., stress and energy) change in accordance with manipulation over time. Notably, the glove system presented here is the first to use high-fidelity simulation to investigate the unobservable physical and causal factors behind manipulation actions. In a series of experiments, we characterize our data glove in terms of individual sensors and the overall system. More specifically, we evaluate the system's three modes by (i) recording hand gestures and associated forces, (ii) improving manipulation fluency in VR, and (iii) producing realistic simulation effects of various tool uses, respectively. 
Based on these three modes, our reconfigurable data glove collects and reconstructs fine-grained human grasp data in both physical and virtual environments, thereby opening up new avenues for the learning of manipulation skills for embodied AI agents. ",A Reconfigurable Data Glove for Reconstructing Physical and Virtual Grasps " We report measurements of the critical temperature of YBCO/Co-doped YBCO superconductor-normal (S-N) bilayer films. Depending on the morphology of the S-N interface, the coupling between S and N layers can be turned on to depress the critical temperature of S by tens of degrees, or turned down so the layers appear almost totally decoupled. This novel effect can be explained by the mechanism of quasiparticle transmission into an anisotropic superconductor. ",Role of Interfaces in the Proximity Effect in Anisotropic Superconductors " Molecular electronics offers unique scientific and technological possibilities, resulting from both the nanometre scale of the devices and their reproducible chemical complexity. Two fundamental yet different effects, with no classical analogue, have been demonstrated experimentally in single-molecule junctions: quantum interference due to competing electron transport pathways, and the Kondo effect due to entanglement from strong electronic interactions. Here we unify these phenomena, showing that transport through a spin-degenerate molecule can be either enhanced or blocked by Kondo correlations, depending on molecular structure, contacting geometry and applied gate voltages. An exact framework is developed, in terms of which the quantum interference properties of interacting molecular junctions can be systematically studied and understood. We prove that an exact Kondo-mediated conductance node results from destructive interference in exchange-cotunneling. 
Nonstandard temperature dependences and gate-tunable conductance peaks/nodes are demonstrated for prototypical molecular junctions, illustrating the intricate interplay of quantum effects beyond the single-orbital paradigm. ",Kondo blockade due to quantum interference in single-molecule junctions " Given a symmetric exchange of three intervals, we provide a detailed description of the return times to a subinterval and the corresponding itineraries. We apply our results to morphisms fixing words coding a non-degenerate three-interval exchange transformation. This allows us to prove that the conjecture stated by Hof, Knill and Simon is valid for such infinite words. ",Exchange of three intervals: itineraries, substitutions and palindromicity" " The valley degeneracy of electron states in graphene stimulates intensive research of valley-related optical and transport phenomena. While many proposals on how to manipulate valley states have been put forward, experimental access to the valley polarization in graphene is still a challenge. Here, we develop a theory of second optical harmonic generation in graphene and show that this effect can be used to measure the degree and sign of the valley polarization. We show that, at normal incidence of radiation, the second harmonic generation stems from an imbalance of carrier populations in the valleys. The effect has a specific polarization dependence reflecting the trigonal symmetry of the electron valleys and is resonantly enhanced if the energy of incident photons is close to the Fermi energy. ",Valley polarization induced second harmonic generation in graphene " We study quantum dynamics on noncommutative spaces of negative curvature, focusing on the hyperbolic plane with spatial noncommutativity in the presence of a constant magnetic field. We show that the synergy of noncommutativity and the magnetic field tames the exponential divergence of operator growth caused by the negative curvature of the hyperbolic space. 
Their combined effect results in a first-order transition at a critical value of the magnetic field, at which strong quantum effects subdue the exponential divergence for {\it all} energies, in stark contrast to the commutative case, where for high enough energies operator growth always diverges exponentially. This transition manifests in the entanglement entropy between the `left' and `right' Hilbert spaces of spatial degrees of freedom. In particular, the entanglement entropy in the lowest Landau level vanishes beyond the critical point. We further present a non-linear solvable bosonic model that realizes the underlying algebraic structure of the noncommutative hyperbolic plane with a magnetic field. ",Lyapunov exponents and entanglement entropy transition on the noncommutative hyperbolic plane " We reduce the problem of proving a ""Boolean Unique Games Conjecture"" (with gap 1-delta vs. 1-C*delta, for any C > 1, and sufficiently small delta>0) to the problem of proving a PCP Theorem for a certain non-unique game. In a previous work, Khot and Moshkovitz suggested an inefficient candidate reduction (i.e., without a proof of soundness). The current work is the first to provide an efficient reduction along with a proof of soundness. The non-unique game we reduce from is similar to non-unique games for which PCP theorems are known. Our proof relies on a new concentration theorem for functions in Gaussian space that are restricted to a random hyperplane. We bound the typical Euclidean distance between the low degree part of the restriction of the function to the hyperplane and the restriction to the hyperplane of the low degree part of the function. ",Reduction From Non-Unique Games To Boolean Unique Games " We outline a holographic recipe to reconstruct $\alpha'$ corrections to AdS (quantum) gravity from an underlying CFT in the strictly planar limit ($N\rightarrow\infty$). 
Assuming that the boundary CFT can be solved in principle to all orders of the 't Hooft coupling $\lambda$, for scalar primary operators, the $\lambda^{-1}$ expansion of the conformal dimensions can be mapped to higher curvature corrections of the dual bulk scalar field action. Furthermore, for the metric perturbations in the bulk, the AdS/CFT operator-field isomorphism forces these corrections to be of the Lovelock type. We demonstrate this by reconstructing the coefficient of the leading Lovelock correction, aka the Gauss-Bonnet term, in a bulk AdS gravity action using the expression of the stress-tensor two-point function up to sub-leading order in $\lambda^{-1}$. ",Holographic bulk reconstruction beyond (super)gravity " A standard method for picoammeter calibrations is the capacitor charging technique, which allows generating traceable currents in the sub-nA range. However, its accuracy is limited by the ac-dc differences of the capacitances involved. The Ultrastable Low-noise Current Amplifier (ULCA) is a novel high-precision amperemeter for direct current measurements in the pA range, developed at PTB. Its amplifier stages, based on resistor networks and op-amps, can be calibrated traceably with a cryogenic current comparator (CCC) system. We compare the results from both independent calibration routes for two different ULCA prototypes. We find agreement between both methods at an uncertainty level below 10 microA/A, limited by the uncertainty of the currents generated with the capacitor charging method. The investigations confirm the superior performance of the new ULCA picoammeter. ",Traceable precision pA direct current measurements with the ULCA " We study the order of capillary condensation and evaporation transitions of a simple fluid adsorbed in a deep capillary groove using a fundamental measure density functional theory (DFT).
The walls of the capillary interact with the fluid particles via long-ranged dispersion forces, while the fluid-fluid interaction is modelled as a truncated Lennard-Jones-like potential. We find that below the wetting temperature $T_w$ condensation is first-order and evaporation is continuous, with the metastability of the condensation being well described by the complementary Kelvin equation. In contrast, above $T_w$ both phase transitions are continuous and their critical singularities are determined. In addition, we show that for the evaporation transition above $T_w$ there is an elegant mapping, or covariance, with the complete wetting transition occurring at a planar wall. Our numerical DFT studies are complemented by analytical slab model calculations which explain how the asymmetry between condensation and evaporation arises out of the combination of long-ranged forces and substrate geometry. ",Condensation and evaporation transitions in deep capillary grooves " We study the problem of parameterized matching in a stream where we want to output matches between a pattern of length m and the last m symbols of the stream before the next symbol arrives. Parameterized matching is a natural generalisation of exact matching where an arbitrary one-to-one relabelling of pattern symbols is allowed. We show how this problem can be solved in constant time per arriving stream symbol and sublinear, near-optimal space with high probability. Our results are surprising and important: it has been shown that almost no streaming pattern matching problems can be solved (not even randomised) in less than Theta(m) space, with exact matching as the only known problem to have a sublinear, near-optimal space solution. Here we demonstrate that a similar sublinear, near-optimal space solution is achievable for an even more challenging problem. The proof is considerably more complex than that for exact matching.
",Parameterized Matching in the Streaming Model " Herbig Ae/Be objects, like their lower-mass counterparts, T Tauri stars, are seen to form a stable circumstellar disk which is initially gas-rich and could ultimately form a planetary system. We present Herschel SPIRE 460-1540 GHz spectra of five targets out of a sample of 13 young disk sources, showing line detections mainly due to warm CO gas. ",Gas signatures of Herbig Ae/Be disks probed with Herschel SPIRE spectroscopy " We evaluate the production cross section for direct J/psi integrated in P_T for various collision energies of the LHC in the QCD-based Colour-Singlet Model. We consider the LO contribution from gluon fusion as well as the one from the fusion of a gluon and a charm quark from the colliding protons. The rapidity distribution of the yield is evaluated in the central region relevant for the ATLAS and CMS detectors, as well as in the more forward region relevant for the ALICE and LHC-b detectors. The results obtained here are compatible with those of other approaches within the range of the theoretical uncertainties, which are admittedly very large. This suggests that the ""mere"" measurements of the yield at the LHC will not help disentangle the different possible quarkonium production mechanisms. ",Total J/psi production cross section at the LHC We study a version of the Hermitian curvature flow on compact homogeneous complex manifolds. We prove that the solution has a finite extinction time $T>0$ and we analyze its behaviour when $t\to T$. We also determine the invariant static metrics and we study the convergence of the normalized flow to one of them. ,Hermitian Curvature Flow on compact homogeneous spaces " We investigate the primordial gravitational waves (PGWs) in the general scenario where the inflation is preceded by a pre-inflationary stage with the effective equation of state $w$.
Compared with the results of the usual inflationary models, the power spectrum of PGWs is modified in two aspects: One is the mixture of the perturbation modes caused by the presence of the pre-inflationary period, and the other is the thermal initial state formed at the Planck era of the early Universe. By investigating the observational imprints of these modifications on the B-mode polarization of cosmic microwave background (CMB) radiation, we obtain the constraints on the conformal temperature of the thermal gravitational-wave background $T<5.01\times 10^{-4}$Mpc$^{-1}$ and a tensor-to-scalar ratio $r<0.084$ ($95\%$ confidence level), which implies bounds on the total number of e-folds $N>63.5$ for the model with $w=1/3$, and $N>65.7$ for that with $w=1$. By taking into account various noise sources and the foreground radiation, we forecast the detection possibility of the thermal gravitational-wave background by the future CMBPol mission, and find that if $r>0.01$, the detection is possible as long as $T>1.5\times 10^{-4}$Mpc$^{-1}$. However, the effect of different $w$ is quite small, and it seems impossible to determine its value from the potential observations of the CMBPol mission. ",Thermal gravitational-wave background in the general pre-inflationary scenario " By results of the author there exists a projective (holomorphic) symplectic desingularization of the moduli space of rank-two torsion-free sheaves on a genus-two Jacobian with $c_1=0$ and $c_2=2$. This desingularization has a natural map to the self-product of the Jacobian. We show that the fiber over $(0,0)$ is a 6-dimensional projective irreducible symplectic variety (and hence a 12-dimensional compact Hyperkahler manifold) with second Betti number equal to 8. Thus it is not deformation equivalent to any of the (few) known examples of irreducible symplectic varieties, even up to birational equivalence.
",A new six dimensional irreducible symplectic variety " Sub-arcsecond radio continuum observations of the Galactic center region at $\lambda$6 and 2 cm reveal a 0.5$^{\prime\prime}$ diameter source with a shell-like morphology. This source is linearly polarized at a level of 16% at $\lambda$6 cm and has a steep nonthermal spectrum with spectral index 1.6 between $\lambda$6 and 2 cm. The distance to this source is not known, but the large rotation measure value of 3000 rad m$^{-2}$ suggests that G359.87+0.18 is likely to be located in the inner Galaxy or at an extragalactic distance. We discuss possible interpretations of this object as a recent supernova, a very young supernova remnant, a nova remnant, or an extragalactic source. All possibilities are highly problematic. ",G359.87+0.18: A Young SNR Candidate Near the Galactic Center? " The MNESIS Project aims to see whether the use of a computerized environment by elderly people in medicalized residences stimulates their cognitive capacities and contributes to better integration, recognition or acceptance within their social environment (friends, family, medical staff). In this paper we present the evaluation protocol defined to test this hypothesis. This protocol lies between traditional user-centred protocols (built on surveys and indirect observation) and Web Usage Mining studies (where knowledge bases about usage are built from traces of use). It allows direct and indirect information to be collected on a large scale and over long periods. ",D\'emarche d'\'evaluation de l'usage et des r\'epercussions psychosociales d'un environnement STIC sur une population de personnes \^ag\'ees en r\'esidence m\'edicalis\'ee " Indoor localization services are a crucial aspect for the realization of smart cyber-physical systems within cities of the future. Such services are poised to reinvent the process of navigation and tracking of people and assets in a variety of indoor and subterranean environments.
The growing ownership of computationally capable smartphones has laid the foundations of portable fingerprinting-based indoor localization through deep learning. However, as the demand for accurate localization increases, the computational complexity of the associated deep learning models increases as well. We present an approach for reducing the computational requirements of a deep learning-based indoor localization framework while maintaining localization accuracy targets. Our proposed methodology is deployed and validated across multiple smartphones and is shown to deliver up to 42% reduction in prediction latency and 45% reduction in prediction energy as compared to the best-known baseline deep learning-based indoor localization model. ",QuickLoc: Adaptive Deep-Learning for Fast Indoor Localization with Mobile Devices " We develop a model of the generation of coherent radio emission in the Crab pulsar, magnetars and Fast Radio Bursts (FRBs). Emission is produced by a reconnection-generated beam of particles via a variant of the Free Electron Laser (FEL) mechanism, operating in a weakly-turbulent, guide-field dominated plasma. We first consider nonlinear Thomson scattering in a guide-field dominated regime, and apply the model to explain emission bands observed in the Crab pulsar and in Fast Radio Bursts. We consider particle motion in the combined fields of the electromagnetic wave and the electromagnetic (Alfvenic) wiggler. Charge bunches, created via a ponderomotive force, Compton/Raman scatter the wiggler field coherently. The model is both robust to the underlying plasma parameters and succeeds in reproducing a number of subtle observed features: (i) emission frequencies depend mostly on the length $\lambda_t$ of turbulence and the Lorentz factor of the reconnection-generated beam, $\omega \sim \gamma_b^2 (c/\lambda_t)$; it is independent of the absolute value of the underlying magnetic field.
(ii) The model explains both broadband emission and the presence of emission stripes, including multiple stripes observed in the High Frequency Interpulse of the Crab pulsar. (iii) The model reproduces correlated polarization properties: the presence of narrow emission bands in the spectrum favors linear polarization, while broadband emission can have arbitrary polarization. (iv) The mechanism is robust to the momentum spread of the particles in the beam. We also discuss a model of wigglers as non-linear force-free Alfven solitons (light darts). ","Coherent emission in pulsars, magnetars and Fast Radio Bursts: reconnection-driven free electron laser" " Partial differential equations (PDEs) are used, with huge success, to model phenomena arising across all scientific and engineering disciplines. However, across an equally wide swath, there exist situations in which PDE models fail to adequately model observed phenomena or are not the best available model for that purpose. On the other hand, in many situations, nonlocal models that account for interaction occurring at a distance have been shown to more faithfully and effectively model observed phenomena that involve possible singularities and other anomalies. In this article, we consider a generic nonlocal model, beginning with a short review of its definition, the properties of its solution, its mathematical analysis, and specific concrete examples. We then provide extensive discussions about numerical methods, including finite element, finite difference, and spectral methods, for determining approximate solutions of the nonlocal models considered. In that discussion, we pay particular attention to a special class of nonlocal models that are the most widely studied in the literature, namely those involving fractional derivatives. The article ends with brief considerations of several modeling and algorithmic extensions which serve to show the wide applicability of nonlocal modeling.
",Numerical methods for nonlocal and fractional models " Recursive analysis was introduced by A. Turing [1936], A. Grzegorczyk [1955], and D. Lacombe [1955]. It is based on a discrete mechanical framework that can be used to model computation over the real numbers. In this context the computational complexity of real functions defined over compact domains has been extensively studied. However, much less has been done for other kinds of real functions. This article is divided into two main parts. The first part investigates polynomial time computability of rational functions and the role of continuity in such computation. On the one hand this is interesting for its own sake. On the other hand, it provides insights into polynomial time computability of real functions, since the latter, in the sense of recursive analysis, are modeled as approximations of rational computations. The main conclusion of this part is that continuity does not play any role in the efficiency of computing rational functions. The second part defines polynomial time computability of arbitrary real functions, characterizes it, and compares it with the corresponding notion over rational functions. Assuming continuity, the main conclusion is that there is a conceptual difference between polynomial time computation over the rationals and the reals manifested by the fact that there are polynomial time computable rational functions whose extensions to the reals are not polynomial time computable and vice versa. ",Characterizing Polynomial Time Computability of Rational and Real Functions " A particle-in-cell (PIC) simulation study is performed to investigate the discharge asymmetry, higher harmonic generation and electron heating mechanism in a low pressure very high frequency capacitively coupled plasma (CCP) excited by a sawtooth-like current waveform. Two current densities, 50 A/m2 and 100 A/m2, are chosen for a constant gas pressure of 5 mTorr in argon plasma.
The driving frequency is varied from 13.56 MHz to 54.24 MHz. At lower driving frequencies, high frequency modulations of the instantaneous sheath edge position at the grounded electrode are observed. These high frequency oscillations create multiple ionization beam-like structures near the sheath edge that drive the plasma density in the discharge and are responsible for the discharge/ionization asymmetry at lower driving frequencies. Conversely, the electrode voltage shows higher harmonic generation at higher driving frequencies, and the corresponding electric field transients are observed in the bulk plasma. At lower driving frequencies, the electron heating is maximum near the sheath edge, followed by electron cooling within the plasma bulk; however, alternating heating and cooling, i.e. burst-like structures, are obtained at higher driving frequencies. These results suggest that electron heating in these discharges will not be described accurately by simple analytical models. ",High frequency sheath modulation and higher harmonic generation in a low pressure very high frequency capacitively coupled plasma excited by sawtooth waveform " Interacting Bose-Fermi systems play a central role in condensed matter physics. Here, we analyze a novel Bose-Fermi mixture formed by a cavity exciton-polariton condensate interacting with a two-dimensional electron system. We show that previous predictions of superconductivity [F.P. Laussy, Phys. Rev. Lett. 10, 104 (2010)] and excitonic supersolid formation [I.A. Shelykh, Phys. Rev. Lett. 14, 105 (2010)] in this system are closely intertwined, resembling the predictions for strongly correlated electron systems such as high temperature superconductors.
In stark contrast to a large majority of Bose-Fermi systems analyzed in solids and ultracold atomic gases, the renormalized interaction between the polaritons and electrons in our system is long-ranged and strongly peaked at a tunable wavevector, which can be rendered incommensurate with the Fermi momentum. We analyze the prospects for experimental observation of superconductivity and find that critical temperatures on the order of a few kelvin can be achieved in heterostructures consisting of transition metal dichalcogenide monolayers that are embedded in an open cavity structure. All-optical control of superconductivity in semiconductor heterostructures could enable the realization of new device concepts compatible with semiconductor nanotechnology. In addition, the possibility of interfacing quantum Hall physics, superconductivity and nonequilibrium polariton condensates is likely to provide fertile ground for investigation of completely new physical phenomena. ",Superconductivity and other phase transitions in a hybrid Bose-Fermi mixture formed by a polariton condensate and an electron system in two dimensions " We construct cubature formulas on spheres supported by homothetic images of shells in some Euclidean lattices. Our analysis of these cubature formulas uses results from the theory of modular forms. Examples are worked out on the sphere of dimension n-1 for n=4, 8, 12, 14, 16, 20, 23, and 24, and the sizes of the cubature formulas we obtain are compared with the lower bounds given by Linear Programming. ",Construction of spherical cubature formulas using lattices " As left adjoint to the dual algebra functor, Sweedler's finite dual construction is an important tool in the theory of Hopf algebras over a field. We show in this note that the left adjoint to the dual algebra functor, which exists over arbitrary rings, shares a number of properties with the finite dual.
Nonetheless, the requirement that it should map Hopf algebras to Hopf algebras needs the extra assumption that this left adjoint should map an algebra into its linear dual. We identify a condition guaranteeing that Sweedler's construction works when generalized to noetherian commutative rings. We establish the following two apparently previously unnoticed dual adjunctions: For every commutative ring $R$ the left adjoint of the dual algebra functor on the category of $R$-bialgebras has a right adjoint. This dual adjunction can be restricted to a dual adjunction on the category of Hopf $R$-algebras, provided that $R$ is noetherian and absolutely flat. ",Generalizations of the Sweedler dual " Semiconductor nanostructures hold great promise for high-efficiency waste heat recovery exploiting thermoelectric energy conversion, a technological breakthrough that could significantly contribute to providing environmentally friendly energy sources as well as enabling the realization of self-powered biomedical and wearable devices. A crucial requirement in this field is the reduction of the thermal conductivity of the thermoelectric material without detrimentally affecting its electrical transport properties. In this work we demonstrate a drastic reduction of thermal conductivity in III-V semiconductor nanowires due to the presence of intentionally realized periodic crystal lattice twin planes. The electrical and thermal transport of these nanostructures, known as twinning superlattice nanowires, has been probed and compared with their twin-free counterparts, showing a one order of magnitude decrease of thermal conductivity while maintaining unaltered electrical transport properties, thus yielding a factor ten enhancement of the thermoelectric figure of merit, ZT.
Our study reports, for the first time, the experimental measurement of electrical and thermal properties in twinning superlattice nanowires, which emerge as a novel class of nanomaterials for high efficiency thermoelectric energy harvesting. ",Giant reduction of thermal conductivity in twinning superlattice InAsSb nanowires " Many amorphous glassy materials exhibit complex spatio-temporal mechanical response and rheology, characterized by an intermittent stress-strain response and a fluctuating velocity profile. Under quasistatic and athermal deformation protocols this heterogeneous plastic flow was shown to be composed of plastic events of various sizes. In this paper, through a numerical study of a 2D LJ amorphous solid, we generalize the study of the heterogeneous dynamics of glassy materials to the finite shear-rate and temperature case. The global mechanical response obtained through the use of Molecular Dynamics is shown to converge to the quasistatic limit obtained with an energy minimization protocol. The detailed analysis of the plastic deformation at different shear rates shows that the glass follows different flow regimes. At sufficiently low shear rates the mechanical response reaches a shear-rate independent regime that exhibits all the characteristics of the quasistatic response (finite size effects, yield stress...). At intermediate shear rates the rheological properties are determined by the externally applied shear-rate. Finally, at higher shear the system reaches a shear-rate independent homogeneous regime. The existence of these three regimes is also confirmed by the detailed analysis of the atomic motion. The computation of the four-point correlation function shows that the transition from the shear-rate dominated to the quasistatic regime is accompanied by the growth of a dynamical cooperativity length scale $\xi$ that is shown to diverge with shear rate.
This divergence is compared with the prediction of a simple model that assumes the diffusive propagation of plastic events. ",Plasticity and dynamical heterogeneity in driven glassy materials " We study the Nonlinear (Polynomial, N-fold,...) Supersymmetry algebra in one-dimensional QM. Its structure is determined by the type of conjugation operation (Hermitian conjugation or transposition) and described with the help of the Super-Hamiltonian projection on the zero-mode subspace of a supercharge. We show that the SUSY algebra with transposition symmetry is always polynomial in the Hamiltonian if supercharges represent differential operators of finite order. The appearance of the extended SUSY with several (complex or real) supercharges is analyzed in detail and it is established that no more than two independent supercharges may generate a Nonlinear superalgebra which can be appropriately specified as {\cal N} = 2 SUSY. In this case we find a non-trivial hidden symmetry operator and rephrase it as a non-linear function of the Super-Hamiltonian on the physical state space. The full {\cal N} = 2 Non-linear SUSY algebra includes ""central charges"" both polynomial and non-polynomial (due to a symmetry operator) in the Super-Hamiltonian. ",Nonlinear supersymmetry in Quantum Mechanics: algebraic properties and differential representation " Evolution of galaxies is one of the most active topics in astrophysics. Among the most important factors determining the evolution are two galactic components which are difficult or even impossible to detect optically: the gaseous disks and the dark matter halo. We use deep Hubble Space Telescope images to construct a two-component (bulge + disk) model for the stellar matter distribution of galaxies. Properties of the galactic components are derived using three-dimensional galaxy modeling software, which also estimates disk thickness and inclination angle.
We add a gas disk and a dark matter halo and use hydrodynamical equations to calculate gas rotation and dispersion profiles in the resultant gravitational potential. We compare the kinematic profiles with the Team Keck Redshift Survey observations. In this pilot study, two galaxies are analyzed, deriving parameters for their stellar components; both galaxies are found to be disk-dominated. Using the kinematical model, the gas-to-stellar mass ratio in the disk is estimated. ",Modeling the Kinematics of Distant Galaxies " Galaxies are surrounded by massive gas reservoirs (i.e. the circumgalactic medium; CGM) which play a key role in their evolution. The properties of the CGM, which are dependent on a variety of internal and environmental factors, are often inferred from absorption line surveys which rely on a limited number of single lines-of-sight. In this work we present an analysis of 28 galaxy haloes selected from the Auriga project, a cosmological magneto-hydrodynamical zoom-in simulation suite of isolated Milky Way-mass galaxies, to understand the impact of CGM diversity on observational studies. Although the Auriga haloes are selected to populate a narrow range in halo mass, our work demonstrates that the CGM of L* galaxies is extremely diverse: column densities of commonly observed species span ~3-4 dex and their covering fractions range from ~5 to 90 per cent. Despite this diversity, we identify the following correlations: 1) the covering fractions (CF) of hydrogen and metals of the Auriga haloes positively correlate with stellar mass, 2) the CF of H I, C IV, and Si II anticorrelate with active galactic nucleus luminosity due to ionization effects, and 3) the CF of H I, C IV, and Si II positively correlate with galaxy disc fraction due to outflows populating the CGM with cool and dense gas.
The Auriga sample demonstrates striking diversity within the CGM of L* galaxies, which poses a challenge for observations reconstructing CGM characteristics from limited samples, and also indicates that long-term merger assembly history and recent star formation are not the dominant sculptors of the CGM. ",The diversity of the circumgalactic medium around z = 0 Milky Way-mass galaxies from the Auriga simulations " The Jiangmen Underground Neutrino Observatory is a multipurpose neutrino experiment designed to determine the neutrino mass hierarchy and precisely measure oscillation parameters by detecting reactor neutrinos from the Yangjiang and Taishan Nuclear Power Plants, observe supernova neutrinos, study atmospheric neutrinos, solar neutrinos and geo-neutrinos, and perform exotic searches, with a 20-thousand-ton liquid scintillator detector of unprecedented 3\% energy resolution (at 1 MeV) located 700 meters underground. In this proceeding, the subsystems of the experiment, including the central detector, the online scintillator internal radioactivity investigation system, the PMT, the veto detector, the calibration system and the Taishan Antineutrino Observatory, will be described. The construction is expected to be completed in 2021. ",Status of the Jiangmen Underground Neutrino Observatory " A result of G. Godefroy asserts that a Banach space $X$ contains an isomorphic copy of $\ell_1$ if and only if there is an equivalent norm $|||\cdot|||$ such that, for every finite-dimensional subspace $Y$ of $X$ and every $\varepsilon>0$, there exists $x\in S_X$ so that $|||y+r x|||\geq (1-\varepsilon)(|||y|||+\vert r\vert)$ for every $y\in Y$ and every $r\in\mathbb R$.
In this paper we generalize this result to larger cardinals, showing that if $\kappa$ is an uncountable cardinal then a Banach space $X$ contains a copy of $\ell_1(\kappa)$ if and only if there is an equivalent norm $|||\cdot|||$ on $X$ such that for every subspace $Y$ of $X$ with $dens(Y)<\kappa$ there exists a norm-one vector $x$ so that $||| y+r x|||=|||y|||+\vert r\vert$ whenever $y\in Y$ and $r\in\mathbb{R}$. This result answers a question posed by S. Ciaci, J. Langemets and A. Lissitsin, in which the authors ask whether the previous statement holds for infinite successor cardinals. We also show that, in the countable case, the result of Godefroy cannot be improved to take $\varepsilon=0$. ",A renorming characterization of Banach spaces containing $\ell_1(\kappa)$ " This paper develops a Deep Reinforcement Learning (DRL)-agent for navigation and control of autonomous surface vessels (ASV) on inland waterways. Spatial restrictions due to waterway geometry and the resulting challenges, such as high flow velocities or shallow banks, require controlled and precise movement of the ASV. A state-of-the-art bootstrapped Q-learning algorithm in combination with a versatile training environment generator leads to a robust and accurate rudder controller. To validate our results, we compare the path-following capabilities of the proposed approach to a vessel-specific PID controller on real-world river data from the lower- and middle Rhine, indicating that the DRL algorithm could effectively prove generalizability even in never-seen scenarios while simultaneously attaining high navigational accuracy. ",Robust Path Following on Rivers Using Bootstrapped Reinforcement Learning " This paper proposes a novel scheme for mitigating strong interferences, which is applicable to various wireless scenarios, including full-duplex wireless communications and uncoordinated heterogenous networks.
As strong interferences can saturate the receiver's analog-to-digital converters (ADC), they need to be mitigated both before and after the ADCs, i.e., via hybrid processing. The key idea of the proposed scheme, namely the Hybrid Interference Mitigation using Analog Prewhitening (HIMAP), is to insert an M-input M-output analog phase shifter network (PSN) between the receive antennas and the ADCs to spatially prewhiten the interferences, which requires no signal information but only an estimate of the covariance matrix. After interference mitigation by the PSN prewhitener, the preamble can be synchronized, the signal channel response can be estimated, and thus a minimum mean squared error (MMSE) beamformer can be applied in the digital domain to further mitigate the residual interferences. The simulation results verify that the HIMAP scheme can suppress interferences 80 dB stronger than the signal by using off-the-shelf phase shifters (PS) of 6-bit resolution. ",Hybrid Interference Mitigation Using Analog Prewhitening " Two different models exhibiting self-organized criticality are analyzed by means of the dynamic renormalization group. Although the two models differ by their behavior under a parity transformation of the order parameter, it is shown that they both belong to the same universality class, in agreement with computer simulations. The asymptotic values of the critical exponents are estimated up to one loop order from a systematic expansion of a nonlinear equation in the number of coupling constants. ",Dynamic Renormalization Group Approach to Self-Organized Critical Phenomena " Optical phase measurement is critical for many applications and traditional approaches often suffer from mechanical instability, temporal latency, and computational complexity. In this paper, we describe compact phase sensor arrays based on integrated photonics, which enable accurate and scalable reference-free phase sensing in a few measurement steps.
This is achieved by connecting multiple two-port phase sensors into a graph to measure relative phases between neighboring and distant spatial locations. We propose an efficient post-processing algorithm, as well as circuit design rules to reduce random and biased error accumulations. We demonstrate the effectiveness of our system in both simulations and experiments with photonic integrated circuits. The proposed system measures the optical phase directly without the need for external references or spatial light modulators, thus providing significant benefits for applications including microscope imaging and optical phased arrays. ",Scalable Low-latency Optical Phase Sensor Array " Using nuclear magnetic resonance (NMR) techniques with a three-qubit sample, we have experimentally implemented the highly structured algorithm for the 1-SAT problem proposed by Hogg. A simplified temporal averaging procedure was employed to prepare the three-qubit spin pseudo-pure state. The algorithm was completed with only a single evaluation of the structure of the problem, and the solutions were found with probability 100%, outperforming both unstructured quantum search and the best classical search algorithm. ",Experimental Implementation of Hogg's Algorithm on a Three-Quantum-bit NMR Quantum Computer " M giants selected from the Two Micron All Sky Survey (2MASS) have been used to trace streams of tidal debris apparently associated with the Sagittarius dwarf spheroidal galaxy (Sgr) that entirely encircle the Galaxy. While the Sgr M giants are generally aligned with a single great circle on the sky, we measure a difference of 10.4 +- 2.6 degrees between the mean orbital poles of the great circles that best fit debris leading and trailing Sgr, which can be attributed to the precession of Sgr's orbit over the range of phases explored by the data set. 
Simulations of the destruction of Sgr in potentials containing bulge, disk and halo components best reproduce this level of precession along the same range of orbital phases if the potential contours of the halo are only slightly flattened, with the ratio between the axis length perpendicular to and in the disk in the range q = 0.90-0.95 (corresponding to isodensity contours with q_\rho ~ 0.83 - 0.92). Oblate halos are strongly preferred over prolate (q_\rho > 1) halos, and flattenings in the potential of q <= 0.85 (q_\rho <= 0.75) and q >= 1.05 (q_\rho >= 1.1) are ruled out at the 3-sigma level. More extreme values of q <= 0.80 (q_\rho <= 0.6) and q >= 1.25 (q_\rho >= 1.6) are ruled out at the 7-sigma and 5-sigma levels respectively. These constraints will improve as debris with larger separation in orbital phase can be found. ",A 2MASS All-Sky View of the Sagittarius Dwarf Galaxy: III. Constraints on the Flattening of the Galactic Halo " Serverless computing has emerged as a new execution model which gained a lot of attention in cloud computing thanks to the latest advances in containerization technologies. Recently, serverless has been adopted at the edge, where it can help overcome the heterogeneity, constrained nature and dynamicity of edge devices. Due to the distributed nature of edge devices, however, the scaling of serverless functions presents a major challenge. We address this challenge by studying the optimality of serverless function scaling. To this end, we propose a Semi-Markov Decision Process-based (SMDP) theoretical model, which yields optimal solutions by solving the serverless function scaling problem as a decision making problem. We compare the SMDP solution with practical, monitoring-based heuristics. We show that SMDP can be effectively used in edge computing networks, and in combination with monitoring-based approaches also in real-world implementations. 
",Towards Optimal Serverless Function Scaling in Edge Computing Network " The density profile of simulated dark matter structures is fairly well-established, and several explanations for its characteristics have been put forward. In contrast, the radial variation of the velocity anisotropy has still not been explained. We suggest a very simple origin, based on the shapes of the velocity distributions functions, which are shown to differ between the radial and tangential directions. This allows us to derive a radial variation of the anisotropy profile which is in good agreement with both simulations and observations. One of the consequences of this suggestion is that the velocity anisotropy is entirely determined once the density profile is known. We demonstrate how this explains the origin of the \gamma-\beta relation, which is the connection between the slope of the density profile and the velocity anisotropy. These findings provide us with a powerful tool, which allows us to close the Jeans equations. ",Might we eventually understand the origin of the dark matter velocity anisotropy? " A comment on the paper by S. M. Mahajan and F. A. Asenjo ""Interacting quantum and classical waves: Resonant and non-resonant energy transfer to electrons immersed in an intense electromagnetic wave"" [Phys. Plasmas 29, 022107 (2022)] where the authors use a model based on the Klein-Gordon equation to discuss particle energization by a transverse electromagnetic wave in a plasma. It is shown that the results of the paper are easily obtained in a classical approach, so that no quantum effect has to be invoked. Moreover, some mistakes and misinterpretations in the paper have been corrected. The (un)suitability of the proposed mechanism to account for generation of extremely energetic particles in both laboratory and astrophysical scenarios is also discussed. 
","Comment on: ""Interacting quantum and classical waves: Resonant and non-resonant energy transfer to electrons immersed in an intense electromagnetic wave'' [Phys. Plasmas 29, 022107 (2022)]" " ChatGPT is a powerful large language model (LLM) that covers knowledge resources such as Wikipedia and supports natural language question answering using its own knowledge. Therefore, there is growing interest in exploring whether ChatGPT can replace traditional knowledge-based question answering (KBQA) models. Although there have been some works analyzing the question answering performance of ChatGPT, there is still a lack of large-scale, comprehensive testing of various types of complex questions to analyze the limitations of the model. In this paper, we present a framework that follows the black-box testing specifications of CheckList proposed by Ribeiro et. al. We evaluate ChatGPT and its family of LLMs on eight real-world KB-based complex question answering datasets, which include six English datasets and two multilingual datasets. The total number of test cases is approximately 190,000. In addition to the GPT family of LLMs, we also evaluate the well-known FLAN-T5 to identify commonalities between the GPT family and other LLMs. The dataset and code are available at https://github.com/tan92hl/Complex-Question-Answering-Evaluation-of-GPT-family.git ",Can ChatGPT Replace Traditional KBQA Models? An In-depth Analysis of the Question Answering Performance of the GPT LLM Family The neutrino emission due to formation and breaking of Cooper pairs of protons in superconducting cores of neutron stars is considered with taking into account the electromagnetic coupling of protons to ambient electrons. Our calculation shows that the contribution of the vector weak current to the $\nu\bar\nu$ emissivity of protons is much larger than that calculated by different authors without taking into account the plasma effects. 
The partial contribution of the pairing protons to the total neutrino radiation from the neutron star core is very sensitive to the critical temperatures for the proton and neutron pairing and can dominate in some domains of these parameters. ,Plasma effects in neutrino-pair emission due to Cooper pairing of protons in superconducting neutron stars " We study steady-states of semiconductor nanowires subjected to strong resonant time-periodic drives. The steady-states arise from the balance between electron-phonon scattering, electron-hole recombination via photo-emission, and Auger scattering processes. We show that tuning the strength of the driving field drives a transition between an electron-hole metal (EHM) phase and a Floquet insulator (FI) phase. We study the critical point controlling this transition. The EHM-to-FI transition can be observed by monitoring the presence of peaks in the density-density response function which are associated with the Fermi momentum of the EHM phase, and are absent in the FI phase. Our results may help guide future studies towards inducing novel non-equilibrium phases of matter by periodic driving. ",Floquet metal to insulator phase transitions in semiconductor nanowires " We investigate the gravitational wave background produced by magnetars. The statistical properties of these highly magnetized stars were derived by population synthesis methods and assumed to be also representative of extragalactic objects. The adopted ellipticity was calculated from relativistic models using equations of state and assumptions concerning the distribution of currents in the neutron star interior. The maximum amplitude occurs around 1.2 kHz, corresponding to $\Omega_{gw} \sim 10^{-9}$ for a type I superconducting neutron star model. The expected signal is a continuous background that could mask the cosmological contribution produced in the early stage of the Universe. 
",Gravitational Wave Background from Magnetars " We study the thermodynamic topology of four dimensional dyonic Anti-de-Sitter(AdS) black hole in three different ensembles: canonical, mixed and grand canonical ensemble. While canonical ensemble refers to the ensemble with fixed electric and magnetic charges, mixed ensemble is an ensemble where we fix magnetic charge and electric potential. In the grand canonical ensemble, potentials corresponding to both electric and magnetic charges are kept fixed. In each of these ensembles, we first compute the topological charges associated with critical points. We find that while in both canonical and mixed ensembles, there exists one conventional critical point with topological charge $-1$, in the grand canonical ensemble, we find no critical point. Then, we consider the dyonic AdS black hole as topological defects in thermodynamic space and study its local and global topology by computing the winding numbers at the defects. We observe that while the topologies of the black hole in canonical and mixed ensembles are identical with total topological charge equaling $1$, in the grand canonical ensemble, depending on the values of potentials, the total topological charge is either equal to $0$ or $1$. In canonical and mixed ensembles, either one generation and one annihilation points or no generation/annihilation points are found. In the grand canonical ensemble, depending on the values of potentials, we find either one generation point or no generation/annihilation point. Thus, we infer that the topological class of $4$D dyonic AdS black hole is ensemble dependent. ",Thermodynamic topology of 4D Dyonic AdS black holes in different ensembles " We study the rest-frame optical properties of 74 luminous (L_bol=10^46.2-48.2 erg/s), 1.50.1 TeV) regime. The IACT technique observes the VHE photons indirectly, using the Earth's atmosphere as a calorimeter. 
Much of the calibration of Cherenkov telescope experiments is done using Monte Carlo simulations of the air shower development, Cherenkov radiation and detector, assuming certain models for the atmospheric conditions. Any deviation of the real conditions during observations from the assumed atmospheric model will result in a wrong reconstruction of the primary gamma-ray energy and the resulting source spectra. During eight years of observations, the High Energy Stereoscopic System (H.E.S.S.) has experienced periodic natural as well as anthropogenic variations of the atmospheric transparency due to aerosols created by biomass burning. In order to identify data that have been taken under such long-term reductions in atmospheric transparency, a new monitoring quantity, the Cherenkov transparency coefficient, has been developed and will be presented here. This quantity is independent of hardware changes in the detector and, therefore, isolates atmospheric factors that can impact the performance of the instrument, and in particular the spectral results. Its positive correlation with independent measurements of the aerosol optical depth (AOD) retrieved from data of the Multi-angle Imaging SpectroRadiometer (MISR) on board NASA's Terra satellite is also presented here. ",Influence of aerosols from biomass burning on the spectral analysis of Cherenkov telescopes " We compute the tail algebras of exchangeable monotone stochastic processes. This allows us to prove the analogue of de Finetti's theorem for this type of process. In addition, since the vacuum state on the $q$-deformed $C^*$-algebra is the only exchangeable state when $|q|<1$, we draw our attention to its tail algebra, which turns out to obey a zero-one law. ",Tail algebras for monotone and $q$-deformed exchangeable stochastic processes " With the increasing usage of deep learning algorithms in many applications, new research questions related to privacy and adversarial attacks are emerging. 
At the same time, improving deep learning algorithms requires more and more data to be shared within the research community. Methodologies like federated learning, differential privacy and additive secret sharing provide a way to train machine learning models at the edge without moving the data from the edge. However, this is very computationally intensive and prone to adversarial attacks. Therefore, this work introduces FedCollabNN, a privacy-preserving framework for training machine learning models at the edge, which is computationally efficient and robust against adversarial attacks. The simulation results using the MNIST dataset indicate the effectiveness of the framework. ",Private Dataset Generation Using Privacy Preserving Collaborative Learning " We give an interpretation of the bilateral exit problem for L\'{e}vy processes via the study of an elementary Markov chain. We exhibit a strong connection between this problem and Krein's theory on strings. For instance, for symmetric L\'{e}vy processes with bounded variation, the L\'{e}vy exponent is the corresponding spectral density and the Wiener-Hopf factorization turns out to be a version of Krein's entropy formula. ",Krein's Theory applied to fluctuations of L\'{e}vy processes " The properties of excitons formed in spherical quantum dots are studied using the $\mathbf{k}\cdot\mathbf{p}$ method within the Hartree approximation. The spherical quantum dots considered have a central core and several concentric layers of different semiconductor materials that are modeled as a succession of potential wells and barriers. The $\mathbf{k}\cdot\mathbf{p}$ Hamiltonian and the Coulomb equations for the electron-hole pair are solved using a self-consistent iterative method. The calculation of the spectrum of the empty quantum dot and the electron-hole pair is performed by means of a very accurate numerical approximation. 
It is found that the exciton binding energy as a function of the core radius of the quantum dot shows a strong non-linear behaviour. In particular, for quantum dots with two potential wells, the binding energy presents a large steep change. This last behaviour is explained in terms of the polarization charges at the interfaces between different materials and the matching conditions for the eigenfunctions. ",Excitonic states in spherical layered quantum dots " Clustering data into meaningful subsets is a major task in scientific data analysis. To date, various strategies ranging from model-based approaches to data-driven schemes have been devised for efficient and accurate clustering. One important class of clustering methods that is of particular interest is the class of exemplar-based approaches. This interest primarily stems from the amount of compressed information encoded in these exemplars that effectively reflect the major characteristics of the respective clusters. Affinity propagation (AP) has proven to be a powerful exemplar-based approach that refines the set of optimal exemplars by iterative pairwise message updates. However, a critical limitation is its inability to capitalize on known networked relations between data points often available for various scientific datasets. To mitigate this shortcoming, we propose geometric-AP, a novel clustering algorithm that effectively extends AP to take advantage of the network topology. Geometric-AP obeys network constraints and uses max-sum belief propagation to leverage the available network topology for generating smooth clusters over the network. Extensive performance assessment reveals a significant enhancement in the quality of the clustering results when compared to benchmark clustering schemes. In particular, we demonstrate that geometric-AP performs extremely well even in cases where the original AP fails drastically. 
",Geometric Affinity Propagation for Clustering with Network Knowledge " Schur's Theorem and its generalisation, Baer's Theorem, are distinguished results in group theory, connecting the upper central quotients with the lower central series. The aim of this paper is to generalise these results in two different directions, using novel methods related with the non-abelian tensor product. In particular, we prove a version of Schur-Baer Theorem for finitely generated groups. Then, we apply these newly obtained results to describe the $k$-nilpotent multiplier, for $k\geq 2$, and other invariants of groups. ",Some generalisations of Schur's and Baer's theorem and their connection with homological algebra " We extend previous work on dynamical AdS/QCD models by introducing an extra ingredient under the form of a background magnetic field, this to gain insight into the influence such field can have on crucial QCD observables. Therefore, we construct a closed form analytic solution to an Einstein-Maxwell-dilaton system with a magnetic field. We specifically focus on the deconfinement transition, reporting inverse magnetic catalysis, and on the string tension, reporting a weaker/stronger confinement along/perpendicular to the magnetic field. The latter, being of importance to potential modelling of heavy quarkonia, is in qualitative agreement with lattice findings. ",Anisotropic string tensions and inversely magnetic catalyzed deconfinement from a dynamical AdS/QCD model " Standard numerical methods for hyperbolic PDEs require for stability a CFL-condition which implies that the time step size depends on the size of the elements of the mesh. On cut-cell meshes, elements can become arbitrarily small and thus the time step size cannot take the size of small cut-cells into account but has to be chosen based on the background mesh elements. 
A remedy for this is the so-called DoD (domain of dependence) stabilization, for which several favorable theoretical and numerical properties have been shown in one and two space dimensions. Up to now, the method has been restricted to stabilization of cut-cells with exactly one inflow and one outflow face, i.e. triangular cut-cells with a no-flow face. We extend the DoD stabilization to cut-cells with multiple in- and outflow faces by properly considering the flow distribution inside the cut-cell. We further prove L2-stability for the semi-discrete formulation in space and present numerical results to validate the proposed extension. ",DoD Stabilization of linear hyperbolic PDEs on general cut-cell meshes " We present a model of a diffusion-absorption process in a system which consists of two media separated by a thin partially permeable membrane. The kind of diffusion as well as the parameters of the process may be different in the two media. Based on a simple model of a particle's random walk in a membrane system, we derive the Green's functions and then find the boundary conditions at the membrane. One of the boundary conditions is rather complicated but takes a relatively simple form in terms of the Laplace transform. Assuming that particles diffuse independently of one another, the obtained boundary conditions can be used to solve the differential or integro-differential equations describing the processes in multilayered systems for any initial condition. We consider normal diffusion, subdiffusion and slow subdiffusion processes, and we also suggest how superdiffusion could be included in this model. The presented method provides the functions in terms of the Laplace transform, and some useful methods of calculating the inverse Laplace transform are shown. 
",Model of anomalous diffusion--absorption process in a system consisting of two different media separated by a thin membrane " The first law of thermodynamics, which governs energy conservation, is traditionally formulated as an equality. Surprisingly, we demonstrate that the first law alone implies a universal Landauer-like inequality linking changes in system entropy and energy. However, contrasting with the Landauer principle derived from the second law of thermodynamics, our obtained Landauer-like inequality solely relies on system information and is applicable in scenarios where implementing the Landauer principle becomes challenging. Furthermore, the Landauer-like inequality can complement the Landauer principle by establishing a dual {\it upper} bound on heat dissipation. We illustrate the practical utility of the Landauer-like inequality in dissipative quantum state preparation and quantum information erasure applications. Our findings offer new insights into identifying thermodynamic constraints relevant to the fields of quantum thermodynamics and the energetics of quantum information processing and more specifically, this approach could facilitate investigations into systems coupled to non-thermal baths or scenarios where access to bath information is limited. ",Universal Landauer-Like Inequality from the First Law of Thermodynamics " In this paper we study the molecular gas content of a representative sample of 67 of the most massive early-type galaxies in the local universe, drawn uniformly from the MASSIVE survey. We present new IRAM-30m telescope observations of 30 of these galaxies, allowing us to probe the molecular gas content of the entire sample to a fixed molecular-to-stellar mass fraction of 0.1%. The total detection rate in this representative sample is 25$^{+5.9}_{-4.4}$%, and by combining the MASSIVE and ATLAS$^{\rm 3D}$ molecular gas surveys we find a joint detection rate of 22.4$^{+2.4}_{-2.1}$%. 
This detection rate seems to be independent of galaxy mass, size, position on the fundamental plane, and local environment. We show here for the first time that true slow rotators can host molecular gas reservoirs, but the rate at which they do so is significantly lower than for fast rotators. Objects with a higher velocity dispersion at fixed mass (a higher kinematic bulge fraction) are less likely to have detectable molecular gas, and where gas does exist, have lower molecular gas fractions. In addition, satellite galaxies in dense environments have $\approx$0.6 dex lower molecular gas-to-stellar mass ratios than isolated objects. In order to interpret these results, we created a toy model, which we use to constrain the origin of the gas in these systems. We are able to derive an independent estimate of the gas-rich merger rate in the low-redshift universe. These gas-rich mergers appear to dominate the supply of gas to ETGs, but stellar mass loss, hot halo cooling and transformation of spiral galaxies also play a secondary role. ",The MASSIVE survey - XI. What drives the molecular gas properties of early-type galaxies " We study the problem of computing weighted sum-of-squares (WSOS) certificates for positive polynomials over a compact semialgebraic set. Building on the theory of interior-point methods for convex optimization, we introduce the concept of dual cone certificates, which allows us to interpret vectors from the dual of the sum-of-squares cone as rigorous nonnegativity certificates of a WSOS polynomial. Whereas conventional WSOS certificates are alternative representations of the polynomials they certify, dual certificates are distinct from the certified polynomials; moreover, each dual certificate certifies a full-dimensional convex cone of WSOS polynomials. 
As a result, rational WSOS certificates can be constructed from numerically computed dual certificates at little additional cost, without any rounding or projection steps applied to the numerical certificates. As an additional algorithmic application, we present an almost entirely numerical hybrid algorithm for computing the optimal WSOS lower bound of a given polynomial along with a rational dual certificate, with a polynomial-time computational cost per iteration and linear rate of convergence. ",Dual certificates and efficient rational sum-of-squares decompositions for polynomial optimization over compact sets " We consider the problem of obtaining effective representations for the solutions of linear, vector-valued stochastic differential equations (SDEs) driven by non-Gaussian pure-jump L\'evy processes, and we show how such representations lead to efficient simulation methods. The processes considered constitute a broad class of models that find application across the physical and biological sciences, mathematics, finance and engineering. Motivated by important relevant problems in statistical inference, we derive new, generalised shot-noise simulation methods whenever a normal variance-mean (NVM) mixture representation exists for the driving L\'evy process, including the generalised hyperbolic, normal-Gamma, and normal tempered stable cases. Simple, explicit conditions are identified for the convergence of the residual of a truncated shot-noise representation to a Brownian motion in the case of the pure L\'evy process, and to a Brownian-driven SDE in the case of the L\'evy-driven SDE. These results provide Gaussian approximations to the small jumps of the process under the NVM representation. 
The resulting representations are of particular importance in state inference and parameter estimation for L\'evy-driven SDE models, since the resulting conditionally Gaussian structures can be readily incorporated into latent variable inference methods such as Markov chain Monte Carlo (MCMC), Expectation-Maximisation (EM), and sequential Monte Carlo. ",Generalised shot noise representations of stochastic systems driven by non-Gaussian L\'evy processes We approach the study of complete bifix decodings of (uniformly) recurrent languages with the help of the free profinite monoid. We show that the complete bifix decoding of a uniformly recurrent language $F$ by an $F$-charged rational complete bifix code is uniformly recurrent. An analogous result is obtained for recurrent languages. ,A profinite approach to complete bifix decodings of recurrent languages " We present a combined analytical and numerical micromagnetic study of the equilibrium energy, size and shape of anti-skyrmionic magnetic configurations. Anti-skyrmions can be stabilized when the Dzyaloshinskii-Moriya interaction has opposite signs along two orthogonal in-plane directions, breaking the magnetic circular symmetry. We compare the equilibrium energy, size and shape of anti-skyrmions and skyrmions that are stabilized respectively in environments with anisotropic and isotropic Dzyaloshinskii-Moriya interaction, but with the same strength of the magnetic interactions. When the dipolar interactions are neglected, the skyrmion and the anti-skyrmion have the same energy, shape and size in their respective environment. However, when dipolar interactions are considered, the energy of the anti-skyrmion is strongly reduced and its equilibrium size increased with respect to the skyrmion. While the skyrmion configuration shows homochiral N\'{e}el magnetization rotations, anti-skyrmions show partly N\'{e}el and partly Bloch rotations. The latter do not produce magnetic charges and thus cost less dipolar energy. 
Both magnetic configurations are stable when the magnetic energies almost cancel each other, which means that a small variation of one parameter can drastically change their configuration, size and energy. ",Micromagnetics of anti-skyrmions in ultrathin films " The unexpectedly high incidence of carbon-enhanced, s-process enriched unevolved stars amongst extremely metal-poor stars in the halo provides a significant constraint on the Initial Mass Function (IMF) in the early Galaxy. We argue that these objects are evidence for the past existence of a large population of intermediate-mass stars, and conclude that the IMF in the early Galaxy was different from the present, and shifted toward higher masses. ",Observational evidence for a different IMF in the early Galaxy " In the present contribution three means of measuring the geometrical and topological complexity of photons' paths in random media are proposed. This is realized by investigating the behavior of the average crossing number, the mean writhe, and the minimal crossing number of photons' paths generated by Monte Carlo (MC) simulations, for different sets of optical parameters. It is observed that the complexity of the photons' paths increases for increasing light source/detector spacing, and that highly ""knotted"" paths are formed. Due to the particular rules utilized to generate the MC photons' paths, the present results may be of interest not only for the biomedical optics community, but also from a pure mathematical point of view. ",Topological complexity of photons' paths in biological tissues " We present a novel method to compute $\textit{assume-guarantee contracts}$ in non-zero-sum two-player games over finite graphs where each player has a different $ \omega $-regular winning condition. Given a game graph $G$ and two parity winning conditions $\Phi_0$ and $\Phi_1$ over $G$, we compute $\textit{contracted strategy-masks}$ ($\texttt{csm}$) $(\Psi_{i},\Phi_{i})$ for each Player $i$. 
Within a $\texttt{csm}$, $\Phi_{i}$ is a $\textit{permissive strategy template}$ which collects an infinite number of winning strategies for Player $i$ under the assumption that Player $1-i$ chooses any strategy from the $\textit{permissive assumption template}$ $\Psi_{i}$. The main feature of $\texttt{csm}$'s is their power to $\textit{fully decentralize all remaining strategy choices}$ -- if the two players' $\texttt{csm}$'s are compatible, they provide a pair of new local specifications $\Phi_0^\bullet$ and $\Phi_1^\bullet$ such that Player $i$ can locally and fully independently choose any strategy satisfying $\Phi_i^\bullet$ and the resulting strategy profile is ensured to be winning in the original two-objective game $(G,\Phi_0,\Phi_1)$. In addition, the new specifications $\Phi_i^\bullet$ are $\textit{maximally cooperative}$, i.e., allow for the distributed synthesis of any cooperative solution. Further, our algorithmic computation of $\texttt{csm}$'s is complete and ensured to terminate. We illustrate how the unique features of our synthesis framework effectively address multiple challenges in the context of \enquote{correct-by-design} logical control software synthesis for cyber-physical systems and provide empirical evidence that our approach possesses desirable structural and computational properties compared to state-of-the-art techniques. ",Contract-Based Distributed Synthesis in Two-Objective Parity Games " We describe the physical model, numerical algorithms, and software structure of WRF-Fire. WRF-Fire consists of a fire-spread model, implemented by the level-set method, coupled with the Weather Research and Forecasting model. In every time step, the fire model inputs the surface wind, which drives the fire, and outputs the heat flux from the fire into the atmosphere, which in turn influences the atmosphere. The level-set method allows submesh representation of the burning region and flexible implementation of various ignition modes. 
WRF-Fire is distributed as part of WRF and uses the WRF parallel infrastructure for parallel computing. ",Coupled atmosphere-wildland fire modeling with WRF-Fire " In this note, we present an infinite family of promise problems which can be solved exactly by just tuning the transition amplitudes of a two-state quantum finite automaton operating in real-time mode, whereas the sizes of the corresponding classical automata grow without bound. ",Superiority of exact quantum automata for promise problems " We discuss a metal-insulator transition caused by random couplings of magnetic moments in itinerant systems. An analytic solution for the single particle Green function is derived from dynamical self-consistency equations; the corresponding density of states is characterized by the opening of a gap. The scaling behavior of observables is analyzed in the framework of a scaling theory and different crossover lines are identified. A fluctuation expansion around the mean field solution accounts for both interaction and localization effects in a consistent manner and is argued to be relevant for the description of the recently discovered metal-insulator transition in 2d electronic systems. ",Metal-Insulator Transition in Randomly Interacting Systems " The gravitino problem is discussed in detail. We derive an upper bound on the reheating temperature from the constraints of big-bang nucleosynthesis and the present mass density of the universe. Compared to previous works, we have improved the following three points: (i) the gravitino production cross sections are calculated by taking all the relevant terms in the supergravity lagrangian into account, (ii) the high-energy photon spectrum is obtained by solving the Boltzmann equations numerically, and (iii) the evolutions of the light elements (D, T, $^3$He, $^4$He) at temperatures lower than $\sim$1MeV are calculated by using a modified version of Kawano's computer code.
",Effects of the Gravitino on the Inflationary Universe " The large-$N_c$ masses of light vector, axial, scalar and pseudoscalar mesons are calculated from QCD spectral sum rules for a particular ansatz interpolating the radial Regge trajectories. The ansatz includes a linear part plus exponentially decreasing corrections to the meson masses and residues. The form of the corrections was proposed some time ago from consistency with the analytical structure of the Operator Product Expansion of the two-point correlation functions. Two solutions are found and compared with the experimental data. ",Large-$N_c$ masses of light mesons from QCD sum rules for non-linear radial Regge trajectories " Antiferromagnetism is relevant to high temperature (high-Tc) superconductivity because copper oxide and iron arsenide high-Tc superconductors arise from electron- or hole-doping of their antiferromagnetically (AF) ordered parent compounds. There are two broad classes of explanation for the phenomenon of antiferromagnetism: in the local moment picture, appropriate for the insulating copper oxides, AF interactions are well described by a Heisenberg Hamiltonian; while in the itinerant model, suitable for metallic chromium, AF order arises from quasiparticle excitations of a nested Fermi surface. There has been contradictory evidence regarding the microscopic origin of the AF order in iron arsenide materials, with some favoring a localized picture and others supporting an itinerant point of view. More importantly, there has not even been agreement about the simplest effective ground state Hamiltonian necessary to describe the AF order. Here we report inelastic neutron scattering mapping of spin-wave excitations in CaFe2As2, a parent compound of the iron arsenide family of superconductors. We find that the spin waves in the entire Brillouin zone can be described by an effective three-dimensional local moment Heisenberg Hamiltonian, but the large in-plane anisotropy cannot.
Therefore, magnetism in the parent compounds of iron arsenide superconductors is neither purely local nor purely itinerant; rather, it is a complicated mix of the two. ",Spin Waves and Magnetic Exchange Interactions in CaFe2As2 " The aims of this study are to identify risk factors and develop a composite risk factor for the initial stage of the COVID-19 pandemic at the regency level in Indonesia. Three risk factors, i.e., exposure, transmission and susceptibility, are investigated. Multivariate regression and canonical correlation analysis are implemented to measure the association between the risk factors and the initial stage of reported COVID-19 cases. The result reveals a strong correlation between the composite risk factor and the number of COVID-19 cases at the initial stage of the pandemic. The influence of population density, percentage of people commuting, international exposures, and the number of public places prone to COVID-19 transmission is observed. Large regencies and cities, mostly in Java, have high risk scores. The largest risk scores belong to regencies that are part of the Jakarta Metropolitan Area. ",Mapping of Covid-19 Risk Factors of Cities and Regencies in Indonesia during the Initial Stages of the Pandemic " The black hole candidate XTE J1817-330 was discovered in outburst on 26 January 2006 with RXTE/ASM. One year later, on 28 February 2007, another X-ray transient discovered in 1996, XTE J1856+053, was detected by RXTE during a new outburst. We report on the spectra obtained by XMM-Newton of these two black hole candidates. ",XMM-Newton observations of XTE J1817-330 and XTE J1856+053 Ramsey's method of separated oscillatory fields is applied to the excitation of the cyclotron motion of short-lived ions in a Penning trap to improve the precision of their measured mass.
The theoretical description of the extracted ion-cyclotron-resonance line shape is derived and its correctness demonstrated experimentally by measuring the mass of the short-lived $^{38}$Ca nuclide with an uncertainty of $1.6\cdot 10^{-8}$ using the ISOLTRAP Penning trap mass spectrometer at CERN. The mass value of the superallowed beta-emitter $^{38}$Ca is an important contribution for testing the conserved-vector-current hypothesis of the electroweak interaction. It is shown that the Ramsey method applied to mass measurements yields a statistical uncertainty similar to that obtained by the conventional technique, but ten times faster. ,Separated Oscillatory Fields for High-Precision Penning Trap Mass Spectrometry " 1T-TiSe2 has a semimetallic band structure at room temperature and undergoes a phase transition to a triple-q charge density wave (CDW) state with a commensurate superlattice structure (2a * 2a * 2c) below Tc ~ 200 K at ambient pressure. This phase transition is caused by cooperative phenomena involving electron-phonon and electron-hole (excitonic) interactions, and cannot be described by a standard CDW framework. By Cu intercalation or the application of pressure, this phase transition temperature is suppressed and superconductivity (SC) appears. However, it is not clear what kind of order parameters are affected by these two procedures. We investigated the crystal structure of CuxTiSe2 and pressurized 1T-TiSe2 around the SC state by synchrotron x-ray diffraction on single crystals. In the high-temperature phase, the variations of the structural parameters for the case of Cu intercalation and for the application of pressure are considerably different. Moreover, the relationship between the critical points of the CDW phase transition and the SC dome is also different for the two cases. The excitonic interaction appears to play an important role in the P-T phase diagram of 1T-TiSe2, but not in the x-T phase diagram.
",Effect of Cu intercalation and pressure on excitonic interaction in 1T-TiSe2 " It is shown that a periodic perturbation of the quantum pendulum (similarly to the classical one) in the neighbourhood of the separatrix can bring about irreversible phenomena. As a result of recurrent passages between degenerate states, the system gets self-chaotized and passes from the pure state to the mixed one. Chaotization involves the states, the branch points of whose levels participate in a slow ""drift"" of the system along the Mathieu characteristics, this ""drift"" being caused by a slowly changing variable field. Recurrence relations are obtained for the populations of levels participating in the irreversible evolution process. It is shown that the entropy of the system first grows and, after reaching the equilibrium state, acquires a constant value. ",Quantum chaos at the Kinetic Stage of Evolution " Early observations with JWST have led to the discovery of an unexpectedly large density (stellar mass density $\rho_*\approx 10^{6}\,M_{\odot}\,Mpc^{-3}$) of massive galaxies (stellar masses $M_*\geq 10^{10.5}M_{\odot}$) at extremely high redshifts $z\approx 10$. We show that - under the most conservative assumptions, and independently of the baryon physics involved in galaxy formation - such an abundance is not only in tension with the standard $\Lambda$CDM cosmology, but provides extremely tight constraints on the expansion history of the Universe and on the growth factors corresponding to a wide class of Dark Energy (DE) models. The constraints we derive rule out with high ($>2\sigma$) confidence a major portion of the parameter space of Dynamical DE models allowed (or even favoured) by existing cosmological probes. ",High-Redshift Galaxies from Early JWST Observations: Constraints on Dark Energy Models " The Linial-Meshulam complex model is a natural higher-dimensional analog of the Erd\H{o}s-R\'enyi graph model.
In recent years, Linial and Peled established a limit theorem for Betti numbers of Linial-Meshulam complexes with an appropriate scaling of the underlying parameter. The present paper aims to extend that result to more general random simplicial complex models. We introduce a class of homogeneous and spatially independent random simplicial complexes, including the Linial-Meshulam complex model and the random clique complex model as special cases, and we study the asymptotic behavior of their Betti numbers. Moreover, we obtain the convergence of the empirical spectral distributions of their Laplacians. A key element in the argument is the local weak convergence of simplicial complexes. Inspired by the work of Linial and Peled, we establish the local weak limit theorem for homogeneous and spatially independent random simplicial complexes. ",Law of large numbers for Betti numbers of homogeneous and spatially independent random simplicial complexes We present equivalent conditions for a space $X$ with an unconditional basis to admit an equivalent norm with a strictly convex dual norm. ,Unconditional bases and strictly convex dual renormings " Checkpoint/restart (C/R) provides fault-tolerant computing capability, enables long-running applications, and provides scheduling flexibility for computing centers to support diverse workloads with different priorities. It is therefore vital to get transparent C/R capability working at NERSC. MANA, by Garg et al., is a transparent checkpointing tool that has been selected due to its MPI-agnostic and network-agnostic approach. However, originally written as a proof-of-concept code, MANA was not ready to use with NERSC's diverse production workloads, which are dominated by MPI and hybrid MPI+OpenMP applications.
In this talk, we present ongoing work at NERSC to enable MANA for NERSC's production workloads, including fixing bugs that were exposed by the top applications at NERSC, adding new features to address system changes, evaluating C/R overhead at scale, etc. The lessons learned from making MANA production-ready for HPC applications will be useful for C/R tool developers, supercomputing centers and HPC end-users alike. ",Improving scalability and reliability of MPI-agnostic transparent checkpointing for production workloads at NERSC " Various soft materials share some common features, such as a significant entropic effect, large fluctuations, sensitivity to thermodynamic conditions, and mesoscopic characteristic spatial and temporal scales. However, no quantitative definitions have yet been provided for soft matter, and the intrinsic mechanisms leading to their common features are unclear. In this work, from the viewpoint of statistical mechanics, we show that soft matter works in the vicinity of a specific thermodynamic state named the moderate point, at which entropy and enthalpy contributions among substates along a certain order parameter are well balanced or have a minimal difference. Around the moderate point, the order parameter fluctuation, the associated response function, and the spatial correlation length are maximized, which explains the large fluctuations, the sensitivity to thermodynamic conditions, and the mesoscopic spatial and temporal scales of soft matter, respectively. Possible applications to switching chemical bonds or to allosteric biomachines determining their best working temperatures are also discussed. ",Moderate Point: Balanced Entropy and Enthalpy Contributions in Soft Matter " ""Encoded in the large, highly evolved sensory and motor portions of the human brain is a billion years of experience about the nature of the world and how to survive in it.
The deliberate process we call reasoning is, I believe, the thinnest veneer of human thought, effective only because it is supported by this much older and much more powerful, though usually unconscious, sensorimotor knowledge. We are all prodigious Olympians in perceptual and motor areas, so good that we make the difficult look easy. Abstract thought, though, is a new trick, perhaps less than 100 thousand years old. We have not yet mastered it. It is not all that intrinsically difficult; it just seems so when we do it."" - Hans Moravec. Moravec's paradox refers to the fact that it is the seemingly easy day-to-day problems that are harder to implement in a machine than the seemingly complicated logic-based problems of today. The results prove that most artificially intelligent machines are as adept as, if not more adept than, us at undertaking long calculations or even playing chess, but their logic brings them nowhere when it comes to carrying out everyday tasks like walking, facial gesture recognition or speech recognition. ",To study the phenomenon of the Moravec's Paradox " The uplink where both the transmitter and receiver can use a large antenna array is considered. This is proposed as a method of antenna offloading and connecting small cell access points (SCAP) in a Two-Tier cellular network. Due to having a limited number of RF-chains, hybrid beamformers are designed where phase-only processing is done at the RF-band, followed by digital processing at the baseband. The proposed receiver is a row combiner that clusters sufficiently correlated antenna elements, and its performance is compared against random projection via a Discrete Fourier Transform (DFT) matrix. The analogue to the row combiner is a column spreader, which is dependent on the transmit correlation, and repeats the transmitted signal over antenna elements that are correlated. A key benefit of this approach is to reduce the number of phase shifters used, while outperforming the DFT scheme.
When only the transmitter has correlation and is RF-chain limited, the baseband precoding vectors are shown to be the eigenvectors of the effective transmit correlation matrix. Depending on the channel correlation, this matrix can be approximated to have a tridiagonal Toeplitz structure with the proposed column spreader (CS). The resulting eigenvalues have a closed-form solution which allows us to characterize the sum rate of the system. Most interestingly, the associated eigenvectors do not require knowledge of the effective transmit correlation matrix to be calculated using an Eigenvalue Decomposition (EVD) method. ",Hybrid Beamforming for Massive MIMO Backhaul (Working Title) " Event-driven multi-threaded programming is an important idiom for structuring concurrent computations. Stateless Model Checking (SMC) is an effective verification technique for multi-threaded programs, especially when coupled with Dynamic Partial Order Reduction (DPOR). Existing SMC techniques are often ineffective in handling event-driven programs, since they will typically explore all possible orderings of event processing, even when events do not conflict. We present Event-DPOR, a DPOR algorithm tailored to event-driven multi-threaded programs. It is based on Optimal-DPOR, an optimal DPOR algorithm for multi-threaded programs; we show how it can be extended for event-driven programs. We prove correctness of Event-DPOR for all programs, and optimality for a large subclass. One complication is that an operation in Event-DPOR, which checks for redundancy of new executions, is NP-hard, as we show in this paper; we address this by a sequence of inexpensive (but incomplete) tests which check for redundancy efficiently.
Our implementation and experimental evaluation show that, in comparison with other tools in which handler threads are simulated using locks, Event-DPOR can be exponentially faster than other state-of-the-art DPOR algorithms on a variety of programs and manages to completely avoid unnecessary exploration of executions. ",Tailoring Stateless Model Checking for Event-Driven Multi-Threaded Programs " The charged-particle storage capacity of microtraps (micro-Penning-Malmberg traps) with large length-to-radius aspect ratios and radii of the order of tens of microns was explored. Simulation studies of the motions of charged particles were conducted with the particle-in-cell WARP code and the Charged Particle Optics (CPO) program. The new design of the trap consisted of an array of microtraps with a substantially lower end-electrode potential than conventional Penning-Malmberg traps, which makes this trap quite portable. It was computationally shown that each microtrap with 50 micron radius stored positrons with a density (1.6x10^11 cm^-3) even higher than that in conventional Penning-Malmberg traps (about 10^11 cm^-3) while the confinement voltage was only 10 V. We showed how to evaluate and reduce the numerical noise by controlling the modeling parameters so that the simulated plasma can evolve toward computational equilibrium. The local equilibrium distribution, where longitudinal force balance is satisfied along each magnetic field line, was attained on the time scales of the simulation for plasmas initialized with a uniform density and a Boltzmann energy distribution. The charge clouds developed the expected radial soft-edge density distribution, and rigid rotation evolved to some extent. To reach global equilibrium (i.e., rigid rotation) longer runs are required. The plasma confinement time and its thermalization were independent of the length. The length dependence reported in experiments is due to fabrication and field errors.
Computationally, more than one hundred million positrons were trapped in one microtrap with 50 micron radius and 10 cm length immersed in a 7 T uniform, axial magnetic field, and the density scaled as r^-2 down to 3 micron. Larger densities were trapped with higher barrier potentials. ",Simulation studies of the behavior of positrons in a microtrap with long aspect ratio " In secret sharing protocols, a secret is to be distributed among several partners so that, leaving out any number of them, the rest do not have the complete information. Strong multiqubit correlations in the state by which secret sharing is carried out had been proposed as a criterion for security of such protocols against individual attacks by an eavesdropper. However, we show that states with weak multiqubit correlations can also be used for secure secret sharing. That our state has weak multiqubit correlations is shown from the perspective of violation of local realism, and also by showing that its higher-order correlations are described by lower ones. We then present a unified criterion for security of secret sharing in terms of violation of local realism, which works when the secret sharing state is the Greenberger-Horne-Zeilinger state (with strong multiqubit correlations), as well as for states of a different class (with weak multiqubit correlations). ",Unified criterion for security of secret sharing in terms of violation of Bell inequality " The current work presents an experimental investigation of the dynamic interactions between flow scales caused by repeated actions of the nonlinear term of the Navier-Stokes equation. Injecting a narrow-band oscillation, representing a single Fourier mode, into a round jet flow allows the measurement of the downstream generation and development of higher-harmonic spectral components and of when these components are eventually absorbed into fully developed turbulence.
Furthermore, the observed dynamic evolution of the measured power spectra corresponds well to the cascade delays reported by others. Closely matching spectral development and cascade delays have also been derived directly from a one-dimensional solution of the Navier-Stokes equation described in a companion paper. The results in the current work provide vital information about how initial conditions influence the development of the shape of the spectrum and about the extent of the time scales in the triad interaction process, which should be of significance to turbulence modelers. ",Experimental investigation of the turbulent cascade development by injection of single large-scale Fourier modes " Motivated by Xing's method [7], we show that there exist [n,k,d] linear Hermitian codes over F_{q^2} with k+d>=n-3 for all sufficiently large q. This improves the asymptotic bound of Algebraic-Geometry codes from Hermitian curves given in [9,10]. ",Improvement on Parameters of Algebraic-Geometry Codes from Hermitian Curves " This work addresses the cross-corpora generalization issue for the low-resourced spoken language identification (LID) problem. We have conducted the experiments in the context of Indian LID and identified strikingly poor cross-corpora generalization due to corpora-dependent non-lingual biases. Our contribution in this work is twofold. First, we propose domain diversification, which diversifies the limited training data using different audio data augmentation methods. We then propose the concept of maximally diversity-aware cascaded augmentations and optimize the augmentation fold-factor for effective diversification of the training data. Second, we introduce the idea of domain generalization considering the augmentation methods as pseudo-domains. Towards this, we investigate both domain-invariant and domain-aware approaches.
Our LID system is based on the state-of-the-art emphasized channel attention, propagation, and aggregation based time delay neural network (ECAPA-TDNN) architecture. We have conducted extensive experiments with three widely used corpora for Indian LID research. In addition, we conduct a final blind evaluation of our proposed methods on the Indian subset of the VoxLingua107 corpus collected in the wild. Our experiments demonstrate that the proposed domain diversification is more promising than commonly used simple augmentation methods. The study also reveals that domain generalization is a more effective solution than domain diversification. We also notice that domain-aware learning performs better for same-corpora LID, whereas domain-invariant learning is more suitable for cross-corpora generalization. Compared to basic ECAPA-TDNN, the proposed domain-invariant extensions improve the cross-corpora EER by up to 5.23%. In contrast, the proposed domain-aware extensions also improve performance for same-corpora test scenarios. ",Cross-Corpora Spoken Language Identification with Domain Diversification and Generalization " Self-poisoning is a kinetic trap that can impair or prevent crystal growth in a wide variety of physical settings. Here we use dynamic mean-field theory and computer simulation to argue that poisoning is ubiquitous because its emergence requires only the notion that a molecule can bind in two (or more) ways to a crystal; that those ways are not energetically equivalent; and that the associated binding events occur with sufficiently unequal probability. If these conditions are met then the steady-state growth rate is in general a non-monotonic function of the thermodynamic driving force for crystal growth, which is characteristic of poisoning. Our results also indicate that relatively small changes of system parameters could be used to induce recovery from poisoning.
",Minimal physical requirements for crystal growth self-poisoning " We calculate the polarization of prompt $J/\psi$ and $\Upsilon$(1S) production using the color evaporation model at leading order. We present the polarization parameter $\lambda_\vartheta$ as a function of center of mass energy and rapidity in $p+p$ collisions. We also compare the $x_F$ dependence to experimental results in $p$+Cu and $\pi$+W collisions, and predict the $x_F$ dependence in $p$+Pb collisions at fixed-target energies. At energies far above the $Q\overline{Q}$ production threshold, we find the prompt $J/\psi$ and $\Upsilon$(1S) production to be longitudinally polarized with $\lambda_\vartheta^{J/\psi}=-0.51^{+0.05}_{-0.16}$ and $\lambda_\vartheta^{\Upsilon \rm{(1S)}}=-0.69^{+0.03}_{-0.02}$. Both prompt $J/\psi$ and prompt $\Upsilon$(1S) are also longitudinally polarized at central rapidity, becoming transversely polarized at the most forward rapidities. ",Polarization of prompt $J/\psi$ and $\Upsilon$(1S) production in the color evaporation model " Using Schwinger's quantum action principle, dispersion relations are obtained for neutral scalar mesons interacting with bi-local sources. These relations are used as the basis of a method for representing the effect of interactions in the Gaussian approximation to field theory, and it is argued that a marked inhomogeneity in the space-time dependence of the sources forces a discrete spectrum on the field. The development of such a system is characterized by features commonly associated with chaos and self-organization (localization by domain or cell formation). The Green functions play the role of an iterative map in phase space. Stable systems reside at the fixed points of the map. The present work can be applied to self-interacting theories by choosing suitable properties for the sources. Rapid transport leads to a second-order phase transition and anomalous dispersion.
Finally, it is shown that there is a compact representation of the non-equilibrium dynamics in terms of generalized chemical potentials, or equivalently as a pseudo-gauge theory, with an imaginary charge. This analogy shows, more clearly, how dissipation and entropy production are related to the source picture and transform a flip-flop-like behaviour between two reservoirs into the Landau problem in a constant `magnetic field'. A summary of conventions and formalism is provided as a basis for future work. ","Quantum fields in disequilibrium: neutral scalar bosons with long-range, inhomogeneous perturbations" " We propose Mask CycleGAN, a novel architecture for unpaired image domain translation built on CycleGAN, with the aim of addressing two issues: 1) unimodality in image translation and 2) lack of interpretability of latent variables. Our innovation in the technical approach comprises three key components: masking scheme, generator and objective. Experimental results demonstrate that this architecture is capable of bringing variations to generated images in a controllable manner and is reasonably robust to different masks. ",Mask CycleGAN: Unpaired Multi-modal Domain Translation with Interpretable Latent Variable " Optofluidic force induction (OF2i) is an optical nanoparticle characterization scheme which achieves real-time optical counting with single-particle sensitivity and high throughput. In a recent paper [\v{S}imi\'c et al., Phys. Rev. Appl. 18, 024056 (2022)], we have demonstrated the working principle for standardized polystyrene nanoparticles, and have developed a theoretical model to analyze the experimental data. In this paper we give a detailed account of the model ingredients including the full working equations, provide additional justification for the assumptions underlying OF2i, and discuss directions for further developments and future research.
",Theoretical description of optofluidic force induction " We prove Taylor scaling for dislocation lines characterized by line-tension and moving by curvature under the action of an applied shear stress in a plane containing a random array of obstacles. Specifically, we show--in the sense of optimal scaling--that the critical applied shear stress for yielding, or percolation-like unbounded motion of the dislocation, scales in proportion to the square root of the obstacle density. For sufficiently small obstacle densities, Taylor scaling dominates the linear scaling that results from purely energetic considerations and, therefore, characterizes the dominant rate-limiting mechanism in that regime. ",A proof of Taylor scaling for curvature-driven dislocation motion through random arrays of obstacles " Given a sufficient statistic for a parametric family of distributions, one can estimate the parameter without access to the data. However, the memory or code size for storing the sufficient statistic may nonetheless still be prohibitive. Indeed, for $n$ independent samples drawn from a $k$-nomial distribution with $d=k-1$ degrees of freedom, the length of the code scales as $d\log n+O(1)$. In many applications, we may not have a useful notion of sufficient statistics (e.g., when the parametric family is not an exponential family) and we also may not need to reconstruct the generating distribution exactly. By adopting a Shannon-theoretic approach in which we allow a small error in estimating the generating distribution, we construct various {\em approximate sufficient statistics} and show that the code length can be reduced to $\frac{d}{2}\log n+O(1)$. We consider errors measured according to the relative entropy and variational distance criteria. For the code constructions, we leverage Rissanen's minimum description length principle, which yields a non-vanishing error measured according to the relative entropy.
For the converse parts, we use Clarke and Barron's formula for the relative entropy of a parametrized distribution and the corresponding mixture distribution. However, this method only yields a weak converse for the variational distance. We develop new techniques to achieve vanishing errors and we also prove strong converses. The latter means that even if the code is allowed to have a non-vanishing error, its length must still be at least $\frac{d}{2}\log n$. ",Minimum Rates of Approximate Sufficient Statistics " We investigate the structure of the spectrum of antiferromagnetically coupled spin-1 bosons on a square lattice using degenerate perturbation theory and exact diagonalizations of finite clusters. We show that the superfluid phase develops an Anderson tower of states typical of nematic long-range order with broken SU(2) symmetry. We further show that this order persists into the Mott insulating phase down to zero hopping for one boson per site, and down to a critical hopping for two bosons per site, in agreement with mean-field and Quantum Monte Carlo results. The connection with the transition between a fragmented condensate and a polar one in a single trap is briefly discussed. ",Anderson tower of states and nematic order of spin-1 bosonic atoms on a 2D lattice " A recently proposed ""DFT+dispersion"" treatment (Rajchel et al., Phys. Rev. Lett., 2010, 104, 163001) is described in detail and illustrated by more examples. The formalism derives the dispersion-free density functional theory (DFT) interaction energy and combines it with the dispersion energy from separate DFT calculations. It consists in the self-consistent polarization of DFT monomers restrained by the exclusion principle via the Pauli blockade technique. Within the monomers a complete exchange-correlation potential should be used, but between them only the exact exchange operates.
The applications to a wide range of molecular complexes, from rare-gas dimers to H-bonds to pi-electron interactions, show good agreement with benchmark values. ",Density Functional Theory Approach to Noncovalent Interactions via Interacting Monomer Densities " We present new, full-orbit observations of the infrared phase variations of the canonical hot Jupiter HD 189733b obtained in the 3.6 and 4.5 micron bands using the Spitzer Space Telescope. When combined with previous phase curve observations at 8.0 and 24 micron, these data allow us to characterize the exoplanet's emission spectrum as a function of planetary longitude. We utilize improved methods for removing the effects of intrapixel sensitivity variations and accounting for the presence of time-correlated noise in our data. We measure a phase curve amplitude of 0.1242% +/- 0.0061% in the 3.6 micron band and 0.0982% +/- 0.0089% in the 4.5 micron band. We find that the times of minimum and maximum flux occur several hours earlier than predicted for an atmosphere in radiative equilibrium, consistent with the eastward advection of gas by an equatorial super-rotating jet. The locations of the flux minima in our new data differ from our previous observations at 8 micron, and we present new evidence indicating that the flux minimum observed in the 8 micron band is likely caused by an over-shooting effect in the 8 micron array. We obtain improved estimates for HD 189733b's dayside planet-star flux ratio of 0.1466% +/- 0.0040% at 3.6 micron and 0.1787% +/- 0.0038% at 4.5 micron; these are the most accurate secondary eclipse depths obtained to date for an extrasolar planet. We compare our new dayside and nightside spectra for HD 189733b to the predictions of models from Burrows et al. (2008) and Showman et al. (2009). We find that HD 189733b's 4.5 micron nightside flux is 3.3 sigma smaller than predicted by the Showman et al. models, which assume that the chemistry is in local thermal equilibrium.
We conclude that this discrepancy is best explained by vertical mixing, which should lead to an excess of CO and correspondingly enhanced 4.5 micron absorption in this region. [abridged] ",3.6 and 4.5 Micron Phase Curves and Evidence for Non-Equilibrium Chemistry in the Atmosphere of Extrasolar Planet HD 189733b " This paper aims to improve the explainability of Autoencoder's (AE) predictions by proposing two explanation methods based on the mean and epistemic uncertainty of log-likelihood estimate, which naturally arise from the probabilistic formulation of the AE called Bayesian Autoencoders (BAE). To quantitatively evaluate the performance of explanation methods, we test them in sensor network applications, and propose three metrics based on covariate shift of sensors: (1) G-mean of Spearman drift coefficients, (2) G-mean of sensitivity-specificity of explanation ranking and (3) sensor explanation quality index (SEQI) which combines the two aforementioned metrics. Surprisingly, we find that explanations of BAE's predictions suffer from high correlation resulting in misleading explanations. To alleviate this, a ""Coalitional BAE"" is proposed, which is inspired by agent-based system theory. Our comprehensive experiments on publicly available condition monitoring datasets demonstrate the improved quality of explanations using the Coalitional BAE. ",Coalitional Bayesian Autoencoders -- Towards explainable unsupervised deep learning " We give complete details on an alternative formulation of the Polya-Szego principle that was mentioned in Remark 1 of our paper ""Isoperimetry and Symmetrization for Logarithmic Sobolev inequalities"". We also provide an alternative proof to a result in the same paper.
",Addendum to Isoperimetry and Symmetrization for Logarithmic Sobolev inequalities " We present an implementation of the hierarchical tree algorithm on the individual timestep algorithm (the Hermite scheme) for collisional $N$-body simulations, running on GRAPE-9 system, a special-purpose hardware accelerator for gravitational many-body simulations. Such combination of the tree algorithm and the individual timestep algorithm was not easy on the previous GRAPE system mainly because its memory addressing scheme was limited only to sequential access to a full set of particle data. The present GRAPE-9 system has an indirect memory addressing unit and a particle memory large enough to store all particles data and also tree nodes data. The indirect memory addressing unit stores interaction lists for the tree algorithm, which is constructed on host computer, and, according to the interaction lists, force pipelines calculate only the interactions necessary. In our implementation, the interaction calculations are significantly reduced compared to direct $N^2$ summation in the original Hermite scheme. For example, we can archive about a factor 30 of speedup (equivalent to about 17 teraflops) against the Hermite scheme for a simulation of $N=10^6$ system, using hardware of a peak speed of 0.6 teraflops for the Hermite scheme. ",Hierarchical Tree Algorithm for Collisional N-body Simulations on GRAPE " In this paper, we study any K\""ahler manifold where the positive orthogonal bisectional curvature is preserved on the K\""ahler Ricci flow. Naturally, we always assume that the first Chern class $C_1$ is positive. In particular, we prove that any irreducible K\""ahler manifold with such property must be biholomorphic to $\mathbb{C}\mathbb{P}^n. $ This can be viewed as a generalization of Siu-Yau\cite{Siuy80}, Morri's solution \cite{Mori79} of the Frankel conjecture. 
According to [8], 2-positivity of the traceless bisectional curvature operator is preserved under the K\""ahler Ricci flow, which in turn implies the positivity of the orthogonal bisectional curvature under the flow. ","On K\""ahler manifolds with positive orthogonal bisectional curvature" " Among various soft computing approaches for time series forecasting, Fuzzy Cognitive Maps (FCMs) have shown remarkable results as a tool to model and analyze the dynamics of complex systems. FCMs have similarities to recurrent neural networks and can be classified as a neuro-fuzzy method. In other words, FCMs are a mixture of fuzzy logic, neural network, and expert system aspects, which act as a powerful tool for simulating and studying the dynamic behavior of complex systems. The most interesting features are knowledge interpretability, dynamic characteristics and learning capability. The goal of this survey paper is mainly to present an overview of the most relevant and recent FCM-based time series forecasting models proposed in the literature. In addition, this article provides an introduction to the fundamentals of the FCM model and learning methodologies. Also, this survey provides some ideas for future research to enhance the capabilities of FCMs in order to address some challenges in real-world experiments, such as handling non-stationary data and scalability issues. Moreover, equipping FCMs with fast learning algorithms is one of the major concerns in this area. ",Time Series Forecasting Using Fuzzy Cognitive Maps: A Survey " We have obtained two-dimensional velocity fields in the ionized gas of a set of 8 double-barred galaxies, at high spatial and spectral resolution, using their H$\alpha$ emission fields measured with a scanning Fabry-Perot spectrometer.
Using the technique by which phase reversals in the non-circular motion indicate a corotation radius, and taking advantage of the high angular and velocity resolution, we have obtained the corotation radii and the pattern speeds of both the major bar and the small central bar in each of the galaxies; there are few such measurements in the literature. Our results show that the inner bar rotates more rapidly than the outer bar by a factor between 3.3 and 3.6. ",The ratio of pattern speeds in double-barred galaxies " Recent studies indicate that a hierarchical Vision Transformer with a macro architecture of interleaved non-overlapped window-based self-attention \& shifted-window operations is able to achieve state-of-the-art performance in various visual recognition tasks, and challenges the ubiquitous convolutional neural networks (CNNs) using densely slid kernels. Most follow-up works attempt to replace the shifted-window operation with other kinds of cross-window communication paradigms, while treating self-attention as the de-facto standard for window-based information aggregation. In this manuscript, we question whether self-attention is the only choice for a hierarchical Vision Transformer to attain strong performance, and we examine the effects of different kinds of cross-window communication. To this end, we replace self-attention layers with embarrassingly simple linear mapping layers, and the resulting proof-of-concept architecture, termed LinMapper, can achieve very strong performance in ImageNet-1k image recognition. Moreover, we find that LinMapper is able to better leverage the pre-trained representations from image recognition and demonstrates excellent transfer learning properties on downstream dense prediction tasks such as object detection and instance segmentation.
We also experiment with other alternatives to self-attention for content aggregation inside each non-overlapped window under different cross-window communication approaches, which all give similar competitive results. Our study reveals that the \textbf{macro architecture} of Swin model families, rather than specific aggregation layers or specific means of cross-window communication, may be more responsible for its strong performance and is the real challenger to the ubiquitous CNN's dense sliding window paradigm. Code and models will be publicly available to facilitate future research. ",What Makes for Hierarchical Vision Transformer? In this paper we give a detailed proof of a result we announced a year ago. This result is an effective version of the theorem of Mazur-Kamienny-Merel concerning uniform bounds for rational torsion points on elliptic curves over number fields. ,Bornes effectives pour la torsion des courbes elliptiques sur les corps de nombres " We have measured the tunnelling current in Nb/Nb$_x$O$_y$/Ni planar tunnel junctions at different temperatures. The junctions are in the intermediate transparency regime. We have extracted the current polarization of the metal/ferromagnet junction without applying a magnetic field. We have used a simple theoretical model that provides consistent fitting parameters for the whole range of temperatures analyzed. We have also been able to gain insight into the microscopic structure of the oxide barriers of our junctions. ",Spin polarized current and Andreev transmission in planar superconducting/ferromagnetic Nb/Ni junctions " The influence of neutrino flavor oscillations on the propagation of magnetohydrodynamic (MHD) waves and instabilities is studied in neutrino-beam driven magnetoplasmas. Using the neutrino MHD model, a general dispersion relation is derived which manifests the resonant interactions of MHD waves, not only with the neutrino beam, but also with the neutrino flavor oscillations.
It is found that the latter contribute to the wave dispersion and enhance the magnitude of the instability of oblique magnetosonic waves. However, the shear-Alfv{\'e}n wave remains unaffected by the neutrino beam and neutrino flavor oscillations. Such an enhancement of the magnitude of the instability of magnetosonic waves can be significant for relatively long-wavelength perturbations in the regimes of high neutrino number density and/or strong magnetic field, giving a convincing mechanism for type-II core-collapse supernova explosions. ",Neutrino magnetohydrodynamic instabilities in presence of two-flavor oscillations " Although there have been recent claims that there is a large dispersion in the abundances of the heavy neutron capture elements in the old Galactic globular cluster M92, we show that the measured dispersion for the absolute abundances of four of the rare earth elements within a sample of 12 luminous red giants in M92 (less than or equal to 0.07 dex) does not exceed the relevant sources of uncertainty. As expected from previous studies, the heavy elements show the signature of the r-process. Their abundance ratios are essentially identical to those of M30, another nearby globular cluster of similar metallicity. ",No Heavy Element Dispersion in the Globular Cluster M92 " X-rays produced by compact flares co-rotating with a Keplerian accretion disc are modulated in time by Doppler effects. We improve on previous calculations of these effects by considering recent models of intrinsic X-ray variability, and compute the expected strength of the relativistic signal in current data of Seyfert galaxies and black hole binaries. Such signals could clearly be seen in, for example, recent XMM-Newton data from MCG-6-30-15, if indeed the X-rays were produced by co-rotating flares concentrated toward the inner disc edge around an extreme Kerr black hole.
The lack of such a signal in the data collected so far supports models in which the X-ray sources in active galaxies do not follow Keplerian orbits close to the black hole. ",On the influence of relativistic effects on X-ray variability of accreting black holes " The main technical result of this paper is to characterize the contracting isometries of a CAT(0) cube complex without any assumption on its local finiteness. Afterwards, we introduce the combinatorial boundary of a CAT(0) cube complex, and we show that contracting isometries are strongly related to isolated points at infinity when the complex is locally finite. This boundary turns out to appear naturally in the context of Guba and Sapir's diagram groups, and we apply our main criterion to determine precisely when an element of a diagram group induces a contracting isometry on the associated Farley cube complex. As a consequence, in some specific cases, we are able to deduce a criterion to determine precisely when a diagram group is acylindrically hyperbolic. ",Contracting isometries of CAT(0) cube complexes and acylindrical hyperbolicity of diagram groups " Brain tumor segmentation based on multi-modal magnetic resonance imaging (MRI) plays a pivotal role in assisting brain cancer diagnosis, treatment, and postoperative evaluations. Despite the inspiring performance achieved by existing automatic segmentation methods, complete multi-modal MRI data are often unavailable in real-world clinical applications due to quite a few uncontrollable factors (e.g. different imaging protocols, data corruption, and patient condition limitations), which lead to a large performance drop during practical applications. In this work, we propose a Deeply supervIsed knowledGE tranSfer neTwork (DIGEST), which achieves accurate brain tumor segmentation under different modality-missing scenarios.
Specifically, a knowledge transfer learning framework is constructed, enabling a student model to learn modality-shared semantic information from a teacher model pretrained with the complete multi-modal MRI data. To simulate all the possible modality-missing conditions under the given multi-modal data, we generate incomplete multi-modal MRI samples based on Bernoulli sampling. Finally, a deeply supervised knowledge transfer loss is designed to ensure the consistency of the teacher-student structure at different decoding stages, which helps the extraction of inherent and effective modality representations. Experiments on the BraTS 2020 dataset demonstrate that our method achieves promising results for the incomplete multi-modal MR image segmentation task. ",DIGEST: Deeply supervIsed knowledGE tranSfer neTwork learning for brain tumor segmentation with incomplete multi-modal MRI scans " Most deep learning based image inpainting approaches adopt an autoencoder or its variants to fill missing regions in images. Encoders are usually utilized to learn powerful representational spaces, which are important for dealing with sophisticated learning tasks. Specifically, in image inpainting tasks, masks of any shape can appear anywhere in images (i.e., free-form masks), forming complex patterns. It is difficult for encoders to capture such powerful representations under this complex situation. To tackle this problem, we propose a self-supervised Siamese inference network to improve the robustness and generalization. It can encode contextual semantics from full resolution images and obtain more discriminative representations. We further propose a multi-scale decoder with a novel dual attention fusion module (DAF), which can combine both the restored and known regions in a smooth way. This multi-scale architecture is beneficial for decoding discriminative representations learned by encoders into images layer by layer.
In this way, unknown regions will be filled naturally from outside to inside. Qualitative and quantitative experiments on multiple datasets, including facial and natural datasets (i.e., Celeb-HQ, Paris Street View, Places2 and ImageNet), demonstrate that our proposed method outperforms state-of-the-art methods in generating high-quality inpainting results. ",Free-Form Image Inpainting via Contrastive Attention Network " Context. Blazars are a subset of active galactic nuclei (AGN) with jets that are oriented along our line of sight. Variability and spectral energy distribution (SED) studies are crucial tools for understanding the physical processes responsible for observed AGN emission. Aims. We report peculiar behaviour in the bright gamma-ray blazar PKS 1424-418 and use its strong variability to reveal information about the particle acceleration and interactions in the jet. Methods. Correlation analysis of the extensive optical coverage by the ATOM telescope and nearly continuous gamma-ray coverage by the Fermi Large Area Telescope is combined with broadband, time-dependent modeling of the SED incorporating supplemental information from radio and X-ray observations of this blazar. Results. We analyse in detail four bright phases at optical-GeV energies. These flares of PKS 1424-418 show high correlation between these energy ranges, with the exception of one large optical flare that coincides with relatively low gamma-ray activity. Although the optical/gamma-ray behaviour of PKS 1424-418 shows variety, the multiwavelength modeling indicates that these differences can largely be explained by changes in the flux and energy spectrum of the electrons in the jet that are radiating. We find that for all flares the SED is adequately represented by a leptonic model that includes inverse Compton emission from external radiation fields with similar parameters. Conclusions.
Detailed studies of individual blazars like PKS 1424-418 during periods of enhanced activity in different wavebands are helping us identify underlying patterns in the physical parameters in this class of AGN. ",Unusual Flaring Activity in the Blazar PKS 1424-418 during 2008-2011 " Dynamic games arise when multiple agents with differing objectives control a dynamic system. They model a wide variety of applications in economics, defense, energy systems, etc. However, compared to single-agent control problems, the computational methods for dynamic games are relatively limited. As in the single-agent case, only specific dynamic games can be solved exactly, so approximation algorithms are required. In this paper, we show how to extend a recursive Newton's algorithm and the popular differential dynamic programming (DDP) for single-agent optimal control to the case of full-information non-zero sum dynamic games. In the single-agent case, the convergence of DDP is proved by comparison with Newton's method, which converges locally at a quadratic rate. We show that the iterates of Newton's method and DDP are sufficiently close for DDP to inherit the quadratic convergence rate of Newton's method. We also prove that both methods result in an open-loop Nash equilibrium and a local feedback $O(\epsilon^2)$-Nash equilibrium. Numerical examples are provided. ",Newton's Method and Differential Dynamic Programming for Unconstrained Nonlinear Dynamic Games " It has recently been suggested that, in the field, $\sim\!\!56\%$ of Sun-like stars ($0.8\,{\rm M}_{_\odot}\lesssim M_\star\lesssim 1.2\,{\rm M}_{_\odot}$) are single. We argue here that this suggestion may be incorrect, since it appears to be based on the multiplicity frequency of systems with Sun-like primaries, and therefore takes no account of Sun-like stars that are secondary (or higher-order) components in multiple systems.
When these components are included in the reckoning, it seems likely that only $\sim\!46\%$ of Sun-like stars are single. This estimate is based on a model in which the system mass function has the form proposed by Chabrier, with a power-law Salpeter extension to high masses; there is a flat distribution of mass ratios; and the probability that a system of mass $M$ is a binary is $\,0.50 + 0.46\log_{_{10}}\!\left(M/{\rm M}_{_\odot}\right)\,$ for $\,0.08\,{\rm M}_{_\odot}\leq M\leq 12.5\,{\rm M}_{_\odot}$, $\,0\,$ for $\,M<0.08\,{\rm M}_{_\odot}$, and $\,1\,$ for $\,M>12.5\,{\rm M}_{_\odot}$. The constants in this last relation are chosen so that the model also reproduces the observed variation of multiplicity frequency with primary mass. However, the more qualitative conclusion, that a minority of Sun-like stars are single, holds up for virtually all reasonable values of the model parameters. Parenthetically, it is still likely that the majority of {\it all} stars in the field are single, but that is because most M Dwarfs probably are single. ",Are the majority of Sun-like stars single? " In this paper we describe an automatic generator to support the data scientist in constructing, in a user-friendly way, dashboards from data represented as networks. The generator, called SBINet (Semantic for Business Intelligence from Networks), has a semantic layer that, through ontologies, describes the data that represents a network as well as the possible metrics to be calculated in the network. Thus, with SBINet, the stages of the dashboard construction process that use complex network metrics are facilitated and can be done by users who do not necessarily know about complex networks. ",Geração Automática de Painéis de Controle para Análise de Mobilidade Urbana Utilizando Redes Complexas The polarizability of a charged pion is estimated in the framework of the nonlocal chiral quark model of the Nambu--Jona-Lasinio type.
Nonlocality is described by quark form factors of the Gaussian type. It is shown that the polarizability in this model is very sensitive to the form of the nonlocality and the choice of model parameters. ,Charged pion polarizability in the nonlocal quark model of Nambu-Jona-Lasinio type We study the role of temperature and density inhomogeneities on the freeze-out of relativistic heavy ion collisions at CERN SPS. In particular, the impact on the particle abundances is investigated. The quality of the fits to the measured particle ratios in 158 AGeV Pb+Pb collisions significantly improves as compared to a homogeneous model. ,Inhomogeneities in the freeze-out of relativistic heavy ion collisions at CERN SPS " We describe a measurement of the mass of the top quark from the purely hadronic decay modes of t-tbar pairs using all-jet data produced in p-pbar collisions at sqrt{s} = 1.8 TeV at the Fermilab Tevatron Collider. The data, which correspond to an integrated luminosity of 110.2 pb^-1, were collected with the Dzero detector from 1992 to 1996. We find a top quark mass of 178.5 +/- 13.7 (stat) +/- 7.7 (syst) GeV/c^2. ",Measurement of the Top Quark Mass In All-Jet Events " Charge density waves (CDWs) are ubiquitous in under-doped cuprate superconductors. As a modulation of the valence electron density, CDWs in hole-doped cuprates possess both Cu-3d and O-2p orbital character owing to the strong hybridization of these orbitals near the Fermi level. Here, we investigate under-doped Bi$_2$Sr$_{1.4}$La$_{0.6}$CuO$_{6+\delta}$ using resonant inelastic X-ray scattering (RIXS) and find that a short-range CDW exists at both Cu and O sublattices in the copper-oxide (CuO2) planes with a comparable periodicity and correlation length. Furthermore, we uncover bond-stretching and bond-buckling phonon anomalies concomitant to the CDWs.
Compared to slightly over-doped Bi$_2$Sr$_{1.8}$La$_{0.2}$CuO$_{6+\delta}$, where neither CDWs nor phonon anomalies appear, we highlight that a sharp intensity anomaly is induced in the proximity of the CDW wavevector (QCDW) for the bond-buckling phonon, in concert with the diffuse intensity enhancement of the bond-stretching phonon at wavevectors much greater than QCDW. Our results provide a comprehensive picture of the quasi-static CDWs, their dispersive excitations, and associated electron-phonon anomalies, which are key for understanding the competing electronic instabilities in cuprates. ",Multi-orbital charge density wave excitations and concomitant phonon anomalies in Bi$_2$Sr$_2$LaCuO$_{6+\delta}$ " Discharge of hazardous substances into the marine environment poses a substantial risk to both public health and the ecosystem. In such incidents, it is imperative to accurately estimate the release strength of the source and reconstruct the spatio-temporal dispersion of the substances based on the collected measurements. In this study, we propose an integrated estimation framework to tackle this challenge, which can be used in conjunction with a sensor network or a mobile sensor for environment monitoring. We employ the fundamental convection-diffusion partial differential equation (PDE) to represent the general dispersion of a physical quantity in a non-uniform flow field. The PDE model is spatially discretised into a linear state-space model using the dynamic transient finite-element method (FEM) so that the characterisation of time-varying dispersion can be cast into the problem of inferring the model states from sensor measurements. We also consider imperfect sensing phenomena, including miss-detection and signal quantisation, which are frequently encountered when using a sensor network. This complicated sensor process introduces nonlinearity into the Bayesian estimation process.
A Rao-Blackwellised particle filter (RBPF) is designed to provide an effective solution by exploiting the linear structure of the state-space model, whereas the nonlinearity of the measurement model can be handled by Monte Carlo approximation with particles. The proposed framework is validated using a simulated oil spill incident in the Baltic sea with real ocean flow data. The results show the efficacy of the developed spatio-temporal dispersion model and estimation schemes in the presence of imperfect measurements. Moreover, the parameter selection process is discussed, along with some comparison studies to illustrate the advantages of the proposed algorithm over existing methods. ",Bayesian estimation and reconstruction of marine surface contaminant dispersion " The speed limit changes frequently throughout the transportation network, due to either safety (e.g., change in geometry) or congestion management (e.g., speed harmonization systems). Any abrupt reduction in the speed limit can create a shockwave that propagates upstream in traffic. Dealing with such an abrupt reduction in speed limit is particularly important while designing control laws for a platoon of automated vehicles from both stability and efficiency perspectives. This paper focuses on Adaptive Cruise Control (ACC) based platooning under a constant spacing policy, and investigates the possibility of designing a controller that ensures stability, while tracking a given target velocity profile that changes as a function of location. An ideal controller should maintain a constant spacing between successive vehicles, while tracking the desired velocity profile. The analytical investigations of this paper suggest that such a controller does not exist. 
",Assessing Strong String Stability of Constant Spacing Policy under Speed Limit Fluctuations " A unifying description of lattice potentials generated by aperiodic one-dimensional sequences is proposed in terms of their local reflection or parity symmetry properties. We demonstrate that the ranges and axes of local reflection symmetry possess characteristic distributional and dynamical properties which can be determined for every aperiodic binary lattice. A striking aspect of such a property is given by the return maps of sequential spacings of local symmetry axes, which typically traverse few-point symmetry orbits. This local symmetry dynamics allows for a classification of inherently different aperiodic lattices according to fundamental symmetry principles. Illustrating the local symmetry distributional and dynamical properties for several representative binary lattices, we further show that the renormalized axis spacing sequences follow precisely the particular type of underlying aperiodic order. Our analysis thus reveals that the long-range order of aperiodic lattices is characterized in a compellingly simple way by its local symmetry dynamics. ",Local symmetry dynamics in one-dimensional aperiodic lattices " The spin-orbit (SO) interaction, emerging naturally from the Relativistic Mean Field (RMF) theory is examined critically in the light of the recently measured excitation energy differences between the terminating states built on two different configurations for nuclei belonging to the lower pf shell. The calculations are carried out using the cranked RMF framework. To further probe the iso-vector dependence of the spin-orbit potential, the energy spacing between the g_{7/2} and h_{11/2} states in the Sb-chain is compared to experiment. It is found that the calculation at the quantitative level deviates strongly from the experiment. 
In particular, the balance of the iso-scalar and iso-vector strengths of the effective one-body SO potential indicates that additional terms like tensor couplings may be needed to account for the experimental data. ",Critical Survey of Isoscalar and Isovector Contributions to the Spin Orbit Potential in Relativistic Mean Field Theory " We develop a distributed algorithm for convex Empirical Risk Minimization, the problem of minimizing a large but finite sum of convex functions over networks. The proposed algorithm is derived from directly discretizing the second-order heavy-ball differential equation and results in an accelerated convergence rate, i.e., faster than distributed gradient descent-based methods for strongly convex objectives that may not be smooth. Notably, we achieve acceleration without resorting to the well-known Nesterov's momentum approach. We provide numerical experiments and contrast the proposed method with recently proposed optimal distributed optimization algorithms. ",Achieving Acceleration in Distributed Optimization via Direct Discretization of the Heavy-Ball ODE " The measurement of alloying core-level binding energy (CLBE) shifts has been used to give a precise meaning to the fundamental concept of charge transfer. Here, ab-initio density-functional calculations for the intermetallic compound MgAu are used to investigate models which try to make a connection between the core-level shifts and charge transfer. The calculated CLBE shifts agree well with experiment, and permit an unambiguous separation into initial-state and screening contributions. Interestingly, the screening contribution is large and cannot be neglected in any reasonable description. Comparison of the calculated results with the predictions of simple models shows that these models are not adequate to describe the realistic situation.
On the positive side, the accuracy of the density-functional calculations indicates that the combination of experiments with such calculations is a powerful tool to investigate unknown systems. ",Connection between charge transfer and alloying core-level shifts based on density-functional calculations " The fabrication of novel soft materials is an important scientific and technological challenge. We investigate the response of magnetic ellipsoidal particles adsorbed at fluid-fluid interfaces to external magnetic fields. By exploiting previously discovered first-order orientation phase transitions we show how to switch on and off dipolar capillary interactions between particles, leading to the formation of distinctive self-assembled structures and allowing dynamic control of the bottom-up fabrication of reconfigurable novel-structured materials. ",Assembling Ellipsoidal Particles at Fluid Interfaces using Switchable Dipolar Capillary Interactions In this review we summarize the ongoing effort to study extra-dimensional gauge theories with lattice simulations. In these models the Higgs field is identified with extra-dimensional components of the gauge field. The Higgs potential is generated by quantum corrections and is protected from divergences by the higher-dimensional gauge symmetry. Dimensional reduction to four dimensions can occur through compactification or localization. Gauge-Higgs unification models are often studied using perturbation theory. Numerical lattice simulations are used to go beyond these perturbative expectations and to include non-perturbative effects. We describe the known perturbative predictions and their fate in the strongly-coupled regime for various extra-dimensional models. ,Extra-dimensional models on the lattice We classify the complete hyperbolic 3-manifolds admitting a maximal cusp of volume at most 2.62.
We use this to show that the figure-8 knot complement is the unique 1-cusped hyperbolic 3-manifold with nine or more non-hyperbolic fillings; to show that the figure-8 knot complement and its sister are the unique hyperbolic 3-manifolds with minimal volume maximal cusps; and to extend results on determining low volume closed and cusped hyperbolic 3-manifolds. ,Hyperbolic 3-manifolds of low cusp volume " The scintillation mechanism in NaI:Tl crystals produces different pulse shapes that are dependent on the incoming particle type. The time distribution of scintillation light from nuclear recoil events decays faster than for electron recoil events, and this difference can be categorised using various Pulse Shape Discrimination (PSD) techniques. In this study, we measured nuclear and electron recoils in a NaI:Tl crystal, with electron equivalent energies between 2 and 40 keV. We report on a new PSD approach, based on an event-type likelihood; this outperforms the charge-weighted mean-time, which is the conventional metric for PSD in NaI:Tl. Furthermore, we show that a linear combination of the two methods improves the discrimination power at these energies. ",Pulse Shape Discrimination of low-energy nuclear and electron recoils for improved particle identification in NaI:Tl " Finding communities in evolving networks is a difficult task and raises issues different from the classic static detection case. We introduce an approach based on the recent vertex-centred paradigm. The proposed algorithm, named DynLOCNeSs, detects communities by scanning and evaluating each vertex neighbourhood, which can be done independently in a parallel way. It is done by means of a preference measure, using these preferences to handle community changes. We also introduce a new vertex neighbourhood preference measure, CWCN, more efficient than existing ones in the considered context. 
Experimental results show the relevance of this measure and the ability of the proposed approach to detect classical community evolution patterns such as grow-shrink and merge-split. ",Vertex-centred Method to Detect Communities in Evolving Networks " Very recently Berkovits and Vafa have argued that the $N{=}0$ string is a particular choice of background of the $N{=}1$ string. Under the assumption that the physical states of the $N{=}0$ string theory came essentially from the matter degrees of freedom, they proved that the amplitudes of both string theories agree. They also conjectured that this should persist whatever the form of the physical states. The aim of this note is to prove that both theories have the same spectrum of physical states without making any assumption on the form of the physical states. We also notice in passing that this result is reminiscent of a well-known fact in the theory of induced representations and we explore what repercussions this may have in the search for the universal string theory. ",On the Universal String Theory " Quantum annealing is a promising technique which leverages quantum mechanics to solve hard optimization problems. Considerable progress has been made in the development of a physical quantum annealer, motivating the study of methods to enhance the efficiency of such a solver. In this work, we present a quantum annealing approach to measure similarity among molecular structures. Implementing real-world problems on a quantum annealer is challenging due to hardware limitations such as sparse connectivity, intrinsic control error, and limited precision. In order to overcome the limited connectivity, a problem must be reformulated using minor-embedding techniques. Using a real data set, we investigate the performance of a quantum annealer in solving the molecular similarity problem. 
We provide experimental evidence that common practices for embedding can be replaced by new alternatives which mitigate some of the hardware limitations and enhance its performance. Common practices for embedding include minimizing either the number of qubits or the chain length, and determining the strength of ferromagnetic couplers empirically. We show that current criteria for selecting an embedding do not improve the hardware's performance for the molecular similarity problem. Furthermore, we use a theoretical approach to determine the strength of ferromagnetic couplers. Such an approach removes the computational burden of the current empirical approaches, and also results in hardware solutions that can benefit from simple local classical improvement. Although our results are limited to the problems considered here, they can be generalized to guide future benchmarking studies. ",Enhancing Quantum Annealing Performance for the Molecular Similarity Problem " We outline key elements of a theory that accounts for anomalous properties of the PuCoGa$_5$ and PuRhGa$_5$ compounds as a consequence of a two-body interference between two Kondo screening channels. Virtual valence fluctuations of the magnetic Pu configurations create two conduction channels of different symmetry. Using the symplectic large-N approach, we are able to demonstrate our pairing mechanism in an exactly solvable large-N limit. The critical temperature reaches its maximum when the energy levels of excited valence configurations are almost degenerate. The symmetry of the order parameter is determined by the product of the Wannier form factors in the interfering conduction channels. ",Superconductivity due to co-operative Kondo effect in Pu 115's " We study the decay of the neutral B meson to $K^* \gamma \gamma$ within the framework of the Standard Model, including long distance contributions. 
",Short and long distance contributions to $B \to K^* \gamma \gamma$ General results on convex bodies are reviewed and used to derive an exact closed-form parametric formula for the boundary of the geometric (Minkowski) sum of $k$ ellipsoids in $n$-dimensional Euclidean space. Previously this was done through iterative algorithms in which each new ellipsoid was added to an ellipsoid approximation of the sum of the previous ellipsoids. Here we provide one-shot formulas to add $k$ ellipsoids directly with no intermediate approximations required. This allows us to observe a new degree of freedom in the family of ellipsoidal bounds on the geometric sum. We demonstrate an application of these tools to compute the reachable set of a discrete-time dynamical system. ,Generalized Outer Bounds on the Finite Geometric Sum of Ellipsoids " Various upper and lower bounds are provided for the (angular) Kronecker constants of sets of integers. Some examples are provided where the bounds are attained. It is proved that 5/16 bounds the angular Kronecker constants of 3-element sets of positive integers. However, numerous examples suggest that the minimum upper bound is 1/4 for 3-element sets of positive integers. ",Upper and Lower Bounds for Kronecker Constants of Three-Element Sets of Integers " Standard big bang nucleosynthesis (SBBN) has been remarkably successful, and it may well be the correct and sufficient account of what happened. However, interest in variations from the standard picture comes from two sources: First, big bang nucleosynthesis can be used to constrain physics of the early universe. Second, there may be some discrepancy between predictions of SBBN and observations of abundances. Various alternatives to SBBN include inhomogeneous nucleosynthesis, nucleosynthesis with antimatter, and nonstandard neutrino physics. 
",Alternative Solutions to Big Bang Nucleosynthesis " In the field of machine learning (ML) for materials optimization, active learning algorithms, such as Bayesian Optimization (BO), have been leveraged for guiding autonomous and high-throughput experimentation systems. However, very few studies have evaluated the efficiency of BO as a general optimization algorithm across a broad range of experimental materials science domains. In this work, we evaluate the performance of BO algorithms with a collection of surrogate model and acquisition function pairs across five diverse experimental materials systems, namely carbon nanotube polymer blends, silver nanoparticles, lead-halide perovskites, as well as additively manufactured polymer structures and shapes. By defining acceleration and enhancement metrics for general materials optimization objectives, we find that for surrogate model selection, Gaussian Process (GP) with anisotropic kernels (automatic relevance determination, ARD) and Random Forests (RF) have comparable performance and both outperform the commonly used GP without ARD. We discuss the implicit distributional assumptions of RF and GP, and the benefits of using GP with anisotropic kernels in detail. We provide practical insights for experimentalists on surrogate model selection of BO during materials optimization campaigns. ",Benchmarking the Performance of Bayesian Optimization across Multiple Experimental Materials Science Domains " Wireless communication applications have acquired a vastly increasing range over the past decade. This rapidly increasing demand implies limitations on utilizing wireless resources. One of the most important resources in wireless communication is frequency spectrum. This thesis provides different solutions towards increasing the spectral efficiency. 
The first solution provided in this thesis is to use a more accurate optimization metric, maximal achievable rate (compared to the traditional metric, ergodic capacity), to optimize training data size in wireless communication. Training data symbols are symbols known in advance to the receiver, inserted in data packets and used by the receiver to acquire channel state information (CSI). Optimizing training data size with respect to our proposed tight optimization metric, we could achieve higher rates, especially for short packet and ultra reliable applications. Our second proposed solution to increase spectral efficiency is to design a multifunction communication and sensing platform utilizing a special training sequence design. We propose a platform where two training sequences are designed, one for the base station and the other for the user. By designing these two training sequences such that they are uncorrelated with each other, the base station is able to distinguish between the two training sequences. With one of the sequences specially designed for radar purposes (such that it has an impulse-like autocorrelation), the system is able to sense the environment and transmit and receive the communication data simultaneously. ",Efficient Use of Spectral Resources in Wireless Communication Using Training Data Optimization " We look at whether machine learning can predict the final objective function value of a difficult combinatorial optimisation problem from the input. Our context is the pattern reduction problem, one industrially important but difficult aspect of the cutting stock problem. Machine learning appears to have higher prediction accuracy than a na\""ive model, reducing mean absolute percentage error (MAPE) from 12.0% to 8.7%. ",Can ML predict the solution value for a difficult combinatorial problem? " We study the impact of primordial non-Gaussianity generated during inflation on the bias of halos using excursion set theory. 
We recapture the familiar result that the bias scales as $k^{-2}$ on large scales for local type non-Gaussianity but explicitly identify the approximations that go into this conclusion and the corrections to it. We solve the more complicated problem of non-spherical halos, for which the collapse threshold is scale dependent. ",Non-Gaussianity and Excursion Set Theory: Halo Bias " We study Bergman-Lorentz spaces on tube domains over symmetric cones, i.e. spaces of holomorphic functions which belong to Lorentz spaces $L(p, q).$ We establish boundedness and surjectivity of Bergman projectors from Lorentz spaces to the corresponding Bergman-Lorentz spaces and real interpolation between Bergman-Lorentz spaces. Finally we ask a question whose positive answer would enlarge the interval of parameters $p\in (1, \infty)$ such that the relevant Bergman projector is bounded on $L^p$ for cones of rank $r\geq 3.$ ",Bergman-Lorentz spaces on tube domains over symmetric cones " We present size measurements of 78 high-redshift ($z\geq 5.5$) galaxy candidates from the Reionisation Lensing Cluster Survey (RELICS). These distant galaxies are well-resolved due to the gravitational lensing power of foreground galaxy clusters, imaged by the Hubble Space Telescope (HST) and the Spitzer Space Telescope. We compute sizes using the forward-modeling code Lenstruction and account for magnification using public lens models. The resulting size-magnitude measurements confirm the existence of many small galaxies with effective radii $R_{\rm{eff}}<200$ pc in the early universe, in agreement with previous studies. In addition, we highlight compact and highly star-forming sources with star formation rate surface densities $\Sigma_\text{SFR}>10M_\odot\text{yr}^{-1}\text{kpc}^{-2}$ as possible Lyman continuum leaking candidates that could be major contributors to the process of reionisation. 
Future spectroscopic follow-up of these compact galaxies (e.g., with the James Webb Space Telescope) will further clarify their role in reionisation and the physics of early star formation. ",RELICS: Small Lensed $z\geq5.5$ Galaxies Selected as Potential Lyman Continuum Leakers " Recurrence networks are a powerful nonlinear tool for time series analysis of complex dynamical systems. While there are already many successful applications ranging from medicine to paleoclimatology, a solid theoretical foundation of the method has still been missing so far. Here, we interpret an $\varepsilon$-recurrence network as a discrete subnetwork of a ""continuous"" graph with uncountably many vertices and edges corresponding to the system's attractor. This step allows us to show that various statistical measures commonly used in complex network analysis can be seen as discrete estimators of newly defined continuous measures of certain complex geometric properties of the attractor on the scale given by $\varepsilon$. In particular, we introduce local measures such as the $\varepsilon$-clustering coefficient, mesoscopic measures such as $\varepsilon$-motif density, path-based measures such as $\varepsilon$-betweennesses, and global measures such as $\varepsilon$-efficiency. This new analytical basis for the so far heuristically motivated network measures also provides an objective criterion for the choice of $\varepsilon$ via a percolation threshold, and it shows that estimation can be improved by so-called node splitting invariant versions of the measures. We finally illustrate the framework for a number of archetypical chaotic attractors such as those of the Bernoulli and logistic maps, periodic and two-dimensional quasi-periodic motions, and for hyperballs and hypercubes, by deriving analytical expressions for the novel measures and comparing them with data from numerical experiments. 
More generally, the theoretical framework put forward in this work describes random geometric graphs and other networks with spatial constraints which appear frequently in disciplines ranging from biology to climate science. ",Analytical framework for recurrence-network analysis of time series " The aim of this paper is to propose an optimal control optimization algorithm for reconstructing admittivity distributions (i.e., both conductivity and permittivity) from multi-frequency micro-electrical impedance tomography. A convergent and stable optimization scheme is shown to be obtainable from multi-frequency data. The results of this paper have potential applicability in cancer imaging, cell culturing and differentiation, food sciences, and biotechnology. ",Admittivity imaging from multi-frequency micro-electrical impedance tomography " We consider the problem of parameter estimation in slowly varying regression models with sparsity constraints. We formulate the problem as a mixed integer optimization problem and demonstrate that it can be reformulated exactly as a binary convex optimization problem through a novel exact relaxation. The relaxation utilizes a new equality on Moore-Penrose inverses that convexifies the non-convex objective function while coinciding with the original objective on all feasible binary points. This allows us to solve the problem significantly more efficiently and to provable optimality using a cutting plane-type algorithm. We develop a highly optimized implementation of such algorithm, which substantially improves upon the asymptotic computational complexity of a straightforward implementation. We further develop a heuristic method that is guaranteed to produce a feasible solution and, as we empirically illustrate, generates high quality warm-start solutions for the binary optimization problem. 
We show, on both synthetic and real-world datasets, that the resulting algorithm outperforms competing formulations in comparable times across a variety of metrics including out-of-sample predictive performance, support recovery accuracy, and false positive rate. The algorithm enables us to train models with 10,000s of parameters, is robust to noise, and is able to effectively capture the underlying slowly changing support of the data generating process. ",Slowly Varying Regression under Sparsity " The superior intrinsic properties of graphene have been a key research focus for the past few years. However, external components, such as metallic contacts, serve not only as essential probing elements, but also give rise to an effective electron cavity, which can form the basis for new quantum devices. In previous studies, quantum interference effects were demonstrated in graphene heterojunctions formed by a top gate. Here phase coherent transport behavior is demonstrated in a simple two terminal graphene structure with clearly-resolved Fabry-Perot oscillations in sub-100 nm devices. By aggressively scaling the channel length down to 50 nm, we study the evolution of the graphene transistor from the channel-dominated diffusive regime to the contact-dominated ballistic regime. Key issues such as the current asymmetry, the question of Fermi level pinning by the contacts, the graphene screening determining the heterojunction barrier width, the scaling of minimum conductivity and of the on/off current ratio, are investigated. ",Quantum behavior of graphene transistors near the scaling limit " We examine the kinetics of a snowball that is gaining mass while it is rolling downhill. This dynamical system combines rotational effects with effects involving the variation of mass. In order to understand the consequences of both effects, we compare its behavior with that of objects in which such effects are absent. Environmental conditions are also included. 
We conclude that the comparative velocity of the snowball is very sensitive to the hill profile and the retardation factors. We emphasize that the increase of mass (inertia) could, surprisingly, diminish the retardation effect due to the drag force. Additionally, when an exponential trajectory is assumed, the maximum velocity of the snowball can be reached at an intermediate step of the trip. ",Comparative kinetics of the snowball with respect to other dynamical objects " Consider finite sequences $X_{[1,n]}=X_1\dots X_n$ and $Y_{[1,n]}=Y_1\dots Y_n$ of length $n$, consisting of i.i.d.\ samples of random letters from a finite alphabet, and let $S$ and $T$ be chosen i.i.d.\ randomly from the unit ball in the space of symmetric scoring functions over this alphabet augmented by a gap symbol. We prove a probabilistic upper bound of linear order in $n^{0.75}$ for the deviation of the score relative to $T$ of optimal alignments with gaps of $X_{[1,n]}$ and $Y_{[1,n]}$ relative to $S$. It remains an open problem to prove a lower bound. Our result contributes to the understanding of the microstructure of optimal alignments relative to one given scoring function, extending a theory begun by the first two authors. ",An Upper Bound on the Convergence Rate of a Second Functional in Optimal Sequence Alignment " We present a comprehensive study of the diverse properties of heteronuclear Rydberg molecules, placing a special emphasis on those composed of the light alkali atoms, Li, Na, and K. Electron-atom scattering phase shifts, which determine the strength of the molecular bond, are calculated at very low energy and then used in a spin-dependent theoretical model to calculate accurate Rydberg molecule potential energy curves. The wide parameter range accessible by combining the various properties of different alkali atoms often leads to hybridized electronic states accessible via one or two photon excitation schemes. 
This analysis of heteronuclear molecules leads to a prediction that the relative densities and spatial distributions of atoms in an ultracold mixture can be probed at controllable length scales via spectroscopic detection of these molecules. ",Formation of long-range Rydberg molecules in two-component ultracold gases " The locations of multicritical points on many hierarchical lattices are numerically investigated by the renormalization group analysis. The results are compared with an analytical conjecture derived by using the duality, the gauge symmetry and the replica method. We find that the conjecture does not give the exact answer but leads to locations slightly away from the numerically reliable data. We propose an improved conjecture to give more precise predictions of the multicritical points than the conventional one. This improvement is inspired by a new point of view coming from renormalization group and succeeds in deriving very consistent answers with many numerical data. ",Multicritical points for the spin glass models on hierarchical lattices " The continuous surge of environmental noise levels has become a vital challenge for humanity. Earlier studies have reported that prolonged exposure to loud noise may cause auditory and non-auditory disorders. Therefore, there is a growing demand for suitable noise barriers. Herein, we have investigated the acoustic performance of several commercially available curtain fabrics potentially used for sound insulation purposes. Thorough experimental investigations have been performed on the acoustical performance of PVC-coated polyester fabrics and 100% pure PVC sheets. The PVC-coated polyester fabric exhibited better sound insulation properties, particularly in the mid-to-high frequency range (600-1600 Hz), with a transmission loss of about 11 to 22 dB, while an insertion loss of >10 dB has been achieved. Also, the acoustic performance of multi-layer curtains has been investigated. 
These multi-layer curtains have shown superior acoustic properties to those of single-layer acoustic curtains. ",Investigation of lightweight acoustic curtains for mid-to-high frequency noise insulations " Metasurfaces have drawn significant attention due to their superior capability in tailoring electromagnetic waves over a wide frequency range, from microwave to visible light. Recently, programmable metasurfaces have demonstrated the ability of manipulating the amplitude or phase of electromagnetic waves in a programmable manner in real time, which renders them especially appealing for applications in wireless communications. To practically demonstrate the feasibility of programmable metasurfaces in future communication systems, in this paper, we design and realize a novel metasurface-based wireless communication system. By exploiting the dynamically controllable property of programmable metasurfaces, we first introduce the fundamental principle of the metasurface-based wireless communication system design. We then present the design, implementation and experimental evaluation of the proposed metasurface-based wireless communication system with a prototype, which realizes single carrier quadrature phase shift keying (QPSK) transmission over the air. In the developed prototype, the phase of the reflected electromagnetic wave of the programmable metasurface is directly manipulated in real time according to the baseband control signal, achieving a 2.048 Mbps data transfer rate with video streaming transmission over the air. Experimental results are provided to compare the performance of the proposed metasurface-based architecture against the conventional one. With a slight increase of the transmit power by 5 dB, the same bit error rate (BER) performance can be achieved as with the conventional system in the absence of channel coding. 
Such a result is encouraging considering that the metasurface-based system has the advantages of low hardware cost and simple structure, thus leading to a promising new architecture for wireless communications. ",Wireless Communications with Programmable Metasurface: Transceiver Design and Experimental Results We prove Bloom type two-weight inequalities for commutators of multilinear singular integral operators including Calder\'on-Zygmund operators and their dyadic counterparts. Such estimates are further extended to a general higher order multilinear setting. The proof involves a pointwise sparse domination of multilinear commutators. ,Two-weight inequalities for multilinear commutators " The fermion flavor structure is investigated by bilinear decomposition of the mass matrix after EW symmetry breaking, and the roles of the factorized matrices in flavor mixing and mass generation are explored. It is shown that flavor mixing can be addressed as an independent issue. On a new Yukawa basis, the minimal parameterization of flavor mixing is realized, containing two relative phases and two free $SO(2)_L$ rotation angles. The validity of the flavor mixing structure is checked in both the lepton and quark sectors. Under the decomposition of flavor mixing, fermion mass matrices are reconstructed under the hierarchy limit. A flat mass matrix with all elements equal to 1 arises naturally from the requirement that homology exists between up-type and down-type fermion mass matrices. Some hints of a flat matrix and flavor breaking are also discussed. ",The Structure of Flavor Mixing and Reconstruction of the Mass Matrix We define a new type of cipher that uses neither an easy-to-calculate but hard-to-invert mathematical function, as in RSA, nor a classical mono- or polyalphabetic cipher. 
,A New Type of Cipher " This paper proposes a novel integrated dynamic method based on Behavior Trees for planning and allocating tasks in mixed human-robot teams, suitable for manufacturing environments. The Behavior Tree formulation allows encoding a single job as a compound of different tasks with temporal and logic constraints. In this way, instead of the well-studied offline centralized optimization problem, the role allocation problem is solved with multiple simplified online optimization sub-problems, without complex and cross-schedule task dependencies. These sub-problems are defined as Mixed-Integer Linear Programs that, according to the worker-action related costs and the workers' availability, allocate the yet-to-execute tasks among the available workers. To characterize the behavior of the developed method, we opted to perform different simulation experiments in which the results of the action-worker allocation and the computational complexity are evaluated. The obtained results, due to the nature of the algorithm and to the possibility of simulating the agents' behavior, should also describe well how the algorithm performs in real experiments. ",An Integrated Dynamic Method for Allocating Roles and Planning Tasks for Mixed Human-Robot Teams " Speeding up development may produce technical debt, i.e., not-quite-right code for which the effort to make it right increases with time as a sort of interest. Developers may be aware of the debt as they admit it in their code comments. Literature reports that such a self-admitted technical debt survives for a long time in a program, but its impact on the quality of the code in the long term is not yet clear. We argue that self-admitted technical debt contains a number of different weaknesses that may affect the security of a program. Therefore, the longer a debt is not paid back, the higher is the risk that the weaknesses can be exploited. 
To discuss our claim and raise the developers' awareness of the vulnerability of the self-admitted technical debt that is not paid back, we explore the self-admitted technical debt in the Chromium C-code to detect any known weaknesses. In this preliminary study, we first mine the Common Weakness Enumeration repository to define heuristics for the automatic detection and fix of weak code. Then, we parse the C-code to find self-admitted technical debt and the code block it refers to. Finally, we use the heuristics to find weak code snippets associated with self-admitted technical debt and recommend their potential mitigation to developers. Such knowledge can be used to prioritize self-admitted technical debt for repair. A prototype has been developed and applied to the Chromium code. Initial findings report that 55\% of self-admitted technical debt code contains weak code of 14 different types. ",WeakSATD: Detecting Weak Self-admitted Technical Debt " Micro-video background music recommendation is a complicated task where the matching degree between videos and uploader-selected background music is a major issue. However, the selection of the user-generated content (UGC) is biased, owing to the knowledge limitations and historical music preferences of each uploader. In this paper, we propose a Debiased Cross-Modal (DebCM) matching model to alleviate the influence of such selection bias. Specifically, we design a teacher-student network to utilize the matching of segments of music videos, which is professional-generated content (PGC) with specialized music-matching techniques, to better alleviate the bias caused by insufficient knowledge of users. The PGC data is captured by a teacher network to guide the matching of uploader-selected UGC data of the student network by KL-based knowledge transfer. 
In addition, uploaders' personal preferences for music genres are identified as confounders that spuriously correlate music embeddings and background music selections, causing the learned recommender system to over-recommend music from the majority groups. To resolve such confounders in the UGC data of the student network, backdoor adjustment is utilized to deconfound the spurious correlation between music embeddings and prediction scores. We further utilize a Monte Carlo (MC) estimator with batch-level average as the approximation to avoid integrating over the entire confounder space calculated by the adjustment. Extensive experiments on the TT-150k-genre dataset demonstrate the effectiveness of the proposed method in addressing the selection bias. The code is publicly available at: \url{https://github.com/jing-1/DebCM}. ",Debiased Cross-modal Matching for Content-based Micro-video Background Music Recommendation " We study the problem of finding a minimum homology basis, that is, a shortest set of cycles that generates the $1$-dimensional homology classes with $\mathbb{Z}_2$ coefficients in a given simplicial complex $K$. This problem has been extensively studied in the last few years. For general complexes, the current best deterministic algorithm, by Dey et al., runs in $O(N^\omega + N^2 g)$ time, where $N$ denotes the number of simplices in $K$, $g$ denotes the rank of the $1$-homology group of $K$, and $\omega$ denotes the exponent of matrix multiplication. In this paper, we present two conceptually simple randomized algorithms that compute a minimum homology basis of a general simplicial complex $K$. The first algorithm runs in $\tilde{O}(m^\omega)$ time, where $m$ denotes the number of edges in $K$, whereas the second algorithm runs in $O(m^\omega + N m^{\omega-1})$ time. We also study the problem of finding a minimum cycle basis in an undirected graph $G$ with $n$ vertices and $m$ edges. The best known algorithm for this problem runs in $O(m^\omega)$ time. 
Our algorithm, which has a simpler high-level description but is slightly more expensive, runs in $\tilde{O}(m^\omega)$ time. ",Fast Algorithms for Minimum Cycle Basis and Minimum Homology Basis " The dispersion relation of energetic (few TeV) neutrinos traversing a medium is studied. We use the real-time formalism of thermal field theory and include the effects of the W gauge boson propagator. We then consider the MSW oscillations for cosmic neutrinos traversing the Earth, adopting neutrino parameter values suggested by the LSND results. It is found that the $\nu_\mu$ flux, for neutrinos passing through the center of the Earth, will appear reduced by 15% for energies around 10 TeV. ",TeV Neutrinos in a dense medium " The notion of set-valued Young tableaux was introduced by Buch in his study of the Littlewood-Richardson rule for stable Grothendieck polynomials. Knutson, Miller and Yong showed that the double Grothendieck polynomials of 2143-avoiding permutations can be generated by set-valued Young tableaux. In this paper, we introduce the structure of set-valued Rothe tableaux of permutations. Given the Rothe diagram $D(w)$ of a permutation $w$, a set-valued Rothe tableau of shape $D(w)$ is a filling of finite nonempty subsets of positive integers into the squares of $D(w)$ such that the rows are weakly decreasing and the columns are strictly increasing. We show that the double Grothendieck polynomials of 1432-avoiding permutations can be generated by set-valued Rothe tableaux. When restricted to 321-avoiding permutations, our formula specializes to the tableau formula for double Grothendieck polynomials due to Matsumura. Employing the properties of tableau complexes given by Knutson, Miller and Yong, we obtain two alternative tableau formulas for the double Grothendieck polynomials of 1432-avoiding permutations. 
",Set-valued Rothe Tableaux and Grothendieck Polynomials " In this paper, we consider a hierarchical distributed multi-task learning (MTL) system where distributed users wish to jointly learn different models orchestrated by a central server with the help of a layer of multiple relays. Since the users need to download different learning models in the downlink transmission, the distributed MTL suffers more severely from the communication bottleneck compared to the single-task learning system. To address this issue, we propose a coded hierarchical MTL scheme that exploits the connection topology and introduces coding techniques to reduce communication loads. It is shown that the proposed scheme can significantly reduce the communication loads both in the uplink and downlink transmissions between relays and the server. Moreover, we provide information-theoretic lower bounds on the optimal uplink and downlink communication loads, and prove that the gaps between achievable upper bounds and lower bounds are within the minimum number of connected users among all relays. In particular, when the network connection topology can be delicately designed, the proposed scheme can achieve the information-theoretic optimal communication loads. Experiments on real datasets show that our proposed scheme can reduce the overall training time by 17% $\sim$ 26% compared to the conventional uncoded scheme. ",Coded Distributed Computing for Hierarchical Multi-task Learning " No Hadean rocks have ever been found on Earth's surface except for zircons---evidence of continental crust, suggesting that Hadean continental crust existed but later disappeared. One hypothesis for the disappearance of the continental crust is excavation/melting by the Late Heavy Bombardment (LHB), a concentration of impacts in the last phase of the Hadean eon. In this paper, we calculate the effects of LHB on Hadean continental crust in order to investigate this hypothesis. 
Approximating the size-frequency distribution of the impacts by a power-law scaling with an exponent $\alpha$ as a parameter, we have derived semi-analytical expressions for the effects of LHB impacts. We calculated the total excavation/melting volume and area affected by the LHB from two lunar constraints on the LHB: the size of the largest basin formed during the LHB, and the density of craters larger than 20 km. We also investigated the effects of the value of $\alpha$. Our results show that the LHB does not excavate/melt all of the Hadean continental crust directly, but over 70% of the Earth's surface area can be covered by subsequent melts for a broad range of $\alpha$. If there have been no overturns of the continental crust until today, the LHB could be responsible for the absence of Hadean rocks because, in this case, most of the Hadean continental crust would not be exposed on the Earth's surface. ",Excavation and Melting of the Hadean Continental Crust by Late Heavy Bombardment " Neutron stars are among the most extreme objects in the universe, with densities that can exceed those of atomic nuclei and gravitational fields that are among the strongest known. Theoretical and observational research on neutron stars has revealed a wealth of information about their structural characteristics and physical properties. The structural characteristics of neutron stars are determined by the equations of state that describe the relationship between their density, pressure, and energy. These equations of state are still not well understood, and ongoing theoretical research aims to refine our understanding of the behavior of matter under these extreme conditions. Observational research on neutron stars, such as measurements of their masses and radii, can provide valuable constraints on the properties of the equation of state. The physical properties of neutron stars are also of great interest to researchers. 
Neutron stars have strong magnetic fields, which can produce observable effects such as pulsations and emission of X-rays and gamma rays. The surface temperature of neutron stars can also provide insight into their thermal properties, while observations of their gravitational fields can test predictions of Einstein's theory of general relativity. Observational research on neutron stars is carried out using a variety of techniques, including radio and X-ray telescopes, gravitational wave detectors, and optical telescopes. These observations are often combined with theoretical models to gain a more complete understanding of the properties of neutron stars. ",Structural characteristics and physical properties of neutron stars: theoretical and observational research " This paper is devoted to the construction and analysis of the Wigner functions for noncommutative quantum mechanics, their marginal distributions and star-products, following a technique developed earlier, {\it viz\/,} using the unitary irreducible representations of the group $\g$, which is the three fold central extension of the abelian group of $\mathbb R^4$. These representations have been exhaustively studied in earlier papers. The group $\g$ is identified with the kinematical symmetry group of noncommutative quantum mechanics of a system with two degrees of freedom. The Wigner functions studied here reflect different levels of non-commutativity -- both the operators of position and those of momentum not commuting, the position operators not commuting and finally, the case of standard quantum mechanics, obeying the canonical commutation relations only. ",Wigner Functions for Noncommutative Quantum Mechanics: a group representation based construction " Let $\mathscr{A}$ be an abelian category and $\mathscr{C}$ an additive full subcategory of $\mathscr{A}$. We provide a method to construct a proper $\mathscr{C}$-resolution (resp. 
coproper $\mathscr{C}$-coresolution) of one term in a short exact sequence in $\mathscr{A}$ from that of the other two terms. By using these constructions, we answer affirmatively an open question on the stability of the Gorenstein category $\mathcal{G}(\mathscr{C})$ posed by Sather-Wagstaff, Sharif and White; and also prove that $\mathcal{G}(\mathscr{C})$ is closed under direct summands. In addition, we obtain some criteria for computing the $\mathscr{C}$-dimension and the $\mathcal{G}(\mathscr{C)}$-dimension of an object in $\mathscr{A}$. ",Proper Resolutions and Gorenstein Categories " Current state-of-the-art classification and detection algorithms rely on supervised training. In this work we study unsupervised feature learning in the context of temporally coherent video data. We focus on feature learning from unlabeled video data, using the assumption that adjacent video frames contain semantically similar information. This assumption is exploited to train a convolutional pooling auto-encoder regularized by slowness and sparsity. We establish a connection between slow feature learning to metric learning and show that the trained encoder can be used to define a more temporally and semantically coherent metric. 
",Unsupervised Learning of Spatiotemporally Coherent Metrics " We study the following nonlinear Schrödinger system, which is related to Bose-Einstein condensates: \begin{displaymath} \begin{cases} -\Delta u +\lambda_1 u = \mu_1 u^{2^\ast-1}+\beta u^{\frac{2^\ast}{2}-1}v^{\frac{2^\ast}{2}}, & x\in \Omega,\\ -\Delta v +\lambda_2 v =\mu_2 v^{2^\ast-1}+\beta v^{\frac{2^\ast}{2}-1} u^{\frac{2^\ast}{2}}, & x\in \Omega,\\ u\ge 0,\ v\ge 0 \,\,\hbox{in $\Omega$},\quad u=v=0 \,\,\hbox{on $\partial\Omega$}. \end{cases} \end{displaymath} Here $\Omega\subset \mathbb{R}^N$ is a smooth bounded domain, $2^\ast:=\frac{2N}{N-2}$ is the Sobolev critical exponent, $-\lambda_1(\Omega)<\lambda_1,\lambda_2<0$, $\mu_1,\mu_2>0$ and $\beta\neq 0$, where $\lambda_1(\Omega)$ is the first eigenvalue of $-\Delta$ with the Dirichlet boundary condition. When $\beta=0$, this is just the well-known Brezis-Nirenberg problem. The special case $N=4$ was studied by the authors in (Arch. Ration. Mech. Anal. 205: 515-551, 2012). In this paper we consider {\it the higher dimensional case $N\ge 5$}. It is interesting that we can prove the existence of a positive least energy solution $(u_\beta, v_\beta)$ {\it for any $\beta\neq 0$} (which cannot hold in the special case $N=4$). We also study the limit behavior of $(u_\beta, v_\beta)$ as $\beta\to -\infty$, where phase separation is expected. In particular, $u_\beta-v_\beta$ will converge to {\it sign-changing solutions} of the Brezis-Nirenberg problem, provided $N\ge 6$. In the case $\lambda_1=\lambda_2$, the classification of the least energy solutions is also studied. It turns out that some quite different phenomena appear compared to the special case $N=4$. 
",Positive Least Energy Solutions and Phase Separation for Coupled Schrodinger Equations with Critical Exponent: Higher Dimensional Case " Interferometric photon-correlation measurements, which correspond to the second-order intensity cross-correlations between the two output ports of an unbalanced Michelson interferometer, are sensitive to both amplitude and phase fluctuations of an incoming beam of light. Here, we present the theoretical framework behind these measurements and show that they can be used to unambiguously differentiate a coherent wave undergoing dynamical amplitude and phase fluctuations from a chaotic state of light. This technique may thus be used to characterize the output of nanolasers and monitor the onset of coherent emission. ",Theory of Interferometric Photon-Correlation Measurements: Differentiating Coherent from Chaotic Light " Using a parameterised function for the mass loss at the base of the post-shock region, we have constructed a formulation for magnetically confined accretion flows which avoids singularities, such as the infinity in density, at the base associated with all previous formulations. With the further inclusion of a term allowing for the heat input into the base from the accreting white dwarf we are able also to obtain the hydrodynamic variables to match the conditions in the stellar atmosphere. (We do not, however, carry out a mutually consistent analysis for the match). Changes to the emitted X-ray spectra are negligible unless the thickness of mass leakage region at the base approaches or exceeds one percent of the height of the post-shock region. In this case the predicted spectra from higher-mass white dwarfs will be harder, and fits to X-ray data will predict lower white-dwarf masses than previous formulations. ",The lower boundary of the accretion column in magnetic cataclysmic variables " By quantum Monte Carlo simulations of bosons in gapped honeycomb lattices, we show the existence of bosonic edge states. 
For a single-layer honeycomb lattice, bosonic edge states can be controlled to appear, cross the gap and merge into bulk states by an on-site potential applied to the outermost sites of the boundary. On the bilayer honeycomb lattice, a bosonic edge state traversing the gap at half filling is demonstrated. The topological origin of the bosonic edge states is discussed in terms of a pseudo Berry curvature. The results will stimulate experimental studies of these exotic bosonic edge states with ultracold bosons trapped in honeycomb optical lattices. ",Bosonic edge states in gapped honeycomb lattices " Many dynamical systems consist of multiple, co-evolving subsystems (degrees of freedom). These subsystems often depend upon each other in a way that restricts the overall system's dynamics. How does this network of dependencies affect the system's thermodynamics? Prior studies in the stochastic thermodynamics of multipartite processes (MPPs) have approached this question by restricting the system to allow only one subsystem to change state at a time. However, in many real systems, such as chemical reaction networks or electronic circuits, multiple subsystems must change state together. Therefore, studies of MPPs do not apply to such systems. Here, we investigate the thermodynamics of composite processes, in which subsets of subsystems are allowed to change state simultaneously. These subsets correspond to the subsystems that interact with a single mechanism (e.g., a thermal or chemical reservoir) that is coupled to the system. An MPP is simply a (subcase of a) composite process in which all such subsets have cardinality one. We demonstrate the power of the composite systems framework to study the thermodynamics of multiple, co-evolving subsystems. In particular, we derive thermodynamic uncertainty relations for information flows in composite processes. We also derive strengthened speed limits for composite processes. 
Our results apply to a much broader class of dynamical systems than do results for MPPs, and could guide future studies of the thermodynamics of distributed computational systems. ",Stochastic thermodynamics of multiple co-evolving systems -- beyond multipartite processes " The Kitaev chain model exhibits topological order that manifests as topological degeneracy, Majorana edge modes and $Z_{2}$ topological invariance of the bulk spectrum. This model can be obtained from a transverse field Ising model (TFIM) using the Jordan-Wigner transformation. The TFIM has neither topological degeneracy nor any edge modes. Topological degeneracy associated with topological order is central to topological quantum computation. In this paper we will explore topological protection of the ground state manifold in the case of Majorana fermion models which exhibit $Z_{2}$ topological order. We will show that there are at least two different ways to understand this topological protection of Majorana fermion qubits: one way is based on fermionic mode operators and the other is based on anti-commuting symmetry operators. We will also show how these two different ways are related to each other. We provide a very general approach to understanding the topological protection of Majorana fermion qubits in the case of lattice Hamiltonians. ",$Z_{2}$ Topological Order and Topological Protection of Majorana Fermion Qubits We derive a sharp bound on the location of non-positive eigenvalues of Schroedinger operators on the halfline with complex-valued potentials. ,A sharp bound on eigenvalues of Schroedinger operators on the halfline with complex-valued potentials " Automatic segmentation of medical images is among the most demanded tasks in the medical information field, since it saves experts' time and avoids human error. 
In this work, a method based on Conditional Adversarial Networks and Fully Convolutional Networks is proposed for the automatic segmentation of liver MRIs. The proposed method, without any post-processing, achieved second place in the SIU Liver Segmentation Challenge 2018, whose data were provided by Dokuz Eylül University. In this paper, some improvements for the post-processing step are also proposed, and it is shown that with these additions, the method outperforms other baseline methods. ",Automatic Liver Segmentation with Adversarial Loss and Convolutional Neural Network " Recursive neural networks (RvNN) have been shown to be useful for learning sentence representations and have helped achieve competitive performance on several natural language inference tasks. However, recent RvNN-based models fail to learn simple grammar and meaningful semantics in their intermediate tree representation. In this work, we propose an attention mechanism over Tree-LSTMs to learn more meaningful and explainable parse tree structures. We also demonstrate the superior performance of our proposed model on natural language inference, semantic relatedness, and sentiment analysis tasks and compare it with other state-of-the-art RvNN-based methods. Further, we present a detailed qualitative and quantitative analysis of the learned parse trees and show that the discovered linguistic structures are more explainable, semantically meaningful, and grammatically correct than those of recent approaches. The source code of the paper is available at https://github.com/atul04/Explainable-Latent-Structures-Using-Attention. ",Unsupervised Learning of Explainable Parse Trees for Improved Generalisation " We investigate the linear stability of plane Poiseuille flow in 2D under slip boundary conditions. The slip s is defined as the tangential velocity at the wall in units of the maximal flow velocity. As it turns out, the critical Reynolds number depends smoothly on s but increases quite rapidly. 
",Critical curves of plane Poiseuille flow with slip boundary conditions " Star-forming galaxies have been predicted to contribute considerably to the diffuse gamma-ray background as they are guaranteed reservoirs of cosmic rays. Assuming that the hadronic interactions responsible for high-energy gamma rays also produce high-energy neutrinos and that O(100) PeV cosmic rays can be produced and confined in starburst galaxies, we here discuss the possibility that star-forming galaxies are also the main sources of the high-energy neutrinos observed by the IceCube experiment. First, we compute the diffuse gamma-ray background from star-forming galaxies, adopting the latest Herschel PEP/HerMES luminosity function and relying on the correlation between the gamma-ray and infrared luminosities reported by Fermi observations. Then we derive the expected intensity of the diffuse high-energy neutrinos from star-forming galaxies including normal and starburst galaxies. Our results indicate that starbursts, including those with active galactic nuclei and galaxy mergers, could be the main sources of the high-energy neutrinos observed by the IceCube experiment. We find that assuming a cosmic-ray spectral index of 2.1-2.2 for all starburst-like galaxies, our predictions can be consistent with both the Fermi and IceCube data, but larger indices readily fail to explain the observed diffuse neutrino flux. Taking the starburst high-energy spectral index as free parameter, and extrapolating from GeV to PeV energies, we find that the spectra harder than E^(-2.15) are likely to be excluded by the IceCube data, which can be more constraining than the Fermi data for this population. ","Star-forming galaxies as the origin of diffuse high-energy backgrounds: Gamma-ray and neutrino connections, and implications for starburst history" " In this article, an overview of the fabrication and properties of high quality La0.67Sr0.33MnO3 (LSMO) thin films is given. 
A high quality LSMO film combines a smooth surface morphology with a large magnetization and a small residual resistivity, while avoiding precipitates and surface segregation. In the literature, typically only a few of these issues are addressed. We therefore present a thorough characterization of our films, which were grown by pulsed laser deposition. The films were characterized with reflection high energy electron diffraction, atomic force microscopy, x-ray diffraction, magnetization and transport measurements, x-ray photoelectron spectroscopy and scanning transmission electron microscopy. The films have a saturation magnetization of 4.0 $\mu_B$/Mn, a Curie temperature of 350 K and a residual resistivity of 60 $\mu\Omega$cm. These results indicate that high quality films, combining both large magnetization and small residual resistivity, were realized. A comparison between different samples presented in the literature shows that focusing on a single property is insufficient for the optimization of the deposition process. For high quality films, all properties have to be addressed. For LSMO devices, the thin film quality is crucial for the device performance. Therefore, this research is important for the application of LSMO in devices. ",Optimized fabrication of high quality La0.67Sr0.33MnO3 thin films considering all essential characteristics " If the color Coulomb potential is confining, then the Coulomb field energy of an isolated color charge is infinite on an infinite lattice, even if the usual UV divergence is lattice regulated. A simple criterion for Coulomb confinement is that the expectation value of timelike link variables vanishes in Coulomb gauge, but it is unclear how this criterion is related to the spectrum of the corresponding Faddeev-Popov operator, which can be used to formulate a quite different criterion for Coulomb confinement. 
The purpose of this article is to connect the two seemingly different Coulomb confinement criteria, and explain the geometrical basis of the connection. ",Gauge Orbits and the Coulomb Potential " China is currently the country with the largest number of Android smartphone users. We use a combination of static and dynamic code analysis techniques to study the data transmitted by the preinstalled system apps on Android smartphones from three of the most popular vendors in China. We find that an alarming number of preinstalled system, vendor and third-party apps are granted dangerous privileges. Through traffic analysis, we find these packages transmit to many third-party domains privacy sensitive information related to the user's device (persistent identifiers), geolocation (GPS coordinates, network-related identifiers), user profile (phone number, app usage) and social relationships (e.g., call history), without consent or even notification. This poses serious deanonymization and tracking risks that extend outside China when the user leaves the country, and calls for a more rigorous enforcement of the recently adopted data privacy legislation. ",Android OS Privacy Under the Loupe -- A Tale from the East " Five fields located close to the center of the globular cluster NGC 104=47 Tuc were surveyed in a search for variable stars. We present V-band light curves for 42 variables. This sample includes 13 RR Lyr stars -- 12 of them belong to the Small Magellanic Cloud (SMC) and 1 is a background object from the galactic halo. Twelve eclipsing binaries were identified -- 9 contact systems and 3 detached/semi-detached systems. Seven eclipsing binaries are located in the blue straggler region on the cluster color-magnitude diagram (CMD) and four binaries can be considered main-sequence systems. One binary is probably a member of the SMC. Eight contact binaries are likely members of the cluster and one is most probably a foreground star. 
We show that for the surveyed region of 47 Tuc, the relative frequency of contact binaries is very low compared with other recently surveyed globular clusters. The sample of identified variables also includes 15 red variables with periods ranging from about 2 days to several weeks. A large fraction of these 15 variables probably belong to the SMC, but a few stars are likely to be red giants in 47 Tuc. VI photometry for about 50 000 stars from the cluster fields was obtained as a by-product of our survey. ",The Optical Gravitational Lensing Experiment. Variable Stars in Globular Clusters - IV. Fields 104A-E in 47 Tuc " We present a deep recurrent neural network architecture to solve a class of stochastic optimal control problems described by fully nonlinear Hamilton-Jacobi-Bellman (HJB) partial differential equations. Such PDEs arise when one considers stochastic dynamics characterized by uncertainties that are additive and control multiplicative. Stochastic models with the aforementioned characteristics have been used in computational neuroscience, biology, finance and aerospace systems, and provide a more accurate representation of actuation than models with additive uncertainty. Previous literature has established the inadequacy of the linear HJB theory and instead relies on a non-linear Feynman-Kac lemma, resulting in a second order forward-backward stochastic differential equations representation. However, the proposed solutions that use this representation suffer from compounding errors and computational complexity, leading to a lack of scalability. In this paper, we propose a deep learning based algorithm that leverages the second order Forward-Backward SDE representation and LSTM based recurrent neural networks to not only solve such Stochastic Optimal Control problems but also overcome the problems faced by previous approaches and scale well to high dimensional systems. 
The resulting control algorithm is tested on non-linear systems in robotics and biomechanics to demonstrate feasibility and superior performance compared to previous methods. ",Deep 2FBSDEs For Systems With Control Multiplicative Noise " We study the maximum of a Brownian motion with a parabolic drift; this is a random variable that often occurs as a limit of the maximum of discrete processes whose expectations have a maximum at an interior point. We give series expansions and integral formulas for the distribution and the first two moments, together with numerical values to high precision. ",The maximum of Brownian motion with parabolic drift " In the coming decade, thousands of stellar streams will be observed in the halos of external galaxies. What fundamental discoveries will we make about dark matter from these streams? As a first attempt to look at these questions, we model Magellan/Megacam imaging of Centaurus A's (Cen A) disrupting dwarf companion Dwarf 3 (Dw3) and its associated stellar stream, to find out what can be learned about the Cen A dark-matter halo. We develop a novel external galaxy stream-fitting technique and generate model stellar streams that reproduce the stream morphology visible in the imaging. We find that there are many viable stream models that fit the data well, with reasonable parameters, provided that Cen A has a halo mass M$_{200}$ $>4.70\times 10^{12}$ M$_{\odot}$. There is a second stream in Cen A's halo that is also reproduced within the context of this same dynamical model. However, stream morphology in the imaging alone does not uniquely determine the mass or mass distribution for the Cen A halo. In particular, the stream models with high likelihood show covariances between the inferred Cen A mass distribution, the inferred Dw3 progenitor mass, the Dw3 velocity, and the Dw3 line-of-sight position. 
We show that these degeneracies can be broken with radial-velocity measurements along the stream, and that a single radial velocity measurement puts a substantial lower limit on the halo mass. These results suggest that targeted radial-velocity measurements will be critical if we want to learn about dark matter from extragalactic stellar streams. ",Mapping Dark Matter with Extragalactic Stellar Streams: the Case of Centaurus A " We present a simple combinatorial framework for establishing approximate tensorization of variance and entropy in the setting of spin systems (a.k.a. undirected graphical models) based on balanced separators of the underlying graph. Such approximate tensorization results immediately imply as corollaries many important structural properties of the associated Gibbs distribution, in particular rapid mixing of the Glauber dynamics for sampling. We prove approximate tensorization by recursively establishing block factorization of variance and entropy with a small balanced separator of the graph. Our approach goes beyond the classical canonical path method for variance and the recent spectral independence approach, and allows us to obtain new rapid mixing results. As applications of our approach, we show that: 1. On graphs of treewidth $t$, the mixing time of the Glauber dynamics is $n^{O(t)}$, which recovers the recent results of Eppstein and Frishberg with improved exponents and simpler proofs; 2. On bounded-degree planar graphs, strong spatial mixing implies $\tilde{O}(n)$ mixing time of the Glauber dynamics, which gives a faster algorithm than the previous deterministic counting algorithm by Yin and Zhang. ",Combinatorial Approach for Factorization of Variance and Entropy in Spin Systems " PTF/M-dwarfs is a 100,000-target M-dwarf planetary transit survey, a Key Project of the Palomar Transient Factory (PTF) collaboration. 
The survey is sensitive to Jupiter-radius planets around all of the target stars, and has sufficient precision to reach Neptunes and super-Earths for the best targets. The Palomar Transient Factory is a fully-automated, wide-field survey aimed at a systematic exploration of the optical transient sky. The survey is performed using a new 7.26 square degree camera installed on the 48 inch Samuel Oschin telescope at Palomar Observatory. Each 92-megapixel R-band exposure contains about 3,000 M-dwarfs usable for planet detection. In each PTF observational season PTF/M-dwarfs searches for Jupiter-radius planets around almost 30,000 M-dwarfs, Neptune-radius planets around approximately 500 M-dwarfs, and super-Earths around 100 targets. The full survey is expected to cover more than 100,000 targets over the next several years. Photometric and spectroscopic followup operations are performed on the Palomar 60-inch, LCOGT, Palomar 200-inch, MDM and Keck telescopes. The survey has been running since mid-2009. We detail the survey design, the survey's data analysis pipeline and the performance of the first year of operations. ",PTF/M-dwarfs: A Large New M-dwarf Planetary Transit Survey " We calculate properties of neutron drops in external potentials using both quantum Monte Carlo and no-core full configuration techniques. The properties of the external wells are varied to examine different density profiles. We compare neutron drop results given by a selection of nuclear Hamiltonians, including realistic two-body interactions as well as several three-body forces. We compute a range of properties for the neutron drops: ground-state energies, spin-orbit splittings, excitation energies, radial densities and rms radii. We compare the equations of state for neutron matter for several of these Hamiltonians. Our results can be used as benchmarks to test other many-body techniques, and to constrain properties of energy-density functionals. 
",Properties of trapped neutrons interacting with realistic nuclear Hamiltonians " The goal of this study is to contribute to the physics underlying the material properties of suspensions that exhibit shear thickening through the ultrasonic characterization of suspensions of cornstarch in a density-matched solution. Ultrasonic measurements at frequencies in the range of 4 to 8 MHz of the speed of sound and the frequency-dependent attenuation properties are reported for concentrations of cornstarch in a density-matched aqueous (cesium chloride brine) suspension, ranging up to 40% cornstarch. The speed of sound is found to range from 1483 +/- 10 m/s in pure brine to 1765 +/- 9 m/s in the 40% cornstarch suspension. The bulk modulus of a granule of cornstarch is inferred to be (1.2 +/- 0.1) X 10^{10} Pa. The attenuation coefficient at 5 MHz increases from essentially zero in brine to 12.0 +/- 1.2 dB/cm at 40% cornstarch. ",Ultrasonic Attenuation and Speed of Sound of Cornstarch Suspensions " The recently-introduced self-learning Monte Carlo method is a general-purpose numerical method that speeds up Monte Carlo simulations by training an effective model to propose uncorrelated configurations in the Markov chain. We implement this method in the framework of continuous time Monte Carlo method with auxiliary field in quantum impurity models. We introduce and train a diagram generating function (DGF) to model the probability distribution of auxiliary field configurations in continuous imaginary time, at all orders of diagrammatic expansion. By using DGF to propose global moves in configuration space, we show that the self-learning continuous-time Monte Carlo method can significantly reduce the computational complexity of the simulation. 
",Self-Learning Monte Carlo Method: Continuous-Time Algorithm " The aim of this paper is to establish the framework of the enclosure method for some class of inverse problems whose governing equations are given by parabolic equations with discontinuous coefficients. The framework is given by considering a concrete inverse initial boundary value problem for a parabolic equation with discontinuous coefficients. The problem is to extract information about the location and shape of unknown inclusions embedded in a known isotropic heat conductive body from a set of the input heat flux across the boundary of the body and output temperature on the same boundary. In the framework the original inverse problem is reduced to an inverse problem whose governing equation has a large parameter. A list of requirements which enables one to apply the enclosure method to the reduced inverse problem is given. Two new results which can be considered as the application of the framework are given. In the first result the background conductive body is assumed to be homogeneous and a family of explicit complex exponential solutions are employed. Second an application of the framework to inclusions in an isotropic inhomogeneous heat conductive body is given. The main problem is the construction of the special solution of the governing equation with a large parameter for the background inhomogeneous body required by the framework. It is shown that, introducing another parameter which is called the virtual slowness and making it sufficiently large, one can construct the required solution which yields an extraction formula of the convex hull of unknown inclusions in a known isotropic inhomogeneous conductive body. ",The framework of the enclosure method with dynamical data and its applications " The screening of impurities in plasma with Bose-Einstein condensate of electrically charged bosons is considered. 
It is shown that the screened potential is drastically different from the usual Debye one. The polarization operator of photons in the plasma acquires infrared-singular terms at small photon momentum, and the screened potential drops off as a power of the distance and even shows an oscillating behavior, similar to the Friedel oscillations in plasma with degenerate fermions. The magnetic properties of the cosmological plasma with condensed W-bosons are also discussed. It is shown that W-bosons condense in a ferromagnetic state. This could lead to spontaneous magnetization of the primeval plasma. The created magnetic fields may seed the galactic and intergalactic magnetic fields observed in the present-day universe. ",Condensation of charged bosons in plasma physics and cosmology " Properties of X-ray radiation emitted from the polar caps of a radio pulsar depend not only on the cap temperature, size, and position, but also on the surface chemical composition, magnetic field, and the neutron star's mass and radius. Fitting the spectra and the light curves with neutron star atmosphere models enables one to infer these parameters. As an example, we present here results obtained from the analysis of the pulsed X-ray radiation of the nearby millisecond pulsar J0437-4715. In particular, we show that stringent constraints on the mass-to-radius ratio can be obtained if the orientations of the magnetic and rotation axes are known, e.g., from the radio polarization data. ",Mass-to-Radius Ratio for the Millisecond Pulsar J0437-4715 " The aim of this short note is to define the \emph{universal cubic fourfold} over certain loci of the moduli space. Then, we propose two methods to prove that it is unirational over the Hassett divisors $\mathcal{C}_d$, in the range $8\leq d \leq 42$. By applying this argument inductively, we are able to show that, in the same range of values, $\mathcal{C}_{d,n}$ is unirational for all integer values of $n$.
Finally, we observe that for infinitely many explicit values of $d$, the universal cubic fourfold over $\mathcal{C}_d$ cannot be unirational. ",Unirationality of certain universal families of cubic fourfolds " Eccentric ellipsoidal variables (aka heartbeat stars) are a class of eccentric binaries in which proximity effects, tidal distortion due to the time-dependent tidal potential in particular, lead to measurable photometric variability close to the periastron passage. The varying tidal potential may also give rise to tidally-excited oscillations (TEOs). TEOs may play an important role in the dynamical evolution of massive eccentric systems. Our study is aimed at the detection of TEOs and the characterisation of the long-term behaviour of their amplitudes and frequencies in the extreme-amplitude heartbeat star MACHO 80.7443.1718, consisting of a blue supergiant and a late O-type massive dwarf. We use two seasons of Transiting Exoplanet Survey Satellite (TESS) observations of the target to obtain new 30-min cadence photometry by means of the difference image analysis of TESS full-frame images. In order to extend the analysis to longer time scales, we supplement the TESS data with 30-year-long ground-based photometry of the target. We confirm the detection of the known $n=23$, 25, and 41 TEOs and announce the detection of two new TEOs, with $n=24$ and 230, in the photometry of MACHO 80.7443.1718. Amplitudes of all TEOs were found to vary on a time scale of years or months. For the $n=25$ TEO, amplitude and frequency changes are related, which may indicate that the main cause of the amplitude drop of this TEO in the TESS observations is the change of its frequency and the increase of the detuning parameter. The light curve of the $n=230$ TEO is strongly non-sinusoidal. Its high frequency may indicate that the oscillation is a strange mode.
We also find that the orbital period of the system decreases at a rate of about 11 s yr$^{-1}$, which can be explained by significant mass loss or mass transfer in the system, with a possible contribution from tidal dissipation. ","Tidally excited oscillations in MACHO 80.7443.1718: Changing amplitudes and frequencies, high-frequency tidally excited mode, and a decrease in the orbital period" " The self-similarity properties of fractals are studied in the framework of the theory of entire analytical functions and the $q$-deformed algebra of coherent states. Self-similar structures are related to dissipation and to noncommutative geometry in the plane. The examples of the Koch curve and logarithmic spiral are considered in detail. It is suggested that the dynamical formation of fractals originates from the coherent boson condensation induced by the generators of the squeezed coherent states, whose (fractal) geometrical properties thus become manifest. The macroscopic nature of fractals appears to emerge from microscopic coherent local deformation processes. ","Fractals, coherent states and self-similarity induced noncommutative geometry" " The polarized images of a synchrotron-emitting ring are studied in the spacetime of a rotating black hole in the Scalar-Tensor-Vector-Gravity (STVG) theory. The black hole carries an additional dimensionless MOG parameter describing its deviation from the Kerr black hole. The effects of the MOG parameter on the observed polarization vector and Stokes $Q-U$ loops depend heavily on the spin parameter, the magnetic field configuration, the fluid velocity and the observation inclination angle. For a fixed MOG parameter, the changes of the polarization vector in the image plane are similar to those in the Kerr black hole case. The comparison of the polarization images between the Kerr-MOG black hole and M87* implies that there remains some possibility for the STVG-MOG theory.
",Polarized image of a rotating black hole in Scalar-Tensor-Vector-Gravity theory " Six significant new methodological developments of the previously presented ""metastimuli architecture"" for human learning through machine learning of spatially correlated structural position within a user's personal information management system (PIMS), providing the basis for haptic metastimuli, are presented. These include architectural innovation, recurrent (RNN) artificial neural network (ANN) application, a variety of atom embedding techniques (including a novel technique we call ""nabla"" embedding inspired by linguistics), ANN hyper-parameter (one that affects the network but is not trained, e.g. the learning rate) optimization, and meta-parameter (one that determines the system performance but is not trained and is not a hyper-parameter, e.g. the atom embedding technique) optimization for exploring the large design space. A technique for using the system for automatic atom categorization in a user's PIMS is outlined. ANN training and hyper- and meta-parameter optimization results are presented and discussed in service of methodological recommendations. ","New methods for metastimuli: architecture, embeddings, and neural network optimization" " The use of ontologies and taxonomies contributes by providing means to define concepts, minimize ambiguity, improve interoperability and manage knowledge of the security domain. Thus, this paper presents a literature survey on ontologies and taxonomies concerning the Security Assessment domain. We carried it out to uncover initiatives that aim at formalizing concepts from the Information Security and Test and Assessment fields of research. We applied a systematic review approach in seven scientific databases. 138 papers were identified and divided into categories according to their main contributions, namely: Ontology, Taxonomy and Survey.
Based on their contents, we selected 47 papers on ontologies, 22 papers on taxonomies, and 11 papers on surveys. A taxonomy has been devised to be used in the evaluation of the papers. Summaries, tables, and a preliminary analysis of the selected works are presented. Our main contributions are: 1) an updated literature review, describing key characteristics, results, research issues, and application domains of the papers; and 2) the taxonomy for the evaluation process. We have also detected gaps in the Security Assessment literature that could be the subject of further studies in the field. This work is meant to be useful for security researchers who wish to adopt a formal approach in their methods and techniques. ",The Security Assessment Domain: A Survey of Taxonomies and Ontologies " Atoms and molecules, and in particular CO, are important coolants during the evolution of interstellar star-forming gas clouds. The presence of dust grains, which allow many chemical reactions to occur on their surfaces, strongly impacts the chemical composition of a cloud. At low temperatures, dust grains can lock up species from the gas phase, which freeze out and form ices. In this sense, dust can deplete important coolants. Our aim is to understand the effects of freeze-out on the thermal balance and the evolution of a gravitationally bound molecular cloud. For this purpose, we perform 3D hydrodynamical simulations with the adaptive mesh code FLASH. We simulate a gravitationally unstable cloud under two different conditions, with and without grain surface chemistry. We let the cloud evolve until one free-fall time is reached and track the thermal evolution and the abundances of species during this time. We see that at a number density of 10$^4$ cm$^{-3}$ most of the CO molecules are frozen on dust grains in the run with grain surface chemistry, thereby depriving the gas of its most important coolant. As a consequence, we find that the temperature of the gas rises up to $\sim$25 K.
The temperature drops once again due to gas-grain collisional cooling when the density reaches a few$\times$10$^4$ cm$^{-3}$. We conclude that grain surface chemistry not only affects the chemical abundances in the gas phase, but also leaves a distinct imprint in the thermal evolution that impacts the fragmentation of a star-forming cloud. As a final step, we present the equation of state of a collapsing molecular cloud that has grain surface chemistry included. ",The impact of freeze-out on collapsing molecular clouds " The API economy refers to the widespread integration of API (application programming interface) microservices, where software applications can communicate with each other, as a crucial element in business models and functions. The number of possible ways in which such a system could be used is huge. It is thus desirable to monitor the usage patterns and identify when the system is used in a way it has never been used before. This provides a warning to the system analysts, who can then ensure uninterrupted operation of the system. In this work we analyze both histograms and the call graph of API usage to determine whether the usage patterns of the system have shifted. We compare the application of nonparametric statistical and Bayesian sequential analysis to the problem. This is done in a way that overcomes the issue of repeated statistical tests and ensures statistical significance of the alerts. The technique was simulated and tested and proven effective in detecting the drift in various scenarios. We also mention modifications to the technique that decrease its memory requirements so that it can respond more quickly when the distribution drift occurs at a delay from when monitoring begins. ",Using sequential drift detection to test the API economy " Brown dwarfs and exoplanets provide unique atmospheric regimes that hold information about their formation routes and evolutionary states.
Modelling mineral cloud particle formation is key to preparing for missions and instruments like CRIRES+, JWST and ARIEL, as well as possible polarimetry missions like {\sc PolStar}. The aim is to support more detailed observations that demand greater understanding of microphysical cloud processes. We extend our kinetic cloud formation model, which treats nucleation, condensation, evaporation and settling of mixed-material cloud particles, to consistently model cloud particle-particle collisions. The new hybrid code, {\sc HyLandS}, is applied to a grid of {\sc Drift-Phoenix} (T, p)-profiles. Effective medium theory and Mie theory are used to investigate the optical properties. Turbulence is the main driving process of collisions, with collisions becoming the dominant process at the cloud base ($p>10^{-4}\,{\rm bar}$). Collisions produce one of three outcomes: fragmenting atmospheres ($\log_{10}(g)=3$), coagulating atmospheres ($\log_{10}(g)=5$, $T_{\rm eff} \leq 1800\, {\rm K}$) and condensational-growth-dominated atmospheres ($\log_{10}(g)=5$, $T_{\rm eff} > 1800\, {\rm K}$). The cloud particle opacity slope at optical wavelengths (HST) is increased by fragmentation, as are the silicate features at mid-infrared wavelengths. The hybrid moment-bin method {\sc HyLandS} demonstrates the feasibility of combining a moment and a bin method whilst ensuring element conservation. It provides a powerful and fast tool for capturing general trends of particle collisions, consistently with other microphysical processes. Collisions are important in exoplanet and brown dwarf atmospheres but cannot be assumed to be hit-and-stick only. The spectral effects of collisions complicate inferences of cloud particle size and material composition from observational data.
",Mineral Snowflakes on Exoplanets and Brown Dwarfs: Coagulation and Fragmentation of Cloud Particles with {\sc HyLandS} " This paper introduces a paradigm of smartphone-application-based disease diagnostics that may completely revolutionise the way healthcare services are provided. Although primarily aimed at alleviating the problems in rendering healthcare services during the coronavirus pandemic, the model can also be extended to identify, from a broad spectrum of pulmonary diseases, the exact disease the patient has contracted. The app takes as input chest X-ray images captured with the mobile camera, which are then relayed to the AI architecture on a cloud platform, and diagnoses the disease with state-of-the-art accuracy. Doctors with a smartphone can leverage the application to save the considerable time that standard COVID-19 tests take for preliminary diagnosis. The scarcity of training data and class imbalance issues were effectively tackled in our approach by the use of a Data Augmentation Generative Adversarial Network (DAGAN) and a model architecture based on a Convolutional Siamese Network with an attention mechanism. The backend model was tested for robustness using publicly available datasets under two different classification scenarios (Binary/Multiclass) with minimal and noisy data. The model achieved peak testing accuracies of 99.30% and 98.40% on the two respective scenarios, making it highly reliable for its users. On top of that, a semi-live training scenario was introduced, which helps improve the app performance over time as data accumulates. Overall, the problems of generalisability of complex models and data inefficiency are tackled through the model architecture. The app-based setting with semi-live training eases access to reliable healthcare in society, and also helps in effective research of rare diseases in a minimal-data setting.
",A Data-Efficient Deep Learning Based Smartphone Application For Detection Of Pulmonary Diseases Using Chest X-rays " This paper investigates the usage of kernel functions at the different layers in a convolutional neural network. We carry out extensive studies of their impact on convolutional, pooling and fully-connected layers. We notice that the linear kernel may not be sufficiently effective to fit the input data distributions, whereas high-order kernels are prone to over-fitting. This leads us to conclude that a trade-off between complexity and performance should be reached. We show how one can effectively leverage kernel functions by introducing more distortion-aware pooling layers, which reduce over-fitting while keeping track of the majority of the information fed into subsequent layers. We further propose Kernelized Dense Layers (KDL), which replace fully-connected layers and capture higher-order feature interactions. The experiments on conventional classification datasets, i.e. MNIST, FASHION-MNIST and CIFAR-10, show that the proposed techniques improve the performance of the network compared to classical convolution, pooling and fully connected layers. Moreover, experiments on fine-grained classification, i.e. facial expression databases, namely RAF-DB, FER2013 and ExpW, demonstrate that the discriminative power of the network is boosted, since the proposed techniques improve the network's awareness of slight visual details and allow it to reach state-of-the-art results. ",Kernel function impact on convolutional neural networks " MicroRNAs (miRNAs) are small endogenous regulatory molecules that modulate gene expression post-transcriptionally. Although differential expression of miRNAs has been implicated in many diseases (including cancers), the underlying mechanisms of action remain unclear. Because each miRNA can target multiple genes, miRNAs may potentially have functional implications for the overall behavior of entire pathways.
Here we investigate the functional consequences of miRNA dysregulation through an integrative analysis of miRNA and mRNA expression data using a novel approach that incorporates pathway information a priori. By searching for miRNA-pathway associations that differ between healthy and tumor tissue, we identify specific relationships at the systems-level which are disrupted in cancer. Our approach is motivated by the hypothesis that if a miRNA and pathway are associated, then the expression of the miRNA and the collective behavior of the genes in a pathway will be correlated. As such, we first obtain an expression-based summary of pathway activity using Isomap, a dimension reduction method which can articulate nonlinear structure in high-dimensional data. We then search for miRNAs that exhibit differential correlations with the pathway summary between phenotypes as a means of finding aberrant miRNA-pathway coregulation in tumors. We apply our method to cancer data using gene and miRNA expression datasets from The Cancer Genome Atlas (TCGA) and compare ${\sim}10^5$ miRNA-pathway relationships between healthy and tumor samples from four tissues (breast, prostate, lung, and liver). Many of the flagged pairs we identify have a biological basis for disruption in cancer. ",Integrative analysis reveals disrupted pathways regulated by microRNAs in cancer " The propagation of classical waves in the presence of a disordered medium is studied. We consider wave pulses containing a broad range of frequencies in terms of the configurationally averaged Green function of the system. Damped oscillations in the time-dependent response trailing behind the direct arrival of the pulse (coda) are predicted, the periods of which are governed by the density of scatterers. 
",Amplitude coda of classical waves in disordered media " Understanding why machine learning algorithms may fail is usually the task of the human expert that uses domain knowledge and contextual information to discover systematic shortcomings in either the data or the algorithm. In this paper, we propose a semantic referee, which is able to extract qualitative features of the errors emerging from deep machine learning frameworks and suggest corrections. The semantic referee relies on ontological reasoning about spatial knowledge in order to characterize errors in terms of their spatial relations with the environment. Using semantics, the reasoner interacts with the learning algorithm as a supervisor. In this paper, the proposed method of the interaction between a neural network classifier and a semantic referee shows how to improve the performance of semantic segmentation for satellite imagery data. ",Semantic Referee: A Neural-Symbolic Framework for Enhancing Geospatial Semantic Segmentation " The results of electronic structure calculations for the one-dimensional magnetic chain compound Ca3Co2O6 are presented. The calculations are based on density functional theory and the local density approximation and used the augmented spherical wave (ASW) method. Our results allow for deeper understanding of recent experimental findings. In particular, alternation of Co 3d low- and high-spin states along the characteristic chains is related to differences in the oxygen coordination at the inequivalent cobalt sites. Strong hybridization of the d states with the O 2p states lays ground for polarization of the latter and the formation of extended localized magnetic moments centered at the high-spin sites. In contrast, strong metal-metal overlap along the chains gives rise to intrachain ferromagnetic exchange coupling of the extended moments via the d_{3z^2-r^2} orbitals of the low-spin cobalt atoms. 
",Extended moment formation and magnetic ordering in the trigonal chain compound Ca3Co2O6 " Random fields are useful mathematical tools for representing natural phenomena with complex dependence structures in space and/or time. In particular, the Gaussian random field is commonly used due to its attractive properties and mathematical tractability. However, this assumption seems to be restrictive when dealing with counting data. To deal with this situation, we propose a random field with a Poisson marginal distribution by considering a sequence of independent copies of a random field with an exponential marginal distribution as 'inter-arrival times' in the counting renewal processes framework. Our proposal can be viewed as a spatial generalization of the Poisson process. Unlike the classical hierarchical Poisson Log-Gaussian model, our proposal generates a (non)-stationary random field that is mean square continuous and with Poisson marginal distributions. For the proposed Poisson spatial random field, analytic expressions for the covariance function and the bivariate distribution are provided. In an extensive simulation study, we investigate the weighted pairwise likelihood as a method for estimating the Poisson random field parameters. Finally, the effectiveness of our methodology is illustrated by an analysis of reindeer pellet-group survey data, where a zero-inflated version of the proposed model is compared with zero-inflated Poisson Log-Gaussian and Poisson Gaussian copula models. Supplementary materials for this article, including technical proofs and R code for reproducing the work, are available as an online supplement. ",Modelling Point Referenced Spatial Count Data: A Poisson Process Approach " We study numerically the nature of the diffusion process on a honeycomb and a quasi-lattice, where a point particle, moving along the bonds of the lattice, scatters from randomly placed scatterers on the lattice sites according to strictly deterministic rules.
For the honeycomb lattice fully occupied by fixed rotators, two (symmetric) isolated critical points appear to be present, with the same hyperscaling relation as for the square and the triangular lattices. No such points appear to exist for the quasi-lattice. A comprehensive comparison is made with the behavior on the previously studied square and triangular lattices. A great variety of diffusive behavior is found, ranging from propagation, super-diffusion, normal, quasi-normal and anomalous diffusion to the absence of diffusion. The influence of the scattering rules as well as of the lattice structure on the diffusive behavior of a point particle moving on all the lattices studied so far is summarized. ",Diffusion in Lorentz Lattice Gas Cellular Automata: the honeycomb and quasi-lattices compared with the square and triangular lattices Let X be a smooth projective hyperelliptic curve over an algebraically closed field k of prime characteristic p. The aim of this note is to find necessary and sufficient conditions for the automorphism group of the curve X to be lifted to characteristic zero. The results will be generalised to a certain family of curves that we call cyclic curves. ,Lifting Problem on Automorphism Groups of Cyclic Curves " This paper investigates a fundamental problem of scene understanding: how to parse a scene image into a structured configuration (i.e., a semantic object hierarchy with object interaction relations). We propose a deep architecture consisting of two networks: i) a convolutional neural network (CNN) extracting the image representation for pixel-wise object labeling and ii) a recursive neural network (RsNN) discovering the hierarchical object structure and the inter-object relations. Rather than relying on elaborative annotations (e.g., manually labeled semantic maps and relations), we train our deep model in a weakly-supervised learning manner by leveraging the descriptive sentences of the training images.
Specifically, we decompose each sentence into a semantic tree consisting of nouns and verb phrases, and apply these tree structures to discover the configurations of the training images. Once these scene configurations are determined, the parameters of both the CNN and RsNN are updated accordingly by back-propagation. The entire model training is accomplished through an Expectation-Maximization method. Extensive experiments show that our model is capable of producing meaningful scene configurations and achieving more favorable scene labeling results on two benchmarks (i.e., PASCAL VOC 2012 and SYSU-Scenes) compared with other state-of-the-art weakly-supervised deep learning methods. In particular, SYSU-Scenes contains more than 5000 scene images with their semantic sentence descriptions, which we created to advance research on scene parsing. ",Hierarchical Scene Parsing by Weakly Supervised Learning with Image Descriptions " From a detailed abundance analysis of >100 Hamburg/ESO candidate extremely metal-poor (EMP) stars we find 45 with [Fe/H] < -3.0 dex. We identify a heretofore unidentified group: Ca-deficient stars, with sub-solar [Ca/Fe] ratios and the lowest neutron-capture abundances; the Ca-deficient group comprises ~ 10% of the sample, excluding Carbon stars. Our radial velocity distribution shows that the carbon-enhanced stars with no s-process enhancements (CEMP-no) that do not show C2 bands are not preferentially binary systems. Ignoring Carbon stars, approximately 15% of our sample are strong (> 5 sigma) outliers in one or more elements between Mg and Ni; this rises to ~19% if very strong (>10 sigma) outliers for Sr and Ba are included. Examples include: HE0305-0554, with the lowest [Ba/H] known; HE1012-1540 and HE2323-0256, two (non-velocity variable) C-rich stars with very strong [Mg,Al/Fe] enhancements; and HE1226-1149, an extremely r-process-rich star.
",Normal and Outlying Populations of the Milky Way Stellar Halo at [Fe/H] < -2 " The question of selecting the ""best"" amongst different choices is a common problem in statistics. In drug development, our motivating setting, the question becomes, for example: what dose gives a pre-specified risk of toxicity, or which treatment gives the best response rate? Motivated by a recent development in weighted information measures theory, we propose an experimental design based on a simple and intuitive criterion which governs arm selection in an experiment with multinomial outcomes. The criterion leads to accurate arm selection without any parametric or monotonicity assumption. The asymptotic properties of the design are studied for different allocation rules, and the small-sample behaviour is evaluated in simulations in the context of Phase I and Phase II clinical trials with binary endpoints. We compare the proposed design to currently used alternatives and discuss its practical implementation. ",An information-theoretic approach for selecting arms in clinical trials " This article proposes a stochastic model to obtain the end-to-end delay law between two nodes of a Delay Tolerant Network (DTN). We focus on the commonly used Binary Spray and Wait (BSW) routing protocol and propose a model that can be applied to homogeneous or heterogeneous networks (i.e. when the inter-contact law parameter takes one or several values). To the best of our knowledge, this is the first model allowing one to estimate the delay distribution of the Binary Spray and Wait DTN protocol in heterogeneous networks. We first detail the model and propose a set of simulations to validate the theoretical results.
",Modelling the Delay Distribution of Binary Spray and Wait Routing Protocol " A relativistic and manifestly gauge-invariant soft-photon amplitude, which is consistent with the soft-photon theorem and satisfies the Pauli Principle, is derived for the proton-proton bremsstrahlung process. This soft-photon amplitude is the first two-u-two-t special amplitude to satisfy all theoretical constraints. The conventional Low amplitude can be obtained as a special case. It is demonstrated that previously proposed amplitudes for this process, both the (u,t) and (s,t) classes, violate the Pauli principle at some level. The origin of the Pauli principle violation is shown to come from two sources: (i) For the (s,t) class, the two-s-two-t amplitude transforms into the two-s-two-u amplitude under the interchange of two initial-state (or final-state) protons. (ii) For the (u,t) class, the use of an internal emission amplitude determined from the gauge-invariance constraint alone, without imposition of the Pauli principle, causes a problem. The resulting internal emission amplitude can depend upon an electromagnetic factor which is not invariant under the interchange of the two protons. ",The Pauli principle in the soft-photon approach to proton-proton bremsstrahlung We study analytical properties of hard exclusive process amplitudes. We find that QCD factorization for deeply virtual Compton scattering and hard exclusive vector meson production results in a subtracted dispersion relation with the subtraction constant determined by the Polyakov-Weiss $D$-term. ,Dispersion relations and QCD factorization in hard reactions " Motivated by the theory of isoparametric hypersurfaces, we study submanifolds whose tubular hypersurfaces have some constant ""higher order mean curvatures"". Here a $k$-th order mean curvature $Q_k$ ($k\geq1$) of a hypersurface $M^n$ is defined as the $k$-th power sum of the principal curvatures, or equivalently, of the shape operator.
Many necessary restrictions involving principal curvatures, higher order mean curvatures and Jacobi operators on such submanifolds are obtained, which, among other things, generalize some classical results in the theory of isoparametric hypersurfaces given by E. Cartan, K. Nomizu, H. F. M{\""u}nzner, Q. M. Wang, \emph{etc.} As an application, we finally obtain a geometrical filtration for the focal varieties of isoparametric functions on a complete Riemannian manifold. ",On submanifolds whose tubular hypersurfaces have constant mean curvatures " A quantum random number generator (QRNG) can generate true randomness by exploiting the fundamental indeterminism of quantum mechanics. Most approaches to QRNG employ single-photon detection technologies and are limited in speed. Here, we propose and experimentally demonstrate an ultrafast QRNG at a rate of over 6 Gb/s based on the quantum phase fluctuations of a laser operating near threshold. Moreover, we consider a potential adversary who has partial knowledge of the raw data and discuss how one can rigorously remove such partial knowledge with post-processing. We quantify the quantum randomness through min-entropy by modeling our system, and employ two extractors, Trevisan's extractor and Toeplitz hashing, to distill the randomness, which is information-theoretically provable. The simplicity and high speed of our experimental setup show the feasibility of a robust, low-cost, high-speed QRNG. ",Ultrafast quantum random number generation based on quantum phase fluctuations " Machine Reading Comprehension (MRC) models tend to take advantage of spurious correlations (also known as dataset bias or annotation artifacts in the research community). Consequently, these models may perform the MRC task without fully comprehending the given context and question, which is undesirable since it may result in low robustness against distribution shift.
The main focus of this paper is answer position bias, where a significant percentage of training questions have answers located solely in the first sentence of the context. We propose the Single-Sentence Reader as a new approach for addressing answer position bias in MRC. Remarkably, in our experiments with six different models, our proposed Single-Sentence Readers trained on the biased dataset achieve results that nearly match those of models trained on the normal dataset, demonstrating their effectiveness in addressing answer position bias. Our study also discusses several challenges our Single-Sentence Readers encounter and proposes a potential solution. ",Single-Sentence Reader: A Novel Approach for Addressing Answer Position Bias " Let g be a semi-simple Lie algebra and let h be a reductive subalgebra of maximal rank in g. Given any irreducible representation of g, consider its tensor product with the spin representation associated to the orthogonal complement of h in g. Gross, Kostant, Ramond, and Sternberg recently proved a generalization of the Weyl character formula which decomposes the signed character of this product representation in terms of the characters of a set of irreducible representations of h, called a multiplet. Kostant then constructed a formal h-equivariant Dirac operator on such product representations whose kernel is precisely the multiplet of h-representations corresponding to the given representation of g. We reproduce these results in the Kac-Moody setting for the extended loop algebras Lg and Lh. We prove a homogeneous generalization of the Weyl-Kac character formula, which now yields a multiplet of irreducible positive energy representations of Lh associated to any irreducible positive energy representation of Lg. We construct an Lh-equivariant operator, analogous to Kostant's Dirac operator, on the tensor product of a representation of Lg with the spin representation associated to the complement of Lh in Lg.
We then prove that the kernel of this operator gives the Lh-multiplet corresponding to the original representation of Lg. ",Multiplets of representations and Kostant's Dirac operator for equal rank loop groups " By means of micromagnetic spin dynamics calculations, a quantitative study is carried out to explore the mechanism of exchange bias (EB) in ferromagnetic (FM)/compensated antiferromagnetic (AFM) bilayers. Antiferromagnets with low and high Neel temperatures are both considered, and the crossover from negative to positive EB is found only in the case with low Neel temperature. We propose that the mechanism of EB in FM/compensated AFM bilayers is due to the symmetry breaking of the AFM that yields net ferromagnetic components. ",Exchange Bias in Ferromagnetic/Compensated Antiferromagnetic Bilayers " We report a mixed-signal data acquisition (DAQ) system for optically detected magnetic resonance (ODMR) of solid-state spins. The system is designed and implemented based on a Field-Programmable Gate Array (FPGA) chip assisted by high-speed peripherals. ODMR experiments often require high-speed mixed-signal data acquisition and processing for general and specific tasks. To this end, we realized a mixed-signal DAQ system which can acquire both analog and digital signals with precise hardware synchronization. The system consists of 4 analog channels (2 inputs and 2 outputs) and 16 optional digital channels, and works at up to a 125 MHz clock rate. With this system, we performed general-purpose ODMR and advanced lock-in detection experiments on nitrogen-vacancy (NV) centers, and the reported DAQ system shows excellent performance in both single and ensemble spin cases. This work provides a uniform DAQ solution for NV center quantum control systems and could be easily extended to other spin-based systems.
",Mixed-signal data acquisition system for optically detected magnetic resonance of solid-state spins " Mission critical data dissemination in massive Internet of things (IoT) networks imposes constraints on the message transfer delay between devices. Due to low power and communication range of IoT devices, data is foreseen to be relayed over multiple device-to-device (D2D) links before reaching the destination. The coexistence of a massive number of IoT devices poses a challenge in maximizing the successful transmission capacity of the overall network alongside reducing the multi-hop transmission delay in order to support mission critical applications. There is a delicate interplay between the carrier sensing threshold of the contention based medium access protocol and the choice of packet forwarding strategy selected at each hop by the devices. The fundamental problem in optimizing the performance of such networks is to balance the tradeoff between conflicting performance objectives such as the spatial frequency reuse, transmission quality, and packet progress towards the destination. In this paper, we use a stochastic geometry approach to quantify the performance of multi-hop massive IoT networks in terms of the spatial frequency reuse and the transmission quality under different packet forwarding schemes. We also develop a comprehensive performance metric that can be used to optimize the system to achieve the best performance. The results can be used to select the best forwarding scheme and tune the carrier sensing threshold to optimize the performance of the network according to the delay constraints and transmission quality requirements. ",Optimizing Mission Critical Data Dissemination in Massive IoT Networks " Reconstructing a high-resolution 3D model of an object is a challenging task in computer vision. Designing scalable and light-weight architectures is crucial while addressing this problem. 
Existing point-cloud based reconstruction approaches directly predict the entire point cloud in a single stage. Although this technique can handle low-resolution point clouds, it is not a viable solution for generating dense, high-resolution outputs. In this work, we introduce DensePCR, a deep pyramidal network for point cloud reconstruction that hierarchically predicts point clouds of increasing resolution. Towards this end, we propose an architecture that first predicts a low-resolution point cloud, and then hierarchically increases the resolution by aggregating local and global point features to deform a grid. Our method generates point clouds that are accurate, uniform and dense. Through extensive quantitative and qualitative evaluation on synthetic and real datasets, we demonstrate that DensePCR outperforms the existing state-of-the-art point cloud reconstruction works, while also providing a light-weight and scalable architecture for predicting high-resolution outputs. ",Dense 3D Point Cloud Reconstruction Using a Deep Pyramid Network " Existing private synthetic data generation algorithms are agnostic to downstream tasks. However, end users may have specific requirements that the synthetic data must satisfy. Failure to meet these requirements could significantly reduce the utility of the data for downstream use. We introduce a post-processing technique that improves the utility of the synthetic data with respect to measures selected by the end user, while preserving strong privacy guarantees and dataset quality. Our technique involves resampling from the synthetic data to filter out samples that do not meet the selected utility measures, using an efficient stochastic first-order algorithm to find optimal resampling weights. Through comprehensive numerical experiments, we demonstrate that our approach consistently improves the utility of synthetic data across multiple benchmark datasets and state-of-the-art synthetic data generation algorithms. 
",Post-processing Private Synthetic Data for Improving Utility on Selected Measures " We present a construction of the most general BPS black holes of STU supergravity (${\cal N}=2$ supersymmetric $D=4$ supergravity coupled to three vector super-multiplets) with arbitrary asymptotic values of the scalar fields. These solutions are obtained by acting with a subset of of the global symmetry generators on STU BPS black holes with zero values of the asymptotic scalars, both in the U-duality and the heterotic frame. The solutions are parameterized by fourteen parameters: four electric and four magnetic charges, and the asymptotic values of the six scalar fields. We also present BPS black hole solutions of a consistently truncated STU supergravity, which are parameterized by two electric and two magnetic charges and two scalar fields. These latter solutions are significantly simplified, and are very suitable for further explicit studies. We also explore a conformal inversion symmetry of the Couch-Torrence type, which maps any member of the fourteen-parameter family of BPS black holes to another member of the family. Furthermore, these solutions are expected to be valuable in the studies of various swampland conjectures in the moduli space of string compactifications. ",Conformal Symmetries for Extremal Black Holes with General Asymptotic Scalars in STU Supergravity " We select and characterise a sample of massive (log(M$_{*}/$M$_{\odot})>10.6$) quiescent galaxies (QGs) at $33$. We compute median rest-frame SEDs for our sample and find the median quiescent galaxy at $3100\,{\rm GeV}$) $\gamma$-ray emission up to the TeV range by the H.E.S.S. experiment. This makes AP Librae one of the few VHE emitters of the LBL type. The measured spectrum yields a flux of $(8.8 \pm 1.5_{\rm stat} \pm 1.8_{\rm sys}) \times 10^{-12}\ {\rm cm}^{-2} {\rm s}^{-1}$ above 130 GeV and a spectral index of $\Gamma = 2.65\pm0.19_{\rm stat}\pm0.20_{\rm sys}$. 
This study also makes use of \textit{Fermi}-LAT observations in the high-energy (HE, E$>$100 MeV) range, providing the longest continuous light curve (5 years) ever published on this source. The source underwent a flaring event between MJD 56306-56376 in the HE range, with a flux increase by a factor of 3.5 in the 14-day bin light curve and no significant variation in spectral shape with respect to the low-flux state. While the H.E.S.S. and (low-state) \textit{Fermi}-LAT fluxes are in good agreement where they overlap, a spectral curvature between the steep VHE spectrum and the \textit{Fermi}-LAT spectrum is observed. The maximum of the $\gamma$-ray emission in the spectral energy distribution is located below the GeV energy range. ",The high-energy $\gamma$-ray emission of AP Librae " We review the current status of calculations of the HQET field anomalous dimension and the cusp anomalous dimension. In particular, we give the results at 4 loops for the quartic Casimir contribution, and for the full QED case, up to $\varphi^6$ in the small-angle expansion. Furthermore, we discuss the leading terms in the anti-parallel lines limit at four loops. ",Four-loop results for the cusp anomalous dimension " In the Color Glass Condensate formalism, we evaluate the 3-dipole correlator up to order $\frac{1}{N_c^4}$, with $N_c$ being the number of colors, and compute the azimuthal cumulant $c_{123}$ for 3-particle production. In addition, we discuss the patterns appearing in the $n$-dipole formula in terms of $\frac{1}{N_c}$ expansions. This allows us to conjecture the $N_c$ scaling of $c_n\{m\}$, which is cross-checked by our calculation of $c_2\{4\}$ in the dilute limit. ",Multiparticle azimuthal angular correlations in $pA$ collisions " We investigate exotic supersolid phases in the extended Bose-Hubbard model with infinite projected entangled-pair state, numerical exact diagonalization, and mean-field theory.
We demonstrate that many different supersolid phases can be generated by changing the signs of the hopping terms, and that the interactions, along with the frustration of the hopping terms, are important for stabilizing those supersolid states. We discuss, from the mean-field point of view, the effect of frustration introduced by the competition of hopping terms in the supersolid phases. This gives a clearer picture of the mechanism by which the underlying superfluid/supersolid states are formed. With this knowledge, we predict and realize the $d$-wave superfluid, which shares the same pairing symmetry as high-$T_c$ materials, and its extended phases. We believe that our results contribute to a preliminary understanding of the desired target phases in real-world experimental systems. ",Frustration-Induced Supersolid Phases of Extended Bose-Hubbard Model in the Hard-Core Limit " Multimodal demand forecasting aims at predicting product demand utilizing visual, textual, and contextual information. This paper proposes a method for multimodal product demand forecasting using convolutional, graph-based, and transformer-based architectures. Traditional approaches to demand forecasting rely on historical demand, product categories, and additional contextual information such as seasonality and events. However, these approaches have several shortcomings, such as the cold start problem, which makes it difficult to predict product demand until sufficient historical data is available for a particular product, and an inability to properly deal with category dynamics. By incorporating multimodal information, such as product images and textual descriptions, our architecture aims to address the shortcomings of traditional approaches and outperform them. The experiments conducted on a large real-world dataset show that the proposed approach effectively predicts demand for a wide range of products.
The multimodal pipeline presented in this work enhances the accuracy and reliability of the predictions, demonstrating the potential of leveraging multimodal information in product demand forecasting. ",Multimodal Temporal Fusion Transformers Are Good Product Demand Forecasters " In this article, we consider a dynamic model of a three-phase power system including nonlinear generator dynamics, transmission line dynamics, and static nonlinear loads. We define a synchronous steady-state behavior which corresponds to the desired nominal operating point of a power system and obtain necessary and sufficient conditions on the control inputs, load model, and transmission network, under which the power system admits this steady-state behavior. We arrive at a separation between the steady-state conditions of the transmission network and generators, which allows us to recover the steady-state of the entire power system solely from a prescribed operating point of the transmission network. Moreover, we constructively obtain necessary and sufficient steady-state conditions based on network balance equations typically encountered in power flow analysis. Our analysis results in several necessary conditions that any power system control strategy needs to satisfy. ",On the steady-state behavior of a nonlinear power system model " To support emerging applications ranging from holographic communications to extended reality, next-generation mobile wireless communication systems require ultra-fast and energy-efficient (UFEE) baseband processors. Traditional complementary metal-oxide-semiconductor (CMOS)-based baseband processors face two challenges in transistor scaling and the von Neumann bottleneck. To address these challenges, in-memory computing-based baseband processors using resistive random-access memory (RRAM) present an attractive solution. 
In this paper, we propose and demonstrate RRAM-based in-memory baseband processing for the widely adopted multiple-input-multiple-output orthogonal frequency division multiplexing (MIMO-OFDM) air interface. Its key feature is to execute the key operations, including the discrete Fourier transform (DFT) and MIMO detection using linear minimum mean square error (L-MMSE) and zero forcing (ZF), in one step. In addition, RRAM-based channel estimation as well as mapper/demapper modules are proposed. Through prototyping and simulations, we demonstrate that the full-fledged RRAM-based communication system can significantly outperform its CMOS-based counterpart in terms of speed and energy efficiency, by $10^3$ and $10^6$ times, respectively. The results pave a potential pathway for RRAM-based in-memory computing to be implemented in the era of sixth generation (6G) mobile communications. ",Realizing Ultra-Fast and Energy-Efficient Baseband Processing Using Analogue Resistive Switching Memory " The global $SU(3)$ color symmetry and its physical consequences are discussed. The N\""{o}ther current is actually governed by the conserved matter current of color charges if the color field generated by this charge is properly polarized. The color field strength of a charge can have a uniform part due to the nontrivial QCD vacuum field and the nonzero gluon condensate, which implies that the self-energy of a system with a net color charge is infinite, so that such a system cannot exist as a free state. This is precisely what color confinement means. Accordingly, the Cornell-type potential, with the feature of Casimir scaling, is derived for a color singlet system composed of a static color charge and an anti-charge. The uniform color field also implies that a hadron has a minimal size and a minimal energy.
Furthermore, the global $SU(3)$ color symmetry requires that the minimal irreducible color singlet systems can only be $q\bar{q}$, $qqq$, $gg$, $ggg$, $q\bar{q}g$, $qqqg$ and $\bar{q}\bar{q}\bar{q}g$, etc., so that multi-quark systems can only exist as molecular configurations if there are no other binding mechanisms. ",Confinement and the Global $SU(3)$ Color Symmetry The odderon equation is studied in terms of the variable suggested by the modular invariance of the 3-Reggeon system. The odderon charge is identified with the cross-product of three conformal spins. A complete set of commuting operators, h^2 and q, is diagonalized, and the quantization conditions for the eigenvalues of the odderon charge q are solved for arbitrary conformal weight h. ,Spectrum of the Odderon Charge for Arbitrary Conformal Weights " We study a primordial black hole (PBH) formation scenario based on the Affleck-Dine (AD) mechanism and investigate two PBH mass regions: $M \sim 30 M_\odot$, motivated by the LIGO-Virgo observations of binary black hole mergers, and $M \gtrsim 10^4 M_\odot$, motivated by observations of supermassive black holes at the centers of galaxies. In previous studies, it has been considered that inhomogeneous AD baryogenesis generates regions with a large baryon asymmetry, some of which collapse into PBHs. In this paper, we show that this scenario is severely constrained by the baryon asymmetry remaining outside the PBHs, which would spoil the success of big bang nucleosynthesis. We then propose an alternative scenario where AD leptogenesis results in the inhomogeneous formation of Q-balls with lepton charges, which collapse into PBHs. As a result, we find that our scenario can explain the favored PBH abundance without conflicting with the observational constraints.
",Revisiting the Affleck-Dine mechanism for primordial black hole formation We prove a Girsanov identity on the Poisson space for anticipating transformations that satisfy a strong quasi-nilpotence condition. Applications are given to the Girsanov theorem and to the invariance of Poisson measures under random transformations. The proofs use combinatorial identities for the central moments of Poisson stochastic integrals. ,Girsanov identities for Poisson measures under quasi-nilpotent transformations " Interested in formalizing the generation of fast running code for linear algebra applications, the authors show how an index-free, calculational approach to matrix algebra can be developed by regarding matrices as morphisms of a category with biproducts. This shifts the traditional view of matrices as indexed structures to a type-level perspective analogous to that of the pointfree algebra of programming. The derivation of fusion, cancellation and abide laws from the biproduct equations makes it easy to calculate algorithms implementing matrix multiplication, the central operation of matrix algebra, ranging from its divide-and-conquer version to its vectorization implementation. From errant attempts to learn how particular products and coproducts emerge from biproducts, not only blocked matrix algebra is rediscovered but also a way of extending other operations (e.g. Gaussian elimination) blockwise, in a calculational style, is found. The prospect of building biproduct-based type checkers for computer algebra systems such as MatlabTM is also considered. ",Typing linear algebra: A biproduct-oriented approach " We examine the collider and dark matter phenomenology of the Standard Model extended by a hypercharge-zero SU(2) triplet scalar and gauge singlet scalar. In particular, we study the scenario where the singlet and triplet are both charged under a single $\mathbb{Z}_2$ symmetry. 
We find that such an extension is capable of generating the observed dark matter density, while also modifying the collider phenomenology such that the lower bound on the mass of the triplet is smaller than in minimal triplet scalar extensions of the Standard Model. A high triplet mass is in tension with the parameter space that leads to novel electroweak phase transitions in the early universe. Therefore, the lower triplet masses that are permitted in this extended model are of particular importance for the prospects of successful electroweak baryogenesis and the generation of gravitational waves from early universe phase transitions. ",A Real Triplet-Singlet Extended Standard Model: Dark Matter and Collider Phenomenology " The photoluminescence dynamics of a microscopic gas of indirect excitons trapped in coupled quantum wells is probed at very low bath temperature (approximately 350 mK). Our experiments reveal the nonlinear energy relaxation characteristics of indirect excitons. In particular, we observe that the exciton dynamics is strongly correlated with the screening of structural disorder by repulsive exciton-exciton interactions. For our experiments, where two-dimensional excitonic states are gradually defined, the distinctive enhancement of the exciton scattering rate towards the lowest-energy states with increasing density does not unambiguously reveal quantum statistical effects such as Bose stimulation. ",Quantum Signature Blurred by Disorder in Indirect Exciton Gases " Artin's representation is an injective homomorphism from the braid group $B_n$ on $n$ strands into $\operatorname{Aut}\mathbb{F}_n$, the automorphism group of the free group $\mathbb{F}_n$ on $n$ generators. The representation induces maps $B_n\to\operatorname{Aut}C^*_r(\mathbb{F}_n)$ and $B_n\to\operatorname{Aut}C^*(\mathbb{F}_n)$ into the automorphism groups of the corresponding group $C^*$-algebras of $\mathbb{F}_n$.
These maps also have natural restrictions to the pure braid group $P_n$. In this paper, we consider twisted versions of the actions by cocycles with values in the circle, and discuss the ideal structure of the associated crossed products. Additionally, we make use of Artin's representation to show that the braid groups $B_\infty$ and $P_\infty$ on infinitely many strands are both $C^*$-simple. ",Dynamical systems and operator algebras associated to Artin's representation of braid groups " We design numerical schemes for a class of slow-fast systems of stochastic differential equations, where the fast component is an Ornstein-Uhlenbeck process and the slow component is driven by a fractional Brownian motion with Hurst index $H>1/2$. We establish the asymptotic preserving property of the proposed scheme: when the time-scale parameter goes to $0$, a limiting scheme which is consistent with the averaged equation is obtained. With this numerical analysis point of view, we thus illustrate the recently proved averaging result for the considered SDE systems and the main differences with the standard Wiener case. ",Asymptotic preserving schemes for SDEs driven by fractional Brownian motion in the averaging regime " In the application of deep learning on optical coherence tomography (OCT) data, it is common to train classification networks using 2D images originating from volumetric data. Given the micrometer resolution of OCT systems, consecutive images are often very similar in both visible structures and noise. Thus, an inappropriate data split can result in overlap between the training and testing sets, with a large portion of the literature overlooking this aspect. In this study, the effect of improper dataset splitting on model evaluation is demonstrated for three classification tasks using three OCT open-access datasets extensively used, Kermany's and Srinivasan's ophthalmology datasets, and AIIMS breast tissue dataset. 
Results show that the classification performance is inflated by 0.07 up to 0.43 in terms of Matthews Correlation Coefficient (accuracy: 5% to 30%) for models tested on datasets with improper splitting, highlighting the considerable effect of dataset handling on model evaluation. This study intends to raise awareness of the importance of dataset splitting, given the increased research interest in implementing deep learning on OCT data. ",Inflation of test accuracy due to data leakage in deep learning-based classification of OCT images " The Far Ultraviolet Spectroscopic Explorer (FUSE) has surveyed a large sample (> 100) of active galactic nuclei in the low-redshift universe (z < 1). Its response at short wavelengths makes it possible to measure directly the far ultraviolet spectral properties of quasistellar objects (QSOs) and Seyfert 1 galaxies at z < 0.3. Using archival FUSE spectra, we form a composite extreme ultraviolet (EUV) spectrum of QSOs at z < 0.67. After consideration of many possible sources of systematic error in our analysis, we find that the spectral slope of the FUSE composite spectrum, $\alpha = -0.56^{+0.38}_{-0.28}$ for $F_\nu \propto \nu^\alpha$, is significantly harder than the EUV ($\lambda \lesssim 1200$ A) portion of the composite spectrum of QSOs with z > 0.33 formed from archival Hubble Space Telescope spectra, $\alpha = -1.76 \pm 0.12$. We identify several prominent emission lines in the FUSE composite and find that the high-ionization O VI and Ne VIII emission lines are enhanced relative to the HST composite. Power-law continuum fits to the individual FUSE AGN spectra reveal a correlation between EUV spectral slope and AGN luminosity in the FUSE and FUSE + HST samples, in the sense that lower-luminosity AGNs show harder spectral slopes. We find an anticorrelation between the hardness of the EUV spectral slope and AGN black hole mass, using estimates of this quantity found in the literature.
We interpret these results in the context of the well-known anticorrelation between AGN luminosity and emission line strength, the Baldwin effect, given that the median luminosity of the FUSE AGN sample is an order of magnitude lower than that of the HST sample. ",A Composite Extreme Ultraviolet QSO Spectrum from the Far Ultraviolet Spectroscopic Explorer " In recent years, the Finger Texture (FT) has attracted considerable attention as a biometric characteristic. It can provide efficient human recognition performance, because it exhibits distinctive human-specific features of apparent lines, wrinkles and ridges distributed along the inner surface of all fingers. Moreover, such pattern structures are reliable and unique, and remain stable throughout a human's life. Efficient biometric systems can be established based only on FTs. In this paper, a comprehensive survey of the relevant FT studies is presented. We also summarise the main drawbacks and obstacles of employing the FT as a biometric characteristic, and provide useful suggestions to further improve work on FT. ",Finger Texture Biometric Characteristic: a Survey " We consider the curve shortening flow applied to a class of figure-eight curves: those with dihedral symmetry, convex lobes, and a monotonicity assumption on the curvature. We prove that when (non-conformal) linear transformations are applied to the solution so as to keep the bounding box the unit square, the renormalized limit converges to a quadrilateral which we call a bowtie. Along the way we prove that suitably chosen arcs of our evolving curves, when suitably rescaled, converge to the Grim Reaper Soliton under the flow. Our Grim Reaper Theorem is an analogue of a theorem of S. Angenent, which is proven in the locally convex case. ",The Affine Shape of a Figure-Eight under the Curve Shortening Flow " The main purpose of the present article is to establish the real case of ""Karoubi's conjecture"" in algebraic K-theory.
The complex case was proved in 1990-91 by the second author and Andrei Suslin. Compared to the case of complex algebras, the real case poses additional difficulties. This is due to the fact that the topological K-theory of real Banach algebras has period 8 instead of 2. The method we employ to overcome these difficulties can also be used for complex algebras, and provides some simplifications of the original proofs. We also establish a natural analog of ""Karoubi's conjecture"" in Hermitian K-theory. ",Algebraic and Hermitian K-theory of K-rings " In this paper we analyze the propositional extensions of the minimal classical modal logic system E, which form a lattice denoted CExtE. Our method of analysis uses algebraic calculations with canonical forms, which are a generalization of the normal forms applicable to normal modal logics. As an application, we identify a group of automorphisms of CExtE that is isomorphic to the symmetric group S4. ",Automorphisms of the Lattice of Classical Modal Logics " We prove that homeomorphisms of domains of Euclidean space, the inverses of which distort the modulus of families of curves in the sense of Poletskii, have a continuous extension to an isolated boundary point. ",On isolated singularities of mappings, inverse of which are generalized quasiconformal " In the electricity market, it is quite common for market participants to adopt ""selfish"" strategies that harvest the maximum profit for themselves, which may cause a loss of social benefit and impair the sustainability of society in the long term. Regarding this issue, in this work we discuss how the social profit can be improved through strategic demand response (DR) management. Specifically, we explore two interaction mechanisms in the market: a Nash equilibrium (NE) among utility companies (UCs) and a Stackelberg equilibrium (SE) in user-UC interactions. At the user side, each user determines the optimal energy-purchasing strategy to maximize its own profit.
At the UC side, a governmental UC (g-UC) is considered, who aims to optimize the social profit of the market. Meanwhile, normal UCs play games to maximize their own profits. As a result, a basic leader-following problem among the UCs is formulated under the coordination of the independent system operator (ISO). Moreover, by using our proposed demand function amelioration (DFA) strategy, a multi-timescale leader-following problem is formulated. In this case, the maximal market efficiency can be achieved without changing the ""selfish instinct"" of normal UCs. In addition, by considering the local constraints for the UCs, two projection-based pricing algorithms are proposed for UCs, which can provide approximate optimal solutions for the resulting non-convex social profit optimization problems. The feasibility of the proposed algorithms is verified by using the concept of price of anarchy (PoA) in a multi-UC multi-user market model in the simulation. ",Social Profit Optimization with Demand Response Management in Electricity Market: A Multi-timescale Leader-following Approach " It is a crucial feature of quantum mechanics that not all measurements are compatible with each other. However, if measurements suffer from noise they may lose their incompatibility. Here we determine the critical visibility such that all qubit observables, i.e. all positive operator-valued measures (POVMs), become compatible. In addition, we apply our methods to quantum steering and Bell nonlocality. We obtain a tight local hidden state model for two-qubit Werner states of visibility 1/2. Interestingly, this proves that POVMs do not help to demonstrate quantum steering for this family of states. As an implication, this also provides a new bound on how much white noise the two-qubit singlet can tolerate before it does not violate any Bell inequality. ",Compatibility of all noisy qubit observables " We present MarbleNet, an end-to-end neural network for Voice Activity Detection (VAD). 
MarbleNet is a deep residual network composed of blocks of 1D time-channel separable convolution, batch-normalization, ReLU and dropout layers. When compared to a state-of-the-art VAD model, MarbleNet is able to achieve similar performance with roughly one-tenth the parameter cost. We further conduct extensive ablation studies on different training methods and choices of parameters in order to study the robustness of MarbleNet in real-world VAD tasks. ",MarbleNet: Deep 1D Time-Channel Separable Convolutional Neural Network for Voice Activity Detection " Sparse latent multi-factor models have been used in many exploratory and predictive problems with high-dimensional multivariate observations. Because of concerns with identifiability, the latent factors are almost always assumed to be linearly related to measured feature variables. Here we explore the analysis of multi-factor models with different structures of interactions between latent factors, including multiplicative effects as well as a more general framework for nonlinear interactions introduced via the Gaussian process. We utilize sparsity priors to test whether the factors and interaction terms have a significant effect. The performance of the models is evaluated through simulated and real data applications in genomics. Variation in the number of copies of regions of the genome is a well-known and important feature of most cancers. We examine interactions between factors directly associated with different chromosomal regions detected with copy number alteration in breast cancer data. In this context, significant interaction effects for specific genes suggest synergies between duplications and deletions in different regions of the chromosome. ",Sparse latent factor models with interactions: Analysis of gene expression data " We investigate the tilt-stability of stable sheaves on projective varieties with respect to certain tilt-stability conditions, depending on two parameters, constructed by Bridgeland. 
For a stable sheaf, we give effective bounds on these parameters such that the stable sheaf is tilt-stable. These allow us to prove new vanishing theorems for stable sheaves and an effective Serre vanishing theorem for torsion-free sheaves. Using these results, we also prove Bogomolov-Gieseker type inequalities for the third Chern character of a stable sheaf on $\mathbb{P}^3$. ","Tilt-stability, vanishing theorems and Bogomolov-Gieseker type inequalities" This paper examines a discrete-time queuing system with applications to telecommunications traffic. The arrival process is a particular Markov modulated process which belongs to the class of discrete batched Markovian arrival processes. The server process is a single-server deterministic queue. A closed-form exact solution is given for the expected queue length and delay. A simple system of equations is given for the probability of the queue exceeding a given length. ,A discrete-time Markov modulated queuing system with batched arrivals " We deny with a concrete example the generality of the correlation subadditivity relation conjectured by Modi et al. [Phys. Rev. Lett. {\bf 104}, 080501 (2010)] for any quantum state and point out that the correlation additivity relation is actually superadditive for separable states. This work indicates that any effort to explicitly prove the conjecture or to find the source of subadditivity is unnecessary and fruitless. ",Correlation additivity relation is superadditive for separable states " The selection rule on vibronic angular momentum of the $t_{1u}^n \otimes h_g$ Jahn-Teller problem ($n = $ 1-5) is reinvestigated. It is shown that among the three adiabatic orbitals only two have a nonzero Berry phase. Thus, the Berry phase of adiabatic electronic configurations depends on the spin multiplicity as well as the number of electrons. On this basis, the general relation between the Berry phase and the angular momentum is described. 
In particular, it allows us to clarify the nature of vibronic states arising from high-spin configurations. In comparison with the previous solution for the low-lying vibronic states of bimodal systems, the present solutions correctly fulfill all the symmetry requirements. ",Berry phase of adiabatic electronic configurations in fullerene anions " The concept of an adequate transversal of an abundant semigroup was introduced by El-Qallali in [8] whilst in [7], he and Fountain initiated the study of quasi-adequate semigroups as natural generalisations of orthodox semigroups. In this work we provide a structure theorem for adequate transversals of certain types of quasi-adequate semigroup and from this deduce Saito's classic result on the structure of inverse transversals of orthodox semigroups. We also apply it to left ample adequate transversals of left adequate semigroups and provide a structure for these based on semidirect products of adequate semigroups by left regular bands. ",Adequate transversals of quasi-adequate semigroups " Quantitative microstructural analysis of Room Temperature Vulcanized (RTV) silicone pyrolysis at high temperatures is presented. RTV is used as a bonding agent in multiple industries, particularly filling gaps in ablative tiles for hypersonic (re-)entry vehicles and fire prevention. Decomposition of RTV is resolved in real time using in situ high-temperature X-ray computed micro-tomography. Full tomographies are acquired every 90~seconds for four different linear heating rates ranging from 7 to 54 C/min. The microstructure is resolved below 5 micrometers per pixel, allowing for a full quantitative analysis of the micro-structural evolution and porous network development. Results are highly heating-rate dependent, and are evaluated for bulk sample volume change, porosity, pore network size, and observed densification from X-ray attenuation. The outcome of this work is critical for developing multi-physics models for thermal response. 
",Real-time quantitative imaging of RTV silicone pyrolysis " The spin-dependent structure functions g_2 (x, Q^2) are investigated within the framework of the chiral quark soliton model. It turns out that the twist-3 part of g_2 (x, Q^2) gives nonnegligible contributions to the total distributions at the energy scale of Q^2 = 5 {GeV}^2, but mainly in the smaller x region, so that the corresponding third moments \int_0^1 x^2 \bar{g}_2 (x, Q^2) dx are fairly small for both the proton and the neutron, in conformity with the recent E155 data. ",Polarized Structure Functions g_2 (x) in the Chiral Quark Soliton Model " We develop a mathematically rigorous path integral representation of the time evolution operator for a model of (1+1) quantum gravity that incorporates factor ordering ambiguity. In obtaining a suitable integral kernel for the time-evolution operator, one requires that the corresponding Hamiltonian is self-adjoint; this issue is subtle for a particular category of factor orderings. We identify and parametrize a complete set of self-adjoint extensions and provide a canonical description of these extensions in terms of boundary conditions. Moreover, we use Trotter-type product formulae to construct path-integral representations of time evolution. ",Factor Ordering and Path Integral Measure for Quantum Gravity in (1+1) Dimensions " Given a classical symmetric pair, $(G,K)$, with $\mathfrak g = Lie(G)$, we provide descriptions of the Hilbert series of the algebra of $K$-invariant vectors in the associated graded algebra of $\mathcal U(\mathfrak g)$ viewed as a $K$-representation under restriction of the adjoint representation. The description illuminates a certain stable behavior of the Hilbert series, which is investigated on a case-by-case basis. We note that the stable Hilbert series of one symmetric pair often coincides with that of others. 
Also, for the case of the real form $U(p,q)$ we derive a closed expression for the Hilbert series when $\min(p,q) \to \infty$. ",Stable Hilbert series of $\mathcal S(\mathfrak g)^K$ for classical groups " We present an energy functional for a Thomas-Fermi type two-fluid model of a self-gravitating non-rotating charged body, with a non-relativistic kinetic energy. We prove that, under certain conditions on the total number of positively charged and negatively charged particles, a minimizer exists and both fluids have compact support. We prove the same result for special relativistic kinetic energy, assuming further conditions on the total number of particles. In the non-relativistic kinetic energy case, we further prove the uniqueness of the minimizer, as well as present results relating the general shape of the minimizer to the total number of particles. ",A Two Species Thomas-Fermi Model for Stellar Ground States " We discuss the implications that new magnetocaloric, thermal expansion and magnetostriction data in $\alpha$-RuCl$_{3}$ single crystals have on its temperature-field phase diagram and uncover the magnetic-field dependence of an apparent energy gap structure $\Delta (H)$ that evolves when the low-temperature antiferromagnetic order is suppressed. We show that, depending on how the thermal expansion data are modeled, $\Delta (H)$ can show a cubic field dependence and remain finite at zero field, consistent with the pure Kitaev model hosting itinerant Majorana fermions and localized $\mathbb{Z}_{2}$ fluxes. Our magnetocaloric effect data provide, below $1\,\mathrm{K}$, unambiguous evidence for dissipative phenomena at $H_{\mathrm{c}}$, a smoking gun for a first-order phase transition. Our results, on the other hand, show little support for a phase transition from a QSL to a polarized paramagnetic state above $H_{\mathrm{c}}$. 
",Thermal and magnetoelastic properties of {\alpha}-RuCl3 in the field-induced low temperature states " This paper reports on the development of a wearable system using wireless biomedical sensors for ubiquitous healthcare service provisioning. The prototype system is developed to address current healthcare challenges such as the increasing cost of services, the inability to access diverse services, low-quality services and the growing elderly population experienced globally. The biomedical sensors proactively collect physiological data of remote patients to recommend diagnostic services. The prototype system is designed to monitor the oxygen saturation level (SpO2), heart rate (HR), activity and location of the elderly. Physiological data collected are uploaded to a Health Server (HS) via GPRS/Internet for analysis. ",Development of Wearable Systems for Ubiquitous Healthcare Service Provisioning " We present new soliton and hairy black hole solutions of Einstein-non-Abelian-Proca theory in asymptotically anti-de Sitter space-time with gauge group ${\mathfrak {su}}(2)$. For static, spherically symmetric configurations, we show that the gauge field must be purely magnetic, and solve the resulting field equations numerically. The equilibrium gauge field is described by a single function $\omega (r)$, which must have at least one zero. The solitons and hairy black holes share many properties with the corresponding solutions in asymptotically flat space-time. In particular, all the solutions we study are unstable under linear, spherically symmetric perturbations of the metric and gauge field. ",Solitons and hairy black holes in Einstein-non-Abelian-Proca theory in anti-de Sitter space-time " Conventional neural architecture search (NAS) approaches are based on reinforcement learning or evolutionary strategies, which take more than 3000 GPU hours to find a good model on CIFAR-10. We propose an efficient NAS approach that learns to search by gradient descent. 
Our approach represents the search space as a directed acyclic graph (DAG). This DAG contains billions of sub-graphs, each of which indicates a kind of neural architecture. To avoid traversing all the possibilities of the sub-graphs, we develop a differentiable sampler over the DAG. This sampler is learnable and optimized by the validation loss after training the sampled architecture. In this way, our approach can be trained in an end-to-end fashion by gradient descent, named Gradient-based search using Differentiable Architecture Sampler (GDAS). In experiments, we can finish one searching procedure in four GPU hours on CIFAR-10, and the discovered model obtains a test error of 2.82\% with only 2.5M parameters, which is on par with the state-of-the-art. Code is publicly available on GitHub: https://github.com/D-X-Y/NAS-Projects. ",Searching for A Robust Neural Architecture in Four GPU Hours We argue that non-trivial fixed points bordering on the paramagnetic and ferromagnetic phases are most likely to exist in the Higgs-Yukawa systems that have a connected domain with the paramagnetic phase and no ferrimagnetic phase. We find three examples of such systems; among them is the U(1) system with naive fermions. ,Which Higgs-Yukawa systems can possess non-trivial fixed points " The ability to perceive polarization-related entoptic phenomena arises from the dichroism of macular pigments held in Henle's fiber layer of the retina and can be inhibited by retinal diseases such as age-related macular degeneration, which alter the structure of the macula. Structured light tools enable the direct probing of macular pigment density through the perception of polarization-dependent entoptic patterns. Here, we directly measure the visual angle of an entoptic pattern created through the illumination of the retina with a structured state of light and a perception task that is insensitive to corneal birefringence. 
The central region of the structured light stimuli was obstructed, with the size of the obstruction varying according to a psychophysical staircase. The perceived size of the entoptic pattern was observed to vary between participants, with an average visual angle threshold radius of $9.5^\circ \pm 0.9^\circ$, 95% C.I. = [$5.8^\circ$, $13^\circ$], in a sample of healthy participants. These results (with eleven azimuthal fringes) differ markedly from previous estimates of the extent of the Haidinger's brush phenomenon (two azimuthal fringes), of $3.75^\circ$, suggesting that a higher azimuthal fringe density increases pattern visibility. The increase in apparent size and clarity of the entoptic phenomenon produced by the presented structured light stimuli may offer greater potential for detecting the early signs of macular disease than perception tasks using uniform polarization stimuli. ",Measuring the visual angle of polarization-related entoptic phenomena using structured light " Let $(A, \m, k)$ be a Gorenstein local ring of dimension $ d\geq 1.$ Let $I$ be an ideal of $A$ with $\htt(I) \geq d-1.$ We prove that the numerical function \[ n \mapsto \ell(\ext_A^i(k, A/I^{n+1}))\] is given by a polynomial of degree $d-1 $ in the case when $ i \geq d+1 $ and $\curv(I^n) > 1$ for all $n \geq 1.$ We prove a similar result for the numerical function \[ n \mapsto \ell(\Tor_i^A(k, A/I^{n+1}))\] under the assumption that $A$ is a \CM ~ local ring. 
\noindent We note that there are many examples of ideals satisfying the condition $\curv(I^n) > 1,$ for all $ n \geq 1.$ We also consider more general functions $n \mapsto \ell(\Tor_i^A(M, A/I_n))$ for a filtration $\{I_n \}$ of ideals in $A.$ We prove similar results in the case when $M$ is a maximal \CM ~ $A$-module and $\{I_n=\overline{I^n} \}$ is the integral closure filtration, $I$ an $\m$-primary ideal in $A.$ ",Bass and Betti Numbers of $A/I^n.$ " The angular dependence of magnetic-field commensurability effects in thin films of the cuprate high-critical-temperature superconductor YBa$_{2}$Cu$_{3}$O$_{7-\delta}$ (YBCO) with an artificial pinning landscape is investigated. Columns of point defects are fabricated by two different methods of ion irradiation -- scanning the focused 30 keV ion beam in a helium ion microscope or employing the wide-field 75 keV He$^+$ beam of an ion implanter through a stencil mask. Simulations of the ion-target interactions and the resulting collision cascades reveal that with both methods square arrays of defect columns with sub-$\mu$m spacings can be created. They consist of dense point-defect clusters, which act as pinning centers for Abrikosov vortices. This is verified by the measurement of commensurable peaks of the critical current and related minima of the flux-flow resistance vs magnetic field at the matching fields. In oblique magnetic fields the matching features are exclusively governed by the component of the magnetic field parallel to the axes of the columnar defects, which confirms that the magnetic flux penetrates along the defect columns. We demonstrate that the latter dominate the pinning landscape despite the strong intrinsic pinning in thin YBCO films. 
",Angular magnetic-field dependence of vortex matching in pinning lattices fabricated by focused or masked helium ion beam irradiation of superconducting YBa$_2$Cu$_3$O$_{7-\delta}$ thin films " Synchronization of non-identical oscillators coupled through complex networks is an important example of collective behavior. It is interesting to ask how the structural organization of network interactions influences this process. Several studies have uncovered optimal topologies for synchronization by making purposeful alterations to a network. Yet, the connectivity patterns of many natural systems are often not static, but are rather modulated over time according to their dynamics. This co-evolution - and the extent to which the dynamics of the individual units can shape the organization of the network itself - is not well understood. Here, we study initially randomly connected but locally adaptive networks of Kuramoto oscillators. The system employs a co-evolutionary rewiring strategy that depends only on instantaneous, pairwise phase differences of neighboring oscillators, and that conserves the total number of edges, allowing the effects of local reorganization to be isolated. We find that a simple regulatory rule - which preserves connections between more out-of-phase oscillators while rewiring connections between more in-phase oscillators - can cause initially disordered networks to organize into more structured topologies that support enhanced synchronization dynamics. We examine how this process unfolds over time, finding both a dependence on the intrinsic frequencies of the oscillators and the global coupling. For large enough coupling and after sufficient adaptation, the resulting networks exhibit degree-frequency and frequency-neighbor-frequency correlations. These properties have previously been associated with optimal synchronization or explosive transitions. 
By considering a time-dependent interplay between structure and dynamics, this work offers a mechanism through which emergent phenomena can arise in complex systems utilizing local rules. ",Development of structural correlations and synchronization from adaptive rewiring in networks of Kuramoto oscillators " Visualization plays a vital role in making sense of complex network data. Recent studies have shown the potential of using extended reality (XR) for the immersive exploration of networks. The additional depth cues offered by XR help users perform better in certain tasks when compared to using traditional desktop setups. However, prior works on immersive network visualization rely on mostly static graph layouts to present the data to the user. This poses a problem since there is no optimal layout for all possible tasks. The choice of layout heavily depends on the type of network and the task at hand. We introduce a multi-layout approach that allows users to effectively explore hierarchical network data in immersive space. The resulting system leverages different layout techniques and interactions to efficiently use the available space in VR and provide an optimal view of the data depending on the task and the level of detail required to solve it. To evaluate our approach, we have conducted a user study comparing it against the state of the art for immersive network visualization. Participants performed tasks at varying spatial scopes. The results show that our approach outperforms the baseline in spatially focused scenarios as well as when the whole network needs to be considered. ",A Multi-Layout Design for Immersive Visualization of Network Data " Topological crystalline insulators define a new class of topological insulator phases with gapless surface states protected by crystalline symmetries. In this work, we present a general theory to classify topological crystalline insulator phases based on the representation theory of space groups. 
Our approach is to directly identify possible nontrivial surface states in a semi-infinite system with a specific surface, whose symmetry properties can be described by the 17 two-dimensional space groups. We reproduce the existing results of topological crystalline insulators, such as mirror Chern insulators in the $pm$ or $pmm$ groups, $C_{nv}$ topological insulators in the $p4m$, $p31m$ and $p6m$ groups, and topological nonsymmorphic crystalline insulators in the $pg$ and $pmg$ groups. Aside from these existing results, we also obtain the following new results: (1) there are two integer mirror Chern numbers ($\mathbb{Z}^2$) in the $pm$ group but only one ($\mathbb{Z}$) in the $cm$ or $p3m1$ group for both the spinless and spinful cases; (2) for the $pmm$ ($cmm$) groups, there is no topological classification in the spinless case but $\mathbb{Z}^4$ ($\mathbb{Z}^2$) classifications in the spinful case; (3) we show how the topological crystalline insulator phase in the $pg$ group is related to that in the $pm$ group; (4) we identify the topological classification of the $p4m$, $p31m$, and $p6m$ groups for the spinful case; (5) we find that topological non-symmorphic crystalline insulators also exist in the $pgg$ and $p4g$ groups, exhibiting new features compared to those in the $pg$ and $pmg$ groups. We emphasize the importance of the irreducible representations of the states at some specific high-symmetry momenta in the classification of topological crystalline phases. Our theory can serve as a guide for the search for topological crystalline insulator phases in realistic materials. ",Classification of topological crystalline insulators based on representation theory " Let $\lambda\in (1,\sqrt{2}]$ be an algebraic integer with Mahler measure $2.$ A classical result of Garsia shows that the Bernoulli convolution $\mu_\lambda$ is absolutely continuous with respect to the Lebesgue measure with a density function in $L^\infty$. 
In this paper, we show that the density function is continuous. ","Bernoulli convolutions with Garsia parameters in $(1,\sqrt{2}]$ have continuous density functions" " We propose a simple analytical method for estimating the central volume density of prestellar molecular cloud cores from their column density profiles. Prestellar cores feature a flat central part of the column density and volume density profiles of the same size indicating the existence of a uniform density inner region. The size of this region is set by the thermal pressure force which depends only on the central volume density and temperature of the core, and can provide a direct measurement of the central volume density. Thus a simple length measurement can immediately yield a central density estimate independent of any dynamical model for the core and without the need for fitting. Using the radius at which the column density is 90% of the central value as an estimate of the size of the flat inner part of the column density profile yields an estimate of the central volume density within a factor of 2 for well resolved cores. ",A New Recipe for Obtaining Central Volume Densities of Prestellar Cores from Size Measurements " A search for the decay of a light Higgs (120 - 140 GeV) to a pair of weakly-interacting, long-lived particles in 1.94 fb^-1 of proton-proton collisions at sqrt{s} = 7 TeV recorded in 2011 by the ATLAS detector is presented. The search strategy requires that both long-lived particles decay inside the muon spectrometer. No excess of events is observed above the expected background and limits on the Higgs boson production times branching ratio to weakly-interacting, long-lived particles are derived as a function of the particle proper decay length. 
",Search for a light Higgs boson decaying to long-lived weakly-interacting particles in proton-proton collisions at sqrt(s) = 7 TeV with the ATLAS detector " We consider the nonlinear equations obtained from soliton equations by adding self-consistent sources. Using the Kadomtsev-Petviashvili equation as an example, we demonstrate that such equations on periodic functions are not isospectral. They deform the spectral curve but preserve the multipliers of the Floquet functions. The latter property implies that those conservation laws for soliton equations which may be described in terms of the Floquet multipliers give rise to conservation laws for the corresponding equations with self-consistent sources. Such a property was first observed by us for a geometrical flow which appears in the conformal geometry of tori in three- and four-dimensional Euclidean spaces (math/0611215). ",Spectral conservation laws for periodic nonlinear equations of the Melnikov type " The pattern of neutrino flavor oscillations could be altered by the influence of noisy perturbations such as those arising from a gravitational wave background (GWB). A stochastic process that is consistent with a GWB has been recently reported by the independent analyses of pulsar timing array (PTA) data sets collected over a decadal timescale by the North American Nanohertz Observatory for Gravitational Waves (NANOGrav), the European Pulsar Timing Array (EPTA), the Parkes Pulsar Timing Array (PPTA), and the Chinese Pulsar Timing Array (CPTA) collaborations. We investigate the modifications of neutrino flavor oscillations under the influence of the GWB reported by the PTA collaborations and discuss how such effects could potentially be revealed in near-future neutrino detectors, possibly helping to discriminate between different models for the GWB below the nHz frequency range. 
",Astrophysical neutrino oscillations after pulsar timing array analyses We have started a survey of galaxies at intermediate redshifts using the HST-STIS parallel fields. Our main goal is to analyse the morphology of faint galaxies in order to estimate the epoch of formation of the Hubble classification sequence. The high resolution of STIS images (0.05'') is ideal for this work and enables us to perform a morphological classification and to analyse the internal structures of galaxies. We find that 40% of the 290 galaxies are early types and that there are more irregulars and ellipticals at the fainter magnitudes. ,A Survey Searching for the Epoch of Assembling of Hubble Types " We present a numerical and partially analytical study of classical particles obeying a Langevin equation that describes diffusion on a surface modeled by a two-dimensional potential. The potential may be either periodic or random. Depending on the potential and the damping, we observe superdiffusion, large-step diffusion, diffusion, and subdiffusion. Superdiffusive behavior is associated with low damping and is in most cases transient, albeit often long. Subdiffusive behavior is associated with highly damped particles in random potentials. In some cases subdiffusive behavior persists over our entire simulation and may be characterized as metastable. In any case, we stress that this rich variety of behaviors emerges naturally from an ordinary Langevin equation for a system described by ordinary canonical Maxwell-Boltzmann statistics. ",From subdiffusion to superdiffusion of particles on solid surfaces " Professor M. C. Polivanov and I met only a few times, during my infrequent visits to the then Soviet Union in the 1970's and 1980's. His hospitality at the Moscow Steklov Institute made the trips a pleasure, while the scientific environment that he provided made them professionally valuable. But it is the human contact that I remember most vividly and shall now miss after his death. 
At a time when issues of conscience were both pressing for attention and difficult/dangerous to confront, Professor Polivanov made a deep impression with his quiet but adamant commitment to justice. I can only guess at the satisfaction he must have felt when his goal of gaining freedom for Yuri Orlov was attained, and even more so these days when human rights became defensible in his country; it is regrettable that he cannot now enjoy the future that he strived to attain. One of our joint interests was the Liouville theory,$^{1,\,2}$ which in turn can be viewed as a model for gravity in two-dimensional space-time. Some recent developments in this field are here summarized and dedicated to Polivanov's memory, with the hope that he would have enjoyed knowing about them. ",Gauge Theories for Gravity on a Line The method of controlled Lagrangians for discrete mechanical systems is extended to include potential shaping in order to achieve complete state-space asymptotic stabilization. New terms in the controlled shape equation that are necessary for matching in the discrete context are introduced. The theory is illustrated with the problem of stabilization of the cart-pendulum system on an incline. We also discuss digital and model predictive control. ,Controlled Lagrangians and Potential Shaping for Stabilization of Discrete Mechanical Systems " More and more HPC applications require fast and effective compression techniques to handle large volumes of data in storage and transmission. Not only do these applications need to compress the data effectively during simulation, but they also need to perform decompression efficiently for post hoc analysis. SZ is an error-bounded lossy compressor for scientific data, and cuSZ is a version of SZ designed to take advantage of the GPU's power. 
At present, cuSZ's compression performance has been optimized significantly while its decompression still suffers from considerably lower performance because of its sophisticated lossless compression step -- a customized Huffman decoding. In this work, we aim to significantly improve the Huffman decoding performance for cuSZ, thus improving the overall decompression performance in turn. To this end, we first investigate two state-of-the-art GPU Huffman decoders in depth. Then, we propose a deep architectural optimization for both algorithms. Specifically, we take full advantage of CUDA GPU architectures by using shared memory on decoding/writing phases, online tuning the amount of shared memory to use, improving memory access patterns, and reducing warp divergence. Finally, we evaluate our optimized decoders on an Nvidia V100 GPU using eight representative scientific datasets. Our new decoding solution obtains an average speedup of 3.64X over cuSZ's Huffman decoder and improves its overall decompression performance by 2.43X on average. ",Optimizing Huffman Decoding for Error-Bounded Lossy Compression on GPUs " As a cool star evolves, it loses mass and angular momentum due to magnetized stellar winds which affect its rotational evolution. This change has consequences that range from the alteration of its activity to influences over the atmosphere of any orbiting planet. Despite their importance, observations constraining the properties of stellar winds in cool stars are extremely limited. Therefore, numerical simulations provide a valuable way to understand the structure and properties of these winds. In this work, we simulate the magnetized winds of 21 cool main-sequence stars (F-type to M-dwarfs), using a state-of-the-art 3D MHD code driven by observed large-scale magnetic field distributions.
We perform a qualitative and quantitative characterization of our solutions, analyzing the dependencies between the driving conditions (e.g., spectral type, rotation, magnetic field strength) and the resulting stellar wind parameters (e.g., Alfv\'en surface size, mass loss rate, angular momentum loss rate, stellar wind speeds). We compare our models with the current observational knowledge on stellar winds in cool stars and explore the behaviour of the mass loss rate as a function of the Rossby number. Furthermore, our 3D models encompass the entire classical Habitable Zones (HZ) of all the stars in our sample. This allows us to provide the stellar wind dynamic pressure at both edges of the HZ and analyze the variations of this parameter across spectral type and orbital inclination. The results presented here could serve to inform future studies of stellar wind-magnetosphere interactions and stellar wind erosion of planetary atmospheres via ion escape processes. ",Numerical quantification of the wind properties of cool main sequence stars " In the 60's Shapley provided an example of a two player fictitious game with periodic behaviour. In this game, player $A$ aims to copy $B$'s behaviour and player $B$ aims to play one ahead of player $A$. In this paper we generalize Shapley's example by introducing an external parameter. We show that the periodic behaviour in Shapley's example at some critical parameter value disintegrates into unpredictable (chaotic) behaviour, with players dithering a huge number of times between different strategies. At a further critical parameter the dynamics becomes periodic again and both players aim to play one ahead of the other. We study the dynamics of a two player continuous time bimatrix fictitious play for a one-parameter family of $3 \times 3$ games that includes a well-known example of Shapley's as a special case.
In this paper we adopt a geometric (dynamical systems) approach and study the bifurcations of simple periodic orbits. Here we concentrate on the periodic behaviour, while in a sequel we shall describe the chaotic behaviour. ",Fictitious Play in 3x3 Games: the transition between periodic and chaotic behaviour " We report inelastic neutron scattering measurements of the resonant spin excitations in Ba1-xKxFe2As2 over a broad range of electron band filling. The fall in the superconducting transition temperature with hole doping coincides with the magnetic excitations splitting into two incommensurate peaks because of the growing mismatch in the hole and electron Fermi surface volumes, as confirmed by a tight-binding model with s+- symmetry pairing. The reduction in Fermi surface nesting is accompanied by a collapse of the resonance binding energy and its spectral weight caused by the weakening of electron-electron correlations. ",Effect of Fermi Surface Nesting on Resonant Spin Excitations in Ba1-xKxFe2As2 Free evolution for a quantum particle in a general ultrametric space is considered. We find that if a mean zero wave packet is localized in some ball in the ultrametric space then its evolution remains localized in the same ball. ,Localization for free ultrametric quantum particle " We give an asymptotic formula for the number of $D_4$ quartic extensions of a function field with discriminant equal to some bound, essentially reproducing the analogous result over number fields due to Cohen, Diaz y Diaz, and Olivier, but with a stronger error term.
We also study the relative density of $D_4$ and $S_4$ quartic extensions of a function field and show that with mild conditions, the number of $D_4$ quartic extensions can far exceed the number of $S_4$ quartic extensions. ",Enumerating $D_4$ Quartics and a Galois Group Bias Over Function Fields " Student success models might be prone to develop weak spots, i.e., examples hard to accurately classify due to insufficient representation during model creation. This weakness is one of the main factors undermining users' trust, since model predictions could for instance lead an instructor to not intervene on a student in need. In this paper, we unveil the need for detecting and characterizing unknown unknowns in student success prediction in order to better understand when models may fail. Unknown unknowns include the students for which the model is highly confident in its predictions, but is actually wrong. Therefore, we cannot solely rely on the model's confidence when evaluating the quality of the predictions. We first introduce a framework for the identification and characterization of unknown unknowns. We then assess its informativeness on log data collected from flipped courses and online courses using quantitative analyses and interviews with instructors. Our results show that unknown unknowns are a critical issue in this domain and that our framework can be applied to support their detection. The source code is available at https://github.com/epfl-ml4ed/unknown-unknowns. ",Do Not Trust a Model Because It is Confident: Uncovering and Characterizing Unknown Unknowns to Student Success Predictors in Online-Based Learning " Inflation creates perturbations for the large scale structures in the universe, but it also dilutes everything. Therefore it is pertinent that the end of inflation must explain how to excite the Standard Model {\it dof} along with the dark matter.
In this paper we will briefly discuss the role of visible sector inflaton candidates which are embedded within the Minimal Supersymmetric Standard Model (MSSM) and discuss their merit on how well they match the current data from the Planck. Since the inflaton carries the Standard Model charges, its decay naturally produces all the relevant {\it dof} with no {\it dark/hidden sector radiation} and no isocurvature fluctuations. We will first discuss a single supersymmetric flat direction model of inflation and demonstrate what parameter space is allowed by the Planck and the LHC. We will also consider the case where the perturbations are created by another light field which decays after inflation, known as a {\it curvaton}. The late decay of the curvaton can create observable non-Gaussianity. In the end we will discuss the role of a {\it spectator} field whose origin may not lie within the visible sector physics, but its sheer presence during inflation can still create all the perturbations responsible for the large scale structures including possible non-Gaussianity, while the inflaton is embedded within the visible sector which creates all the relevant matter including dark matter, but no dark radiation. ",Visible sector inflation and the right thermal history in light of Planck data " Instance segmentation on point clouds is crucially important for 3D scene understanding. Most SOTAs adopt distance clustering, which is typically effective but does not perform well in segmenting adjacent objects with the same semantic label (especially when they share neighboring points). Due to the uneven distribution of offset points, these existing methods can hardly cluster all instance points. To this end, we design a novel divide-and-conquer strategy named PBNet that binarizes each point and clusters them separately to segment instances. Our binary clustering divides offset instance points into two categories: high and low density points (HPs vs. LPs).
Adjacent objects can be clearly separated by removing LPs, and then be completed and refined by assigning LPs via a neighbor voting method. To suppress potential over-segmentation, we propose to construct local scenes with the weight mask for each instance. As a plug-in, the proposed binary clustering can replace the traditional distance clustering and lead to consistent performance gains on many mainstream baselines. A series of experiments on the ScanNetV2 and S3DIS datasets indicate the superiority of our model. In particular, PBNet ranks first on the ScanNetV2 official benchmark challenge, achieving the highest mAP. ",Divide and Conquer: 3D Point Cloud Instance Segmentation With Point-Wise Binarization " We study a version of adversarial classification where an adversary is empowered to corrupt data inputs up to some distance $\varepsilon$, using tools from variational analysis. In particular, we describe necessary conditions associated with the optimal classifier subject to such an adversary. Using the necessary conditions, we derive a geometric evolution equation which can be used to track the change in classification boundaries as $\varepsilon$ varies. This evolution equation may be described as an uncoupled system of differential equations in one dimension, or as a mean curvature type equation in higher dimension. In one dimension, and under mild assumptions on the data distribution, we rigorously prove that one can use the initial value problem starting from $\varepsilon=0$, which is simply the Bayes classifier, in order to solve for the global minimizer of the adversarial problem for small values of $\varepsilon$. In higher dimensions we provide a similar result, albeit conditional on the existence of regular solutions of the initial value problem.
In the process of proving our main results we obtain a result of independent interest connecting the original adversarial problem with an optimal transport problem under no assumptions on whether classes are balanced or not. Numerical examples illustrating these ideas are also presented. ",Adversarial Classification: Necessary conditions and geometric flows " We report the results of a magneto-hydrodynamic (MHD) simulation of a convective dynamo in a model solar convective envelope driven by the solar radiative diffusive heat flux. The convective dynamo produces a large-scale mean magnetic field that exhibits irregular cyclic behavior with oscillation time scales ranging from about 5 to 15 years and undergoes irregular polarity reversals. The mean axisymmetric toroidal magnetic field is of opposite signs in the two hemispheres and is concentrated at the bottom of the convection zone. The presence of the magnetic fields is found to play an important role in the self-consistent maintenance of a solar-like differential rotation in the convective dynamo model. Without the magnetic fields, the convective flows drive a differential rotation with a faster rotating polar region. In the midst of magneto-convection, we found the emergence of strong super-equipartition flux bundles at the surface, exhibiting properties that are similar to emerging solar active regions. ",A simulation of convective dynamo in the solar convective envelope: maintenance of the solar-like differential rotation and emerging flux We present an extension of a previously developed method employing the formalism of fractional derivatives to solve new classes of integral equations. This method uses different forms of integral operators that generalize the exponential shift operator. ,"Integral equations, fractional calculus and shift operator" " Astronomy is one of the most data-intensive of the sciences.
Data technology is accelerating the quality and effectiveness of its research, and the rate of astronomical discovery is higher than ever. As a result, many view astronomy as being in a 'Golden Age', and projects such as the Virtual Observatory are amongst the most ambitious data projects in any field of science. But these powerful tools will be impotent unless the data on which they operate are of matching quality. Astronomy, like other fields of science, therefore needs to establish and agree on a set of guiding principles for the management of astronomical data. To focus this process, we are constructing a 'data manifesto', which proposes guidelines to maximise the rate and cost-effectiveness of scientific discovery. ",How to Make the Dream Come True: The Astronomers' Data Manifesto " When dealing with a parametric statistical model, a Riemannian manifold can naturally appear by endowing the parameter space with the Fisher information metric. The geometry induced on the parameters by this metric is then referred to as the Fisher-Rao information geometry. Interestingly, this yields a point of view that allows for leveraging many tools from differential geometry. After a brief introduction about these concepts, we will present some practical uses of these geometric tools in the framework of elliptical distributions. This second part of the exposition is divided into three main axes: Riemannian optimization for covariance matrix estimation, Intrinsic Cram\'er-Rao bounds, and classification using Riemannian distances. ",The Fisher-Rao geometry of CES distributions " SPYTHIA is an event level Monte Carlo program which simulates particle production and decay at lepton and hadron colliders in the Minimal Supersymmetric Standard Model (MSSM). It is an extension of PYTHIA 5.7, with all of its previous capabilities.
This paper is meant to supplement the PYTHIA/JETSET user manual, providing a description of the new particle spectrum, hard scattering processes, and decay modes. Several examples of using the program are provided. ","SPYTHIA, A Supersymmetric Extension of PYTHIA 5.7" " The Jalilian-Marian, Iancu, McLerran, Weigert, Leonidov, Kovner (JIMWLK) Hamiltonian for high energy evolution of QCD amplitudes is presented at the next-to-leading order accuracy in $\alpha_s$. The form of the Hamiltonian is deduced from the symmetries and the structure of the hadronic light cone wavefunction and by comparing the rapidity evolution of the quark dipole and the three-quark singlet states with results available in the literature. The next-to-leading corrections should allow for more robust phenomenological applications of the perturbative saturation approach. ","Jalilian-Marian, Iancu, McLerran, Weigert, Leonidov, Kovner evolution at next to leading order" " `Double descent' delineates the generalization behaviour of models depending on the regime they belong to: under- or over-parameterized. The current theoretical understanding behind the occurrence of this phenomenon is primarily based on linear and kernel regression models -- with informal parallels to neural networks via the Neural Tangent Kernel. Therefore such analyses do not adequately capture the mechanisms behind double descent in finite-width neural networks and disregard crucial components -- such as the choice of the loss function. We address these shortcomings by leveraging influence functions in order to derive suitable expressions of the population loss and its lower bound, while imposing minimal assumptions on the form of the parametric model. Our derived bounds bear an intimate connection with the spectrum of the Hessian at the optimum, and importantly, exhibit a double descent behaviour at the interpolation threshold.
Building on our analysis, we further investigate how the loss function affects double descent -- and thus uncover interesting properties of neural networks and their Hessian spectra near the interpolation threshold. ",Phenomenology of Double Descent in Finite-Width Neural Networks " We investigate the thermally activated magnetization switching of small ferromagnetic particles driven by an external magnetic field. For low uniaxial anisotropy the spins can be expected to rotate coherently, while for sufficient large anisotropy they should behave Ising-like, i.e., the switching should then be due to nucleation. We study this crossover from coherent rotation to nucleation for the classical three-dimensional Heisenberg model with a finite anisotropy. The crossover is influenced by the size of the particle, the strength of the driving magnetic field, and the anisotropy. We discuss the relevant energy barriers which have to be overcome during the switching, and find theoretical arguments which yield the energetically favorable reversal mechanisms for given values of the quantities above. The results are confirmed by Monte Carlo simulations of Heisenberg and Ising models. ",Magnetization switching in a Heisenberg model for small ferromagnetic particles " Consider a particle diffusing in a confined volume which is divided into two equal regions. In one region the diffusion coefficient is twice the value of the diffusion coefficient in the other region. Will the particle spend equal proportions of time in the two regions in the long term? Statistical mechanics would suggest yes, since the number of accessible states in each region is presumably the same. However, another line of reasoning suggests that the particle should spend less time in the region with faster diffusion, since it will exit that region more quickly. We demonstrate with a simple microscopic model system that both predictions are consistent with the information given. 
Thus, specifying the diffusion rate as a function of position is not enough to characterize the behaviour of a system, even assuming the absence of external forces. We propose an alternative framework for modelling diffusive dynamics in which both the diffusion rate and equilibrium probability density for the position of the particle are specified by the modeller. We introduce a numerical method for simulating dynamics in our framework that samples from the equilibrium probability density exactly and is suitable for discontinuous diffusion coefficients. ",A Paradox of State-Dependent Diffusion and How to Resolve It " Let G be a reductive group over an algebraically closed field of characteristic p, and let u in G be a unipotent element of order p. Suppose that p is a good prime for G. We show in this paper that there is a homomorphism phi:SL_2/k --> G whose image contains u. This result was first obtained by D. Testerman (J. Algebra, 1995) using case considerations for each type of simple group (and using, in some cases, computer calculations with explicit representatives for the unipotent orbits). The proof we give is free of case considerations (except in its dependence on the Bala-Carter theorem). Our construction of phi generalizes the construction of a principal homomorphism made by J.-P. Serre in (Invent. Math. 1996); in particular, phi is obtained by reduction modulo P from a homomorphism of group schemes over a valuation ring in a number field. This permits us to show moreover that the weight spaces of a maximal torus of phi(SL_2/k) on Lie(G) are ``the same as in characteristic 0''; the existence of a phi with this property was previously obtained, again using case considerations, by Lawther and Testerman (Memoirs AMS, 1999) and has been applied in some recent work of G. Seitz (Invent. Math. 2000). 
",Sub-principal homomorphisms in positive characteristic " The purpose of this work is to give a definition of a topological K-theory for dg-categories over C and to prove that the Chern character map from algebraic K-theory to periodic cyclic homology descends naturally to this new invariant. This topological Chern map provides a natural candidate for the existence of a rational structure on the periodic cyclic homology of a smooth proper dg-algebra, within the theory of noncommutative Hodge structures. The definition of topological K-theory consists of two steps: taking the topological realization of algebraic K-theory, and inverting the Bott element. The topological realization is the left Kan extension of the functor ""space of complex points"" to all simplicial presheaves over complex algebraic varieties. Our first main result states that the topological K-theory of the unit dg-category is the spectrum BU. For this we are led to prove a homotopical generalization of Deligne's cohomological proper descent, using Lurie's proper descent. The fact that the Chern character descends to topological K-theory is established by using Kassel's K\""unneth formula for periodic cyclic homology and once again the proper descent result. In the case of a dg-category of perfect complexes on a smooth scheme, we show that we recover the usual topological K-theory. Finally, in the case of a finite dimensional associative algebra, we show that the lattice conjecture holds. This gives a formula for the periodic homology groups of a finite dimensional algebra in terms of the stack of projective modules of finite type. ",Topological K-theory of complex noncommutative spaces " Using computational techniques we derive six new upper bounds on the classical two-color Ramsey numbers: R(3,10) <= 42, R(3,11) <= 50, R(3,13) <= 68, R(3,14) <= 77, R(3,15) <= 87, and R(3,16) <= 98. All of them are improvements by one over the previously best known bounds.
Let e(3,k,n) denote the minimum number of edges in any triangle-free graph on n vertices without independent sets of order k. The new upper bounds on R(3,k) are obtained by completing the computation of the exact values of e(3,k,n) for all n with k <= 9 and for all n <= 33 for k = 10, and by establishing new lower bounds on e(3,k,n) for most of the open cases for 10 <= k <= 15. The enumeration of all graphs witnessing the values of e(3,k,n) is completed for all cases with k <= 9. We prove that the known critical graph for R(3,9) on 35 vertices is unique up to isomorphism. For the case of R(3,10), first we establish that R(3,10) = 43 if and only if e(3,10,42) = 189, or equivalently, that if R(3,10) = 43 then every critical graph is regular of degree 9. Then, using computations, we disprove the existence of the latter, and thus show that R(3,10) <= 42. ","New Computational Upper Bounds for Ramsey Numbers R(3,k)" " Independent component analysis (ICA), as a data driven method, has been shown to be a powerful tool for functional magnetic resonance imaging (fMRI) data analysis. One drawback of this multivariate approach is that it is not compatible with the analysis of group data in general. Therefore various techniques have been proposed in order to overcome this limitation of ICA. In this paper a novel ICA-based work-flow for extracting resting state networks from fMRI group studies is proposed. An empirical mode decomposition (EMD) is used to generate reference signals in a data driven manner, which can be incorporated into a constrained version of ICA (cICA), which helps to eliminate the inherent ambiguities of ICA. The results of the proposed workflow are then compared to those obtained by a widely used group ICA approach for fMRI analysis. In this paper it is demonstrated that intrinsic modes, extracted by EMD, are suitable to serve as references for cICA to obtain typical resting state patterns, which are consistent over subjects.
By introducing these reference signals into the ICA, our processing pipeline makes it transparent to the user how comparable activity patterns across subjects emerge. This additionally allows adapting the trade-off between enforcing similarity across subjects and preserving individual subject features. ",A constrained ICA-EMD Model for Group Level fMRI Analysis " The girth of a graph is the length of its shortest cycle. We give an algorithm that computes in O(n(log n)^3) time and O(n) space the (weighted) girth of an n-vertex planar digraph with arbitrary real edge weights. This is an improvement of a previous time bound of O(n^(3/2)), a bound which was only valid for non-negative edge-weights. Our algorithm can be modified to output a shortest cycle within the same time and space bounds if such a cycle exists. ",Girth of a Planar Digraph with Real Edge Weights in O(n(log n)^3) Time " We consider a classical elastohydrodynamic model of an inextensible filament undergoing planar motion in $\mathbb{R}^3$. The hydrodynamics are described by resistive force theory, and the fiber elasticity is governed by Euler-Bernoulli beam theory. Our aim is twofold: (1) Serve as a starting point for developing the mathematical analysis of filament elastohydrodynamics, particularly the analytical treatment of an inextensibility constraint, and (2) As an application, prove conditions on internal fiber forcing that allow a free-ended filament to swim. Our analysis of fiber swimming speed is supplemented with a numerical optimization of the internal fiber forcing, as well as a novel numerical method for simulating an inextensible swimmer. ",Well-posedness and applications of classical elastohydrodynamics for a swimming filament " Recently, some infinite families of binary minimal and optimal linear codes were constructed from simplicial complexes by Hyun {\em et al}.
Inspired by their work, we present two new constructions of codes over the ring $\Bbb F_2+u\Bbb F_2$ by employing simplicial complexes. When the simplicial complexes are all generated by a maximal element, we determine the Lee weight distributions of two classes of the codes over $\Bbb F_2+u\Bbb F_2$. Our results show that the codes have few Lee weights. Via the Gray map, we obtain an infinite family of binary codes meeting the Griesmer bound and a class of binary distance optimal codes. ",Optimal few-weight codes from simplicial complexes " A new equation of state is proposed in order to describe the thermal behavior of relic neutrinos. It is based on extensions of the MIT bag model to deal with the gravitational interaction and takes into account the fermionic character of neutrinos. The results for the temperature and entropy of relic neutrinos are compared with those of the cosmic background radiation, treated as a gas of photons at the temperature of 2.726 K. In particular, it is found that the temperature of the relic neutrinos is 3/4 of that of the photon gas. The ratio between the two entropies is also estimated. ",Relic neutrinos and cosmic background radiation: a new way of comparison " Two players alternate moves in the following impartial combinatorial game: Given a finitely generated abelian group $A$, a move consists of picking some nonzero element $a \in A$. The game then continues with the quotient group $A/ \langle a \rangle$. We prove that under the normal play rule, the second player has a winning strategy if and only if $A$ is a square, i.e. $A$ is isomorphic to $B \times B$ for some abelian group $B$. Under the mis\`ere play rule, only minor modifications concerning elementary abelian groups are necessary to describe the winning situations. We also compute the nimbers, i.e. Sprague-Grundy values, of $2$-generated abelian groups. An analogous game can be played with arbitrary algebraic structures.
We study some examples of non-abelian groups and commutative rings such as $R[X]$, where $R$ is a principal ideal domain. ",Algebraic games - Playing with groups and rings " Interferometry can completely redirect light, providing the potential for strong and controllable optical forces. However, small particles do not naturally act like interferometric beamsplitters, and the optical scattering from them is not generally thought to allow efficient interference. Instead, optical trapping is typically achieved via deflection of the incident field. Here we show that a suitably structured incident field can achieve beamsplitter-like interactions with scattering particles. The resulting trap offers order-of-magnitude higher stiffness than the usual Gaussian trap in one axis, even when constrained to phase-only structuring. We demonstrate trapping of 3.5 to 10.0~$\mu$m silica spheres, achieving stiffness up to 27.5$\pm$4.1 times higher than is possible using Gaussian traps, and two orders of magnitude higher measurement signal-to-noise ratio. These results are highly relevant to many applications, including cellular manipulation, fluid dynamics, micro-robotics, and tests of fundamental physics. ",Enhanced optical trapping via structured scattering " The question of graviton cloning in the context of the bulk/boundary correspondence is considered. It is shown that multi-graviton theories can be obtained from products of large-N CFTs. No more than one interacting massless graviton is possible. There can be however, many interacting massive gravitons. This is achieved by coupling CFTs via multi-trace marginal or relevant perturbations. The geometrical structure of the gravitational duals of such theories is that of product manifolds with their boundaries identified. The calculational formalism is described and the interpretation of such theories is discussed. 
","Product CFTs, gravitational cloning, massive gravitons and the space of gravitational duals" Integrality in the Hodge theory of Calabi-Yau fourfolds is essential to find the vacuum structure and the anomaly cancellation mechanism of four dimensional F-theory compactifications. We use the Griffiths-Frobenius geometry and homological mirror symmetry to fix the integral monodromy basis in the primitive horizontal subspace of Calabi-Yau fourfolds. The Gamma class and supersymmetric localization calculations in the 2d gauged linear sigma model on the hemisphere are used to check and extend this method. The result allows us to study the superpotential and the Weil-Petersson metric and an associated tt* structure over the full complex moduli space of compact fourfolds for the first time. We show that integral fluxes can drive the theory to N=1 supersymmetric vacua at orbifold points and argue that fluxes can be chosen that fix the complex moduli of F-theory compactifications at gauge enhancements including those with U(1) factors. Given the mechanism it is natural to start with the most generic complex structure families of elliptic Calabi-Yau 4-fold fibrations over a given base. We classify these families in toric ambient spaces and among them the ones with heterotic duals. The method also applies to the creation of matter and Yukawa structures in F-theory. We construct two SU(5) models in F-theory with a Yukawa point that have a point on the base with an $E_8$-type singularity on the fiber and explore their embeddings in the global models. The explicit resolution of the singularity introduces a higher dimensional fiber and leads to novel features. ,Landscaping with fluxes and the E8 Yukawa Point in F-theory " We investigate how exotic differential structures may reveal themselves in particle physics. The analysis is based on A. Connes' construction of the standard model.
It is shown that, if one of the copies of the spacetime manifold is equipped with an exotic differential structure, compact objects of geometric origin may exist even if the spacetime is topologically trivial. Possible implications are discussed. An $SU(3)\otimes SU(2)\otimes U(1)$ gauge model is constructed. This model may not be realistic but it shows what kind of physical phenomena might be expected due to the existence of exotic differential structures on the spacetime manifold. ","Exotic smoothness, noncommutative geometry and particle physics" " The term fine-grained visual classification (FGVC) refers to classification tasks where the classes are very similar and the classification model needs to be able to find subtle differences to make the correct prediction. State-of-the-art approaches often include a localization step designed to help a classification network by localizing the relevant parts of the input images. However, this usually requires multiple iterations or passes through a full classification network or complex training schedules. In this work we present an efficient localization module that can be fused with a classification network in an end-to-end setup. On the one hand the module is trained by the gradient flowing back from the classification network. On the other hand, two self-supervised loss functions are introduced to increase the localization accuracy. We evaluate the new model on the three benchmark datasets CUB200-2011, Stanford Cars and FGVC-Aircraft and are able to achieve competitive recognition performance. ",Fine-Grained Visual Classification with Efficient End-to-end Localization " An interferometric technique is proposed for determining the spatial forms of the individual degrees of freedom through which a many body system can absorb energy from its environment. The method separates out the coherent excitations present at any given frequency; it is not necessary to infer modal content from spectra.
The system under test is excited with two external sources, which create generalized forces, and the fringe in the total power dissipated is measured as the relative phase between the sources is varied. If the complex fringe visibility is measured for different pairs of source locations, the anti-Hermitian part of the complex-valued non-local correlation tensor can be determined, which can then be decomposed to give the natural dynamical modes of the system and their relative responsivities. If each source in the interferometer creates a different kind of force, the spatial forms of the individual excitations that are responsible for cross-correlated response can be found. The technique is a generalization of holography because it measures the state of coherence to which the system is maximally sensitive. It can be applied across a wide range of wavelengths, in a variety of ways, to homogeneous media, thin films, patterned structures, and to components such as sensors, detectors and energy harvesting absorbers. ",Probing Quantum Correlation Functions Through Energy Absorption Interferometry The leading-order parton processes that produce a dilepton with large transverse momentum predict that the transverse polarization should increase with the transverse momentum for almost any choice of the quantization axis for the spin of the virtual photon. The rate of approach to complete transverse polarization depends on the choice of spin quantization axis. We propose axes that optimize that rate of approach. They are determined by the momentum of the dilepton and the direction of the jet that provides most of the balancing transverse momentum. ,Optimal spin-quantization axes for the polarization of dileptons with large transverse momentum " We discuss the evaporation and antievaporation instabilities of Nariai solution in extended theories of gravity. 
These phenomena were explicitly shown in several different extensions of General Relativity, suggesting that a universal cause is behind them. We show that evaporation and antievaporation originate from deformations of energy conditions on the Nariai horizon. Energy conditions get new contributions from the extra propagating degrees of freedom, which can provide extra focalizing or antifocalizing terms in the Raychaudhuri equation. We also show two explicit examples in $f(R)$-gravity and Gauss-Bonnet gravity. ",Evaporation/Antievaporation and energy conditions in alternative gravity " The experimental STM images for the CDW phase of the blue bronze RbMoO3 have been successfully explained on the basis of first-principles DFT calculations. Although the density of states near the Fermi level strongly concentrates in two of the three types of Mo atoms, Mo-II and Mo-III, the STM measurement mostly probes the contribution of the uppermost O atoms of the surface, associated with the Mo-IO6 octahedra. In addition, it is found that the surface concentration of Rb atoms plays a key role in determining the surface nesting vector and hence the periodicity of the CDW modulation. Significant experimental inhomogeneities of the b* surface component of the wavevector of the modulation, probed by STM, are reported. The calculated changes in the surface nesting vector are consistent with the observed experimental inhomogeneities. ",Analysis of the Scanning Tunneling Microscopy Images of the Charge Density Wave Phase in Quasi-one-dimensional Rb0.3MoO3 " The computation of the ground states of special multi-component Bose-Einstein condensates (BECs) can be formulated as an energy functional minimization problem with spherical constraints. It leads to a nonconvex quartic-quadratic optimization problem after suitable discretizations. First, we generalize the Newton-based methods for single-component BECs to the alternating minimization scheme for multi-component BECs.
Second, the globally convergent alternating Newton-Noda iteration (ANNI) is proposed. In particular, we prove the positivity-preserving property of ANNI under mild conditions. Finally, our analysis is applied to a class of more general ""multi-block"" optimization problems with spherical constraints. Numerical experiments are performed to evaluate the performance of the proposed methods for different multi-component BECs, including pseudo spin-1/2, anti-ferromagnetic spin-1 and spin-2 BECs. These results support our theory and demonstrate the efficiency of our algorithms. ",Newton-based alternating methods for the ground state of a class of multi-component Bose-Einstein condensates " (Abridged) A variety of formation scenarios has been proposed to explain the diversity of properties observed in bulges. Studying their intrinsic shape can help in constraining the dominant mechanism at the epochs of their assembly. The structural parameters of a magnitude-limited sample of 148 unbarred S0--Sb galaxies were derived in order to study the correlations between bulges and disks as well as the probability distribution function (PDF) of the intrinsic equatorial ellipticity of bulges. A new fitting algorithm (GASP2D) is presented to perform the two-dimensional photometric decomposition of the galaxy surface-brightness distribution. This was assumed to be the sum of the contributions of a bulge and a disk component characterized by elliptical and concentric isophotes with constant (but possibly different) ellipticity and position angles. Bulge and disk parameters of the sample galaxies were derived from the J-band images which were available in the Two Micron All Sky Survey. The PDF of the equatorial ellipticity of the bulges was derived from the distribution of the observed ellipticities of bulges and misalignments between bulges and disks. Strong correlations between the bulge and disk parameters were found.
About 80% of bulges in unbarred lenticular and early-to-intermediate spiral galaxies are not oblate but triaxial ellipsoids. Their mean axial ratio in the equatorial plane is $\langle B/A \rangle = 0.85$. There is no significant dependence of their PDF on morphology, light concentration, and luminosity. The interplay between bulge and disk parameters favors scenarios in which bulges assembled from mergers and/or grew over long times through disk secular evolution. However, all these mechanisms have to be tested against the derived distribution of bulge intrinsic ellipticities. ",Structural properties of disk galaxies I. The intrinsic ellipticity of bulges " We study the phase behaviour of a fluid composed of particles which interact via a pair potential that is repulsive for large inter-particle distances, is attractive at intermediate distances and is strongly repulsive at short distances (the particles have a hard core). As well as exhibiting gas-liquid phase separation, this system also exhibits phase transitions from the uniform fluid phases to modulated inhomogeneous fluid phases. Starting from a microscopic density functional theory, we develop an order parameter theory for the phase transition in order to examine in detail the phase behaviour. The amplitude of the density modulations is the order parameter in our theory. The theory predicts that the phase transition from the uniform to the modulated fluid phase can be either first order or second order (continuous). The phase diagram exhibits two tricritical points, joined to one another by the line of second order transitions. ",Theory for the phase behaviour of a colloidal fluid with competing interactions " The dynamical instability of new-born neutron stars is studied by evolving the linearized hydrodynamical equations. The neutron stars considered in this paper are those produced by the accretion-induced collapse of rigidly rotating white dwarfs.
A dynamical bar-mode (m=2) instability is observed when the ratio of rotational kinetic energy to gravitational potential energy $\beta$ of the neutron star is greater than the critical value $\beta_d \approx 0.25$. This bar-mode instability leads to the emission of gravitational radiation that could be detected by gravitational wave detectors. However, these sources are unlikely to be detected by LIGO II interferometers if the event rate is less than $10^{-6}$ per year per galaxy. Nevertheless, if a significant fraction of the pre-supernova cores are rapidly rotating, there would be a substantial number of neutron stars produced by the core collapse undergoing bar-mode instability. This would greatly increase the chance of detecting the gravitational radiation. ",Dynamical instability of new-born neutron stars as sources of gravitational radiation " We study the evolution of a small-scale emerging flux region (EFR) in the quiet Sun, from its emergence to its decay. We track processes and phenomena across all atmospheric layers, explore their interrelations and compare our findings with recent numerical modelling studies. We used imaging, spectral and spectropolarimetric observations from space-borne and ground-based instruments. The EFR appears next to the chromospheric network and shows all characteristics predicted by numerical simulations. The total magnetic flux of the EFR exhibits distinct evolutionary phases, namely an initial subtle increase, a fast increase and expansion of the region area, a more gradual increase, and a slow decay. During the initial stages, bright points coalesce, forming clusters of positive- and negative-polarity in a largely bipolar configuration. During the fast expansion, flux tubes make their way to the chromosphere, producing pressure-driven absorption fronts, visible as blueshifted chromospheric features. 
The connectivity of the quiet-Sun network gradually changes and part of the existing network forms new connections with the EFR. A few minutes after the bipole has reached its maximum magnetic flux, it brightens in soft X-rays, forming a coronal bright point that exhibits episodic brightenings on top of a long smooth increase. These coronal brightenings are also associated with surge-like chromospheric features, which can be attributed to reconnection with adjacent small-scale magnetic fields and the ambient magnetic field. The emergence of magnetic flux even at the smallest scales can be the driver of a series of energetic phenomena visible at various atmospheric heights and temperature regimes. Multi-wavelength observations reveal a wealth of mechanisms which produce diverse observable effects during the different evolutionary stages of these small-scale structures. ",Emergence of small-scale magnetic flux in the quiet Sun " Human social interactions are typically recorded as time-specific dyadic interactions, and represented as evolving (temporal) networks, where links are activated/deactivated over time. However, individuals can interact in groups of more than two people. Such group interactions can be represented as higher-order events of an evolving network. Here, we propose methods to characterize the temporal-topological properties of higher-order events to compare networks and identify their (dis)similarities. We analyzed 8 real-world physical contact networks, finding the following: a) Events of different orders close in time tend to be also close in topology; b) Nodes participating in many different groups (events) of a given order tend to be involved in many different groups (events) of another order; thus, individuals tend to be consistently active or inactive in events across orders; c) Local events that are close in topology are correlated in time, supporting observation a).
By contrast, in 5 collaboration networks, observation a) is almost absent; consistently, no evident temporal correlation of local events has been observed in collaboration networks. Such differences between the two classes of networks may be explained by the fact that physical contacts are proximity-based, in contrast to collaboration networks. Our methods may facilitate the investigation of how properties of higher-order events affect dynamic processes unfolding on them and possibly inspire the development of more refined models of higher-order time-varying networks. ",Temporal-topological properties of higher-order evolving networks " The separability of the spatial modes of a charged particle in a Penning trap in the presence of an environment is studied by means of the positive partial transpose (PPT) criterion. Assuming a weak Markovian environment, described by linear Lindblad operators, our results strongly suggest that the environmental coupling of the axial and cyclotron degrees of freedom does not lead to entanglement at experimentally realistic temperatures. We therefore argue that, apart from unavoidable decoherence, the presence of such an environment does not alter the effectiveness of recently suggested quantum information protocols in Penning traps, which are based on the combination of a spatial mode with the spin of the particle. ",Robustness of spatial Penning trap modes against environment-assisted entanglement " The Holst term represents an interesting addition to the Einstein-Cartan theory of gravity with torsion. When this term is present, the contact interactions between vector and axial vector fermion currents gain an extra parity-violating component. We re-derive this interaction using a simple representation for the Holst term. The same representation serves as a useful basis for the calculation of one-loop divergences in the theory with external fermionic currents and a cosmological constant.
Furthermore, we explore the possibilities of the on-shell version of the renormalization group and construct the equations for the running of dimensionless parameters related to currents and for the effective Barbero-Immirzi parameter. ",Quantum Einstein-Cartan theory with the Holst term " We show some facts regarding the question of whether, for any number $n$, the length of the shortest Addition Multiplications Chain (AMC) computing $n$ is polynomial in the length of the shortest division-free Straight Line Program (SLP) that computes $n$. If the answer to this question is ""yes"", then we can show a stronger upper bound for $\mathrm{PosSLP}$, the important problem which essentially captures the notion of efficient computation over the reals. If the answer is ""no"", then this would demonstrate how subtraction helps generate integers super-polynomially faster, given that addition and multiplication can be done in unit time. In this paper, we show that, for almost all numbers, AMCs and SLPs need the same asymptotic length for computation. However, for one specific form of numbers, SLPs are strictly more powerful than AMCs by at least one step of computation. ",Subtraction makes computing integers faster " We present exact calculations of the zero-temperature partition function $Z(G,q,T=0)$ and ground-state degeneracy $W(\{G\},q)$ for the $q$-state Potts antiferromagnet on a number of families of graphs $G$ for which (generalizing $q$ from ${\mathbb Z}_+$ to ${\mathbb C}$) the boundary ${\cal B}$ of regions of analyticity of $W$ in the complex $q$ plane is noncompact, passing through $z=1/q=0$. For these types of graphs, since the reduced function $W_{red.}=q^{-1}W$ is nonanalytic at $z=0$, there is no large--$q$ Taylor series expansion of $W_{red.}$. The study of these graphs thus gives insight into the conditions for the validity of the large--$q$ expansions. It is shown how such (families of) graphs can be generated from known families by homeomorphic expansion.
",Ground State Entropy of Potts Antiferromagnets: Homeomorphic Classes with Noncompact W Boundaries " A graph is one-ended if it contains a ray (a one way infinite path) and whenever we remove a finite number of vertices from the graph then what remains has only one component which contains rays. A vertex $v$ {\em dominates} a ray in the end if there are infinitely many paths connecting $v$ to the ray such that any two of these paths have only the vertex $v$ in common. We prove that if a one-ended graph contains no ray which is dominated by a vertex and no infinite family of pairwise disjoint rays, then it has a tree-decomposition such that the decomposition tree is one-ended and the tree-decomposition is invariant under the group of automorphisms. This can be applied to prove a conjecture of Halin from 2000 that the automorphism group of such a graph cannot be countably infinite and solves a recent problem of Boutin and Imrich. Furthermore, it implies that every transitive one-ended graph contains an infinite family of pairwise disjoint rays. ",On tree-decompositions of one-ended graphs " In this paper, we study the back-end of simultaneous localization and mapping (SLAM) problem in deforming environment, where robot localizes itself and tracks multiple non-rigid soft surface using its onboard sensor measurements. An elaborate analysis is conducted on conventional deformation modelling method, Embedded Deformation (ED) graph. We demonstrate and prove that the ED graph widely used in such scenarios is unobservable and leads to multiple solutions unless suitable priors are provided. Example as well as theoretical prove are provided to show the ambiguity of ED graph and camera pose. In modelling non-rigid scenario with ED graph, motion priors of the deforming environment is essential to separate robot pose and deforming environment. The conclusion can be extrapolated to any free form deformation formulation. 
To resolve the observability issue, this research proposes a preliminary deformable SLAM approach to estimate the robot pose in complex environments that exhibit regular motion. A strategy that approximates the deformed shape using a linear combination of several previous shapes is proposed to avoid the ambiguity between the robot movement and the rigid and non-rigid motions of the environment. A Fisher information matrix rank analysis with a base case is discussed to prove its effectiveness. Moreover, the proposed algorithm is validated extensively on Monte Carlo simulations and real experiments. It is demonstrated that the new algorithm significantly outperforms conventional rigid SLAM and ED-based SLAM, especially in scenarios with large deformation. ",An observable time series based SLAM algorithm for deforming environment " The basic method of rewriting for words in a free monoid given a monoid presentation is extended to rewriting for paths in a free category given a `Kan extension presentation'. This is related to work of Carmody-Walters on the Todd-Coxeter procedure for Kan extensions, but allows for the output data to be infinite, described by a language. The result also allows rewrite methods to be applied in a greater range of situations and examples, in terms of induced actions of monoids, categories, groups or groupoids. ",Using Rewriting Systems to Compute Kan Extensions and Induced Actions of Categories " In the context of a time-varying multiuser multiple-input-multiple-output (MIMO) system, we design recursive least squares based adaptive predictors and differential quantizers to minimize the sum mean squared error of the overall system. Using the fact that the scalar entries of the left singular matrix of a Gaussian MIMO channel become almost Gaussian distributed even for a small number of transmit antennas, we perform adaptive differential quantization of the relevant singular matrix entries.
Compared to the algorithms in the existing differential feedback literature, our proposed quantizer provides three advantages: first, the controller parameters are flexible enough to adapt themselves to different vehicle speeds; second, the model is backward adaptive, i.e., the base station and receiver can agree upon the predictor and variance estimator coefficients without explicit exchange of the parameters; third, it can accurately model the system even when the correlation between two successive channel samples becomes as low as 0.05. Our simulation results show that our proposed method can reduce the required feedback by several kilobits per second for vehicle speeds up to 20 km/h (channel tracker) and 10 km/h (singular vector tracker). The proposed system also outperforms a fixed quantizer, with the same feedback overhead, in terms of bit error rate up to 30 km/h. ",Adaptive Differential Feedback in Time-Varying Multiuser MIMO Channels " Let $G$ be an $n$-vertex graph with the maximum degree $\Delta$ and the minimum degree $\delta$. We give algorithms with complexity $O(1.3158^{n-0.7~\Delta(G)})$ and $O(1.32^{n-0.73~\Delta(G)})$ that determine whether $G$ is 3-colorable, when $\delta(G)\geq 8$ and $\delta(G)\geq 7$, respectively. ",Improved algorithm to determine 3-colorability of graphs with the minimum degree at least 7 We discuss how saturation of unitarity would change the phase structure of hadronic matter at very high temperatures, emphasizing the role of the vacuum state with spontaneously broken chiral symmetry. ,Unitarity: confinement and collective effects in hadron interactions " We evaluate the thermal corrections to the generalized Gaussian effective potential. We carry out the calculations of the lowest order corrections in the case of self-interacting scalar fields in one and two spatial dimensions, and study the restoration of the symmetry at high temperatures.
",Generalized Gaussian Effective Potential: Thermal Corrections " Flux analysis is a class of constraint-based approaches to the study of biochemical reaction networks: they are based on determining the reaction flux configurations compatible with given stoichiometric and thermodynamic constraints. One of its main areas of application is the study of cellular metabolic networks. We briefly and selectively review the main approaches to this problem and then, building on recent work, we provide a characterization of the productive capabilities of the metabolic network of the bacterium E.coli in a specified growth medium in terms of the producible biochemical species. While a robust and physiologically meaningful production profile clearly emerges (including biomass components, biomass products, waste etc.), the underlying constraints still allow for significant fluctuations even in key metabolites like ATP and, as a consequence, apparently lay the ground for very different growth scenarios. ","The solution space of metabolic networks: producibility, robustness and fluctuations" " Five-hundred-meter Aperture Spherical radio Telescope (FAST) is a Chinese mega-science project to build the largest single dish radio telescope in the world. Its innovative engineering concept and design pave a new road to realize a huge single dish in the most effective way. FAST also represents Chinese contribution in the international efforts to build the square kilometer array (SKA). Being the most sensitive single dish radio telescope, FAST will enable astronomers to jump-start many science goals, for example, surveying the neutral hydrogen in the Milky Way and other galaxies, detecting faint pulsars, looking for the first shining stars, hearing the possible signals from other civilizations, etc. The idea of sitting a large spherical dish in a karst depression is rooted in Arecibo telescope. 
FAST is an Arecibo-type antenna with three outstanding aspects: the karst depression used as the site, which is large enough to host the 500-meter telescope and deep enough to allow a zenith angle of 40 degrees; the active main reflector, which corrects for spherical aberration on the ground to achieve full polarization and a wide band without involving complex feed systems; and the light-weight feed cabin driven by cables and a servomechanism, plus a parallel robot as a secondary adjustable system, to move with high precision. The feasibility studies for FAST have been carried out for 14 years, supported by the Chinese and world astronomical communities. The project time is 5.5 years from the commencement of work in March of 2011, and first light is expected in 2016. This review intends to introduce the FAST project with emphasis on the recent progress since 2006. In this paper, the subsystems of FAST are described in modest detail, followed by discussions of the fundamental science goals and examples of early science projects. ",The Five-Hundred-Meter Aperture Spherical Radio Telescope (FAST) Project " The Arctic sea ice represents an important energy reservoir for the climate of the northern hemisphere. The shrinking of the polar ice in the past decades decreases the stored energy and raises serious concerns about future climate changes.[1-4] Model calculations of the present authors [5,6] suggest that half of the global warming during the past fifty years is directly related to the retreat of the sea ice, while the cause is not well understood, e.g. the role of surface pollution [7-10]. We have analysed the reported annual melting and freezing data of the northern sea ice in the years 1979 to 2018 [11] to gain some insight. Two features can be deduced from our simple model: (i) recent results [12,13] are confirmed, namely that approximately 60 % of the loss of sea ice stems from energy transport to the Arctic region.
(ii) We find evidence that the remaining part of the ice retreat originates from an increasing surface absorption of solar radiation, obviously due to the rising surface pollution of the sea ice. While the phenomenon was previously considered by several authors in a qualitative way, our analysis contributes semi-quantitative information on the situation. We estimate that the relevant fall-out of light-absorbing aerosols onto the sea ice increased by 17 +/- 5 % during the past fifty years. This results in the deposition of an additional 3 +/- 1 % of solar radiation in the melting region, which accounts for the ice retreat. Recalling the important role of the ice loss for the terrestrial climate,[3,5,9] the precipitation of air pollution in the Arctic seems to be an important factor for global warming. ",An Estimate of the Surface Pollution of the Arctic Sea Ice " Nonparametric regression models have recently surged in their power and popularity, accompanying the trend of increasing dataset size and complexity. While these models have proven their predictive ability in empirical settings, they are often difficult to interpret and do not address the underlying inferential goals of the analyst or decision maker. In this paper, we propose a modular two-stage approach for creating parsimonious, interpretable summaries of complex models which allows freedom in the choice of modeling technique and the inferential target. In the first stage, a flexible model is fit which is believed to be as accurate as possible. In the second stage, lower-dimensional summaries are constructed by projecting draws from the distribution onto simpler structures. These summaries naturally come with valid Bayesian uncertainty estimates. Further, since we use the data only once to move from prior to posterior, these uncertainty estimates remain valid across multiple summaries and after iteratively refining a summary.
We apply our method and demonstrate its strengths across a range of simulated and real datasets. Code to reproduce the examples shown is available at github.com/spencerwoody/ghost ",Model interpretation through lower-dimensional posterior summarization " We study the accretion flows from the circumbinary disks onto the supermassive binary black holes on a subparsec scale of the galactic center, using a smoothed particle hydrodynamics (SPH) code. Simulation models are presented in four cases: a circular binary with equal and unequal masses, and an eccentric binary with equal and unequal masses. We find that circum-black-hole disks are formed around each black hole regardless of the simulation parameters. A two-step mechanism causes an accretion flow from the circumbinary disk onto the supermassive binary black holes: First, the tidally induced elongation of the circumbinary disk triggers mass inflow towards the two points on the circumbinary disk closest to the black holes. Then, the gas increasingly accumulates at these two points owing to the gravitational attraction of the black holes. Second, when the gas can pass across the maximum loci of the effective binary potential, it starts to overflow via these two points and falls freely onto each black hole. In circular binaries, the gas continues to be supplied from the circumbinary disk (i.e. the gap between the circumbinary disk and the binary black hole is always closed). In eccentric binaries, the mass supply undergoes periodic on/off transitions during one orbital period because of the variation of the periodic potential. The gap starts to close after the apastron and to open again after the next periastron passage. Due to these gap closing/opening cycles, the mass-capture rates are eventually strongly phase dependent. This could provide an observable diagnostic for the presence of supermassive binary black holes in merged galactic nuclei.
",Binary Black Hole Accretion Flows in Merged Galactic Nuclei " We study the persistence of eigenvalues and eigenvectors of perturbed eigenvalue problems in Hilbert spaces. We assume that the unperturbed problem has a nontrivial kernel of odd dimension and we prove a Rabinowitz-type global continuation result. The approach is topological, based on a notion of degree for oriented Fredholm maps of index zero between real differentiable Banach manifolds. ",Global persistence of the unit eigenvectors of perturbed eigenvalue problems in Hilbert spaces: the odd multiplicity case " Artin solved Hilbert's 17th problem, proving that a real polynomial in $n$ variables that is positive semidefinite is a sum of squares of rational functions, and Pfister showed that only $2^n$ squares are needed. In this paper, we investigate situations where Pfister's theorem may be improved. We show that a real polynomial of degree $d$ in $n$ variables that is positive semidefinite is a sum of $2^n-1$ squares of rational functions if $d\leq 2n-2$. If $n$ is even, or equal to $3$ or $5$, this result also holds for $d=2n$. ",On Hilbert's 17th problem in low degree " Motivated by the possibility of ${\cal S}_{\phi K_S}<0$, we study the implications for $B_s$ meson system. In a specific model that realizes ${\cal S}_{\phi K_S}<0$ with large $s$-$b$ mixing, right-handed dynamics and a new CP phase, we present predictions for CP asymmetries in $B_s\to J/\psi\phi$, $K^+K^-$ and $\phi\gamma$ decays. Even if the measurement of time-dependent CP asymmetry becomes hampered by very fast $B_s$ oscillation, a finite difference between the decay rates of $B_s$ mass eigenstates may enable the studies of CP violations with untagged data samples. Thus, studies of CP violation in the $B_s$ system would remain useful for the extraction of new physics information. 
",Effect of Supersymmetric Right-handed Flavor Mixing on $B_s$ decays We present a polynomial quantum algorithm for the Abelian stabilizer problem which includes both factoring and the discrete logarithm. Thus we extend famous Shor's results. Our method is based on a procedure for measuring an eigenvalue of a unitary operator. Another application of this procedure is a polynomial quantum Fourier transform algorithm for an arbitrary finite Abelian group. The paper also contains a rather detailed introduction to the theory of quantum computation. ,Quantum measurements and the Abelian Stabilizer Problem " Boson stars are descendants of the so-called geons of Wheeler, except that they are built from scalar particles instead of electromagnetic fields. If scalar fields exist in nature, such localized configurations kept together by their self-generated gravitational field can form within Einstein's general relativity. In the case of complex scalar fields, an absolutely stable branch of such non-topological solitons with conserved particle number exists. Our present surge stems from the speculative possibility that these compact objects could provide a considerable fraction of the non-baryonic part of dark matter. In any case, they may serve as a convenient ""laboratory"" for studying numerically rapidly rotating bodies in general relativity and the generation of gravitational waves. ",Boson Stars: Early History and Recent Prospects " Genetic programming (GP) is an evolutionary computation technique to solve problems in an automated, domain-independent way. Rather than identifying the optimum of a function as in more traditional evolutionary optimization, the aim of GP is to evolve computer programs with a given functionality. While many GP applications have produced human competitive results, the theoretical understanding of what problem characteristics and algorithm properties allow GP to be effective is comparatively limited. 
Compared with traditional evolutionary algorithms for function optimization, GP applications are further complicated by two additional factors: the variable-length representation of candidate programs, and the difficulty of evaluating their quality efficiently. Such difficulties considerably impact the runtime analysis of GP, where space complexity also comes into play. As a result, initial complexity analyses of GP have focused on restricted settings such as the evolution of trees with given structures or the estimation of solution quality using only a small polynomial number of input/output examples. However, the first computational complexity analyses of GP for evolving proper functions with defined input/output behavior have recently appeared. In this chapter, we present an overview of the state of the art. ",Computational Complexity Analysis of Genetic Programming " We study the propagation of quasi-discrete microwave solitons in a nonlinear left-handed coplanar waveguide coupled with split ring resonators. By considering the relevant transmission line analogue, we derive a nonlinear lattice model which is studied analytically by means of a quasi-discrete approximation. We derive a nonlinear Schr{\""o}dinger equation, and find that the system supports bright envelope soliton solutions in a relatively wide subinterval of the left-handed frequency band. We perform systematic numerical simulations, in the framework of the nonlinear lattice model, to study the propagation properties of the quasi-discrete microwave solitons. Our numerical findings are in good agreement with the analytical predictions, and suggest that the predicted structures are quite robust and may be observed in experiments. 
",Quasi-discrete microwave solitons in a split ring resonator-based left-handed coplanar waveguide " In this work, the analysis of oblique anti-plane shear waves propagation and scattering in low frequency resonant micro-structured layered media with viscoelastic constituent layers is presented. The band structure of the infinitely periodic systems and scattering off a finite thickness slab of such media are determined using the transfer matrix method. A consistent dynamic field homogenization approach is applied, in which the micro-scale field equations are integrated and the overall macro-scale quantities are defined to be compatible with these integral forms. A reduced set of constitutive tensors is presented for general asymmetric repeating unit cells (RUC), utilizing the proposed homogenized macro-scale quantities combined with Onsager's principle and presumed material form of elastodynamic reciprocity. This set can be further restricted by studying the form of the dispersion equation leading to a unique constitutive tensor. It is shown that for an asymmetric RUC, the full constitutive tensor are required to match the scattering and band structure of the micro-structured media, but all the off-diagonal parameters vanish for a symmetric RUC. Therefore, it is possible to create an equivalent homogenized representation with a uniquely determined diagonal constitutive tensor for a symmetric RUC, though, as is the case also with asymmetric RUCs, all non-zero components will be wave-vector dependent. Numerical examples are presented to demonstrate the application and consistency of the proposed method. All the diagonal terms are converging to their appropriate Voigt or Reuss averages at the long-wavelength limit for all wave directions. The wave-vector dependent nature of the off-diagonal coupling constants can still be observed even in this limit. 
The conditions for lossy or lossless systems are presented and are shown to impose weak requirements on overall constitutive tensors. ",Overall dynamic properties of locally resonant viscoelastic layered media based on consistent field integration for oblique anti-plane shear waves " Understanding of star formation in the Universe is advancing through submillimeter-wave observations of the Milky Way and other galaxies. Technological constraints on such observations require a mixture of telescope sizes and observational techniques. For some purposes, small submillimeter-wave telescopes are more sensitive than large ones. The Antarctic Submillimeter Telescope and Remote Observatory (AST/RO) is a small, wide-field instrument located at an excellent observatory site. By observing the Milky Way and Magellanic Clouds at arcminute resolution, it provides a context for interpreting observations of distant galaxies made by large interferometric telescopes. AST/RO also provides hands-on training in submillimeter technology and allows testing of novel detector systems. ",AST/RO: A Small Submillimeter Telescope at the South Pole " Storage systems using Peer-to-Peer (P2P) architecture are an alternative to the traditional client-server systems. They offer better scalability and fault tolerance while at the same time eliminating the single point of failure. The nature of P2P storage systems (which consist of heterogeneous nodes) introduces, however, data placement challenges that create implementation trade-offs (e.g., between performance and scalability). The existing Kademlia-based DHT data placement method stores data at the closest node, where the distance is measured by a bit-wise XOR operation between the data and a given node. This approach is highly scalable because it does not require global knowledge for placing data nor for the data retrieval. 
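As an illustrative aside, the XOR closest-node rule described in the abstract above can be sketched in a few lines of Python. The short integer identifiers and the helper names `xor_distance`/`closest_node` are hypothetical illustrations, not part of the RPDP paper (real Kademlia IDs are 160-bit hashes):

```python
def xor_distance(a: int, b: int) -> int:
    # Kademlia distance metric: bit-wise XOR of two identifiers,
    # interpreted as an unsigned integer
    return a ^ b

def closest_node(key: int, node_ids: list[int]) -> int:
    # Place (or look up) data at the node whose ID minimises the
    # XOR distance to the data key -- no global knowledge needed,
    # since any node can evaluate the same metric locally.
    return min(node_ids, key=lambda n: xor_distance(key, n))
```

For example, with nodes `[0b0001, 0b0100, 0b1100]` and key `0b0101`, the node `0b0100` wins (distance `0b0001`), because XOR weights disagreement in high-order bits most heavily.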
It does not, however, consider the heterogeneous performance of the nodes, which can result in imbalanced resource usage affecting the overall latency of the system. Other works implement criteria-based selection that addresses the heterogeneity of nodes, but these often cause subsequent data retrieval to require global knowledge of where the data is stored. This paper introduces Residual Performance-based Data Placement (RPDP), a novel data placement method based on the dynamic temporal residual performance of data nodes. RPDP places data at the most appropriate nodes based on their throughput and latency, with the aim of achieving lower overall latency by balancing data distribution with respect to the individual performance of nodes. RPDP relies on a Kademlia-based DHT with a modified data structure that allows data to be subsequently retrieved without the need of global knowledge. The experimental results indicate that RPDP reduces the overall latency of the baseline Kademlia-based P2P storage system (by 4.87%) and it also reduces the variance of latency among the nodes, with minimal impact on the data retrieval complexity. ",RPDP: An Efficient Data Placement based on Residual Performance for P2P Storage Systems " Mace4 is a program that searches for finite models of first-order formulas. For a given domain size, all instances of the formulas over the domain are constructed. The result is a set of ground clauses with equality. Then, a decision procedure based on ground equational rewriting is applied. If satisfiability is detected, one or more models are printed. Mace4 is a useful complement to first-order theorem provers, with the prover searching for proofs and Mace4 looking for countermodels, and it is useful for work on finite algebras. Mace4 performs better on equational problems than did our previous model-searching program Mace2. 
",Mace4 Reference Manual and Guide " We study a control problem for queueing systems where customers may return for additional episodes of service after their initial service completion. At each service completion epoch, the decision maker can choose to reduce the probability of return for the departing customer but at a cost that is convex increasing in the amount of reduction in the return probability. Other costs are incurred as customers wait in the queue and every time they return for service. Our primary motivation comes from post-discharge Quality Improvement (QI) interventions (e.g., follow up phone-calls, appointments) frequently used in a variety of healthcare settings to reduce unplanned hospital readmissions. Our objective is to understand how the cost of interventions should be balanced with the reductions in congestion and service costs. To this end, we consider a fluid approximation of the queueing system and characterize the structure of optimal long-run average and bias-optimal transient control policies for the fluid model. Our structural results motivate the design of intuitive surge protocols whereby different intensities of interventions (corresponding to different levels of reduction in the return probability) are provided based on the congestion in the system. Through extensive simulation experiments, we study the performance of the fluid policy for the stochastic system and identify parameter regimes where it leads to significant cost savings compared to a fixed long-run average optimal policy that ignores holding costs and a simple policy that uses the highest level of intervention whenever the queue is non-empty. In particular, we find that in a parameter regime relevant to our motivating application, dynamically adjusting the intensity of interventions could result in up to 25.4% reduction in long-run average cost and 33.7% in finite-horizon costs compared to the simple aggressive policy. 
",Dynamic Control of Service Systems with Returns: Application to Design of Post-Discharge Hospital Readmission Prevention Programs " In this paper we present a complete SLAM system for RGB-D cameras, namely RGB-iD SLAM. The presented approach is a dense direct SLAM method with the main characteristic of working with the depth maps in inverse depth parametrisation for the routines of dense alignment or keyframe fusion. The system consists in 2 CPU threads working in parallel, which share the use of the GPU for dense alignment and keyframe fusion routines. The first thread is a front-end operating at frame rate, which processes every incoming frame from the RGB-D sensor to compute the incremental odometry and integrate it in a keyframe which is changed periodically following a covisibility-based strategy. The second thread is a back-end which receives keyframes from the front-end. This thread is in charge of segmenting the keyframes based on their structure, describing them using Bags of Words, trying to find potential loop closures with previous keyframes, and in such case perform pose-graph optimisation for trajectory correction. In addition, our system allows is able to compute the odometry both with unregistered and registered depth maps, allowing to use customised calibrations of the RGB-D sensor. As a consequence in the paper we also propose a detailed calibration pipeline to compute customised calibrations for particular RGB-D cameras. The experiments with our approach in the TUM RGB-D benchmark datasets show results superior in accuracy to the state-of-the-art in many of the sequences. The code has been made available on-line for research purposes https://github.com/dangut/RGBiD-SLAM. 
",RGBiD-SLAM for Accurate Real-time Localisation and 3D Mapping We prove a determinantal type formula to compute the irreducible characters of the general Lie superalgebra $\mathfrak{gl}(m|1)$ in terms of the characters of the symmetric powers of the fundamental representation and their duals. This formula was conjectured by J. van der Jeugt and E. Moens for the Lie superalgebra $\frak{gl}(m|n)$ and generalizes the well-known Jacobi-Trudi formula. ,Jacobi-Trudi type formula for character of irreducible representations of $\frak{gl}(m|1)$ We develop a method for constructing exact cosmological solutions in brane world cosmology. New classes of cosmological solutions on Randall-Sandrum brane are obtained. The superpotential and Hubble parameter are represented in quadratures. These solutions have inflationary phases under general assumptions and also describe an exit from the inflationary phase without a fine tuning of the parameters. Another class solutions can describe the current phase of accelerated expansion with or without possible exit from it. ,New exact cosmologies on the brane " The effect of the rotation on the turbulent mixing of two miscible fluids of small contrasting density, produced by Faraday instability, is investigated using direct numerical simulations (DNS). We demonstrate that at lower forcing amplitudes, the t.k.e. increases with an increase in f till (f/\omega\right)^2<0.25, where \omega is the forcing frequency, during the sub-harmonic instability phase. The increase in t.k.e. increases B_V, which increases the total potential energy (TPE). A portion of TPE is the APE. Some parts of APE can convert to $t.k.e.$ via B_V, whereas the rest converts to internal energy, increasing BPE through \phi_i. The remaining TPE also converts to BPE through the diapycnal flux \phi_d resulting in irreversible mixing. With the saturation of the instability, irreversible mixing ceases. 
When $\left(f/\omega\right)^2 > 0.25$, the Coriolis force significantly delays the onset of the sub-harmonic instabilities. During this period, the initial concentration profile diffuses to increase TPE, which is eventually expended in BPE. The strong rotational effects suppress the t.k.e. Therefore, $B_V$ and APE become small, and the bulk of the TPE is expended in BPE. Since the instability never saturates for $\left(f/\omega\right)^2 > 0.25$, the $B_V$ remains non-zero, resulting in a continuous increase in TPE. Conversion of TPE to BPE via $\phi_d$ continues, and we find prolonged irreversible mixing. At higher forcing amplitudes, the stabilizing effect of rotation is negligible, and the turbulence is less intense and short-lived. Therefore, the irreversible mixing phenomenon also ends quickly for $\left(f/\omega\right)^2<0.25$. However, when $\left(f/\omega\right)^2>0.25$, continuous mixing is observed. We find that the turbulent mixing is efficient at lower forcing amplitudes and rotation rates of $\left(f/\omega\right)^2 > 0.25$. ",Effect of rotation on turbulent mixing driven by the Faraday instability " " Supernova remnants (SNRs) have a variety of overall morphologies as well as rich structures over a wide range of scales. Quantitative study of these structures can potentially reveal fluctuations of density and magnetic field originating from the interaction with the ambient medium and turbulence in the expanding ejecta. We have used $1.5$ GHz (L band) and $5$ GHz (C band) VLA data to estimate the angular power spectrum $C_{\ell}$ of the synchrotron emission fluctuations of the Kepler SNR. This is done using the novel, visibility-based, Tapered Gridded Estimator of $C_{\ell}$. We have found that, for $\ell = (1.9 - 6.9) \times 10^{4}$, the power spectrum is a broken power law with a break at $\ell = 3.3 \times 10^{4}$, and power-law indices of $-2.84\pm 0.07$ and $-4.39\pm 0.04$ before and after the break, respectively. 
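As an illustrative aside, the broken power-law form quoted for $C_{\ell}$ can be sketched as a small Python function. The overall amplitude normalisation and the function name `c_ell` are placeholders; only the break multipole and the two slopes are taken from the abstract:

```python
def c_ell(ell: float, ell_break: float = 3.3e4,
          amp: float = 1.0,
          slope_lo: float = -2.84,   # index before the break
          slope_hi: float = -4.39) -> float:
    # Piecewise power law, continuous at the break multipole:
    # normalising by ell_break makes the two branches meet at amp.
    if ell <= ell_break:
        return amp * (ell / ell_break) ** slope_lo
    return amp * (ell / ell_break) ** slope_hi
```

Continuity at the break follows from both branches evaluating to `amp` at `ell = ell_break`; beyond the break the spectrum falls off with the steeper index.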
The slope $-2.84$ is consistent with 2D Kolmogorov turbulence and earlier measurements for the Tycho SNR. We interpret the break to be related to the shell thickness of the SNR ($0.35$ pc), which approximately matches $\ell = 3.3 \times 10^{4}$ (i.e., $0.48$ pc). However, for $\ell > 6.9 \times 10^{4}$, the estimated $C_{\ell}$ of the L band is likely to have a dominant contribution from the foregrounds, while for the C band the power-law slope $-3.07\pm 0.02$ is roughly consistent with 3D Kolmogorov turbulence like that observed at large $\ell$ for the Cas A and Crab SNRs. ",A study of Kepler supernova remnant: angular power spectrum estimation from radio frequency data " A standard assumption adopted in the multi-armed bandit (MAB) framework is that the mean rewards are constant over time. This assumption can be restrictive in the business world as decision-makers often face an evolving environment where the mean rewards are time-varying. 
In this paper, we consider a non-stationary MAB model with $K$ arms whose mean rewards vary over time in a periodic manner. The unknown periods can be different across arms and scale with the length of the horizon $T$ polynomially. We propose a two-stage policy that combines Fourier analysis with a confidence-bound-based learning procedure to learn the periods and minimize the regret. In stage one, the policy correctly estimates the periods of all arms with high probability. In stage two, the policy explores the periodic mean rewards of arms using the periods estimated in stage one and exploits the optimal arm in the long run. We show that our learning policy incurs a regret upper bound $\tilde{O}(\sqrt{T\sum_{k=1}^K T_k})$ where $T_k$ is the period of arm $k$. Moreover, we establish a general lower bound $\Omega(\sqrt{T\max_{k}\{ T_k\}})$ for any policy. Therefore, our policy is near-optimal up to a factor of $\sqrt{K}$. ",Learning and Optimization with Seasonal Patterns " In this paper, we prove that there is a strongly universal cellular automaton on the heptagrid with six states which is rotation invariant. This improves a previous paper of the author with 7 states. Here, the structures are modified and the number of rules is much smaller. ",A strongly universal cellular automaton on the heptagrid with six states " We evaluate the allowed $\beta^-$-decay properties of nuclei with $Z = 8 - 15$ systematically under the framework of the nuclear shell model with the use of the valence space Hamiltonians derived from modern $ab~initio$ methods, such as in-medium similarity renormalization group and coupled-cluster theory. For comparison we also show results obtained with a fitted interaction derived from the chiral effective field theory and the phenomenological USDB interaction. 
We have performed calculations for O $\rightarrow$ F, F $\rightarrow$ Ne, Ne $\rightarrow$ Na, Na $\rightarrow$ Mg, Mg $\rightarrow$ Al, Al $\rightarrow$ Si, Si $\rightarrow$ P and P $\rightarrow$ S transitions. Theoretical results for $B(GT)$, log$ft$ values and half-lives are discussed and compared with the available experimental data. ",Shell model results for nuclear $\beta^-$-decay properties of $sd$ shell nuclei " We present near-infrared spectroscopy of the NLS1 galaxy PHL1092 (z=0.394), the strongest FeII emitter ever reported, combined with optical and UV data. We modeled the continuum and the broad emission lines using a power-law plus a black body function and Lorentzian functions, respectively. The strength of the FeII emission was estimated using the latest FeII templates in the literature. We re-estimate the ratio between the FeII complex centered at 4570Ang and the broad component of H-Beta, R_FeII, obtaining a value of 2.58, nearly half of that previously reported (R_FeII=6.2), but still placing PHL1092 among extreme FeII emitters. The FWHMs found for low-ionization lines are very similar (FWHM~1200km/s), but significantly narrower than those of the hydrogen lines (FWHM(H-Beta)~1900km/s). Our results suggest that the FeII emission in PHL1092 follows the same trend as in normal FeII emitters, with FeII being formed in the outer portion of the BLR and co-spatial with CaII and OI, while H-Beta is formed closer to the central source. The flux ratio between the UV lines suggests high densities, log(n_H)~13.0 cm^{-3}, and a low ionization parameter, log(U)~-3.5. The flux excess found in the FeII bump at 9200Ang after the subtraction of the NIR FeII template and its comparison with optical FeII emission suggests that the above physical conditions optimize the efficiency of the Ly-Alpha fluorescence process, which was found to be the main excitation mechanism in the FeII production. We discuss the role of PHL1092 in the Eigenvector 1 context. 
",Panchromatic Properties of the Extreme FeII Emitter PHL 1092 " We report on the detection of deuterated molecular hydrogen, HD, at $z = 0.18$. HD and H$_{\rm 2}$ are detected in HST/COS data of a low metallicity ($Z \sim 0.07Z_\odot$) damped Ly$\alpha$ system at $z = 0.18562$ toward QSO B0120$-$28, with log $N$(H I) = 20.50 $\pm$ 0.10. Four absorption components are clearly resolved in H$_{\rm 2}$ while two components are resolved in HD; the bulk of the molecular hydrogen is associated with the components traced by HD. We find total column densities log $N$(HD) = 14.82 $\pm$ 0.15 and log $N$(H$_{\rm 2}$) = 20.00 $\pm$ 0.10. This system has a high molecular fraction, $f$(H$_{\rm 2}$) = 0.39 $\pm$ 0.10 and a low HD to H$_{\rm 2}$ ratio, log (HD/2H$_{\rm 2}$) $= -5.5 \pm 0.2$ dex. The excitation temperature, $T_{01} = 65 \pm 2$ K, in the component containing the bulk of the molecular gas is lower than in other DLAs. These properties are unlike those in other higher redshift DLA systems known to contain HD, but are consistent with what is observed in dense clouds in the Milky Way. ",HST/COS Detection of Deuterated Molecular Hydrogen in a DLA at z = 0.18 " We propose a remedy for the unphysical oscillations arising in the current distribution of carbon nanotube and imperfectly conducting antennas center-driven by a delta-function generator when the approximate kernel is used. We do so by formulating an effective current, which was studied in detail in a 2011 and a 2013 paper for a perfectly conducting linear cylindrical antenna of infinite length, with application to the finite-length antenna. We discuss our results in connection with the perfectly conducting antenna, providing perturbative corrections to the current distribution for a large conductance, as well as presenting a delta-sequence and the field of a Hertzian dipole for the effective current in the limit of vanishing conductance. 
To that end, we employ both analytical tools and numerical methods to compare with experimental results. ",An Effective-Current Approach for Hall\'{e}n's Equation in Center-Fed Dipole Antennas with Finite Conductivity " In these proceedings I cover the latest results on the production and decay of the recently discovered Higgs boson. While the spin and properties of the new boson, such as its mass and couplings to bosons and fermions, are covered in a separate report, I focus on individual results in the main channels we use to study the properties of the new boson and to search for its possible cousins, with emphasis on the latest results from the LHC and the Tevatron collaborations. ",Higgs Bosons in the Standard Model and Beyond " " A new generation of Wireless Local Area Networks (WLANs) will make its appearance in the market in the forthcoming years based on the amendments to the IEEE 802.11 standards that have recently been approved or are under development. Examples of the most expected ones are the IEEE 802.11aa (Robust Audio Video Transport Streaming), IEEE 802.11ac (Very-high throughput at < 6 GHz), IEEE 802.11af (TV White Spaces) and IEEE 802.11ah (Machine-to-Machine communications) specifications. The aim of this survey is to provide a comprehensive overview of these novel technical features and the related open technical challenges that will drive the future WLAN evolution. In contrast to other IEEE 802.11 surveys, this is a use-case-oriented study. Specifically, we first describe the three key scenarios in which next-generation WLANs will have to operate. We then review the most relevant amendments for each of these use cases focusing on the additional functionalities and the new technologies they include, such as multi-user MIMO techniques, groupcast communications, dynamic channel bonding, spectrum databases and channel sensing, enhanced power saving mechanisms and efficient small data transmissions. 
We also discuss the related work to highlight the key issues that must still be addressed. Finally, we review emerging trends that can influence the design of future WLANs, with special focus on software-defined MACs and the inter-working with cellular systems. ",Next generation IEEE 802.11 Wireless Local Area Networks: Current status, future directions and open challenges" " The new measurement of the $W$-boson mass by the CDF collaboration revealed a remarkable $7\sigma$ disagreement with the Standard Model (SM) prediction. If confirmed by other experiments, the disagreement strongly indicates the existence of new physics beyond the SM. In this work, seven vectorlike quark (VLQ) extensions of the SM are investigated to interpret the anomaly, and it is found that three can explain the anomaly in broad parameter space. The explanations are consistent with the constraints from oblique parameters, the LHC search for VLQs, the measurements of the properties of the top quark, bottom quark, and Higgs boson, and the perturbativity criterion. The typical size of the involved Yukawa coupling is around 1, which is comparable to the top quark Yukawa coupling in the SM. ",Interpreting the $W$ mass anomaly in the vector-like quark models " We have conducted a detailed thin-film growth study of oxygen-engineered monoclinic HfO$_{2\pm x}$ grown by reactive molecular beam epitaxy (MBE). The oxidation conditions induce a switching between ($\bar{1}11$) and (002) texture of hafnium oxide. The band gap of oxygen-deficient hafnia decreases by more than 1 eV with an increasing amount of oxygen vacancies. For high oxygen vacancy concentrations, defect bands form inside the band gap that induce optical transitions and $p$-type conductivity. 
The resistivity changes by several orders of magnitude as a function of oxidation conditions. Oxygen vacancies do not give rise to ferromagnetic behavior. ",Physical properties and band structure of reactive molecular beam epitaxy grown oxygen engineered HfO$_{2\pm x}$ " In this paper we study geodesic mappings of $n$-dimensional surfaces of revolution. From the general theory of geodesic mappings of equidistant spaces we specialize to surfaces of revolution and apply the obtained formulas to the case of rotational ellipsoids. We prove that such $n$-dimensional ellipsoids admit non-trivial smooth geodesic deformations onto $n$-dimensional surfaces of revolution, which are generally of a different type. ",On global geodesic mappings of $n$-dimensional surfaces of revolution " We relate the strategy sets that a player ends up with after refining his own strategies according to two very different models of rationality: namely, utility maximization and regret minimization. ",Bridging Utility Maximization and Regret Minimization " This paper proposes a novel neuronal current source localization method based on Deep Prior that represents a more complicated prior distribution of the current source using convolutional networks. Deep Prior has been suggested as an unsupervised learning approach that does not require training data, and randomly-initialized neural networks are used to update a source location using a single observation. In our previous work, a Deep-Prior-based current source localization method in the brain was proposed, but its performance did not surpass that of conventional approaches, such as sLORETA. In order to improve the Deep-Prior-based approach, in this paper, a depth weight of the current source is introduced for Deep Prior, where depth weighting amounts to assigning more penalty to the superficial currents. Its effectiveness is confirmed by experiments of current source estimation on simulated MEG data. 
",Current Source Localization Using Deep Prior with Depth Weighting " Without mutation and migration, evolutionary dynamics ultimately leads to the extinction of all but one species. Such fixation processes are well understood and can be characterized analytically with methods from statistical physics. However, many biological arguments focus on stationary distributions in a mutation-selection equilibrium. Here, we address the equilibration time required to reach stationarity in the presence of mutation, this is known as the mixing time in the theory of Markov processes. We show that mixing times in evolutionary games have the opposite behaviour from fixation times when the intensity of selection increases: In coordination games with bistabilities, the fixation time decreases, but the mixing time increases. In coexistence games with metastable states, the fixation time increases, but the mixing time decreases. Our results are based on simulations and the WKB approximation of the master equation. ",Mixing times in evolutionary game dynamics " We report a new search for 12CO(1-0) emission in high-velocity clouds (HVCs) performed with the IRAM 30 m telescope. This search was motivated by the recent detection of cold dust emission in the HVCs of Complex C. Despite a spatial resolution which is three times better and sensitivity twice as good compared to previous studies, no CO emission is detected in the HVCs of Complex C down to a best 5 sigma limit of 0.16 K km/s at a 22'' resolution. The CO emission non-detection does not provide any evidence in favor of large amounts of molecular gas in these HVCs and hence in favor of the infrared findings. We discuss different configurations which, however, allow us to reconcile the negative CO result with the presence of molecular gas and cold dust emission. 
H2 column densities higher than our detection limit, N(H2) = 3x10^{19} cm^{-2}, are expected to be confined in very small and dense clumps with sizes 20 times smaller than the 0.5 pc clumps resolved in our observations, according to the results obtained in cirrus clouds, and might thus still be highly diluted. As a consequence, the inter-clump gas at the 1 pc scale has a volume density lower than 20 cm^{-3} and already appears too diffuse to excite the CO molecules. The observed physical conditions in the HVCs of Complex C also work against the detection of CO emission. It has been shown that the CO-to-H2 conversion factor in low-metallicity media is 60 times higher than at the solar metallicity, leading for a given H2 column density to a 60 times weaker integrated CO intensity. Moreover, the very low dust temperature estimated in these HVCs implies the possible presence of gas cold enough (< 20 K) to cause CO condensation onto dust grains under interstellar medium pressure conditions and thus CO depletion in gas-phase observations. ",Molecular gas in high-velocity clouds: revisited scenario " We study the correlations of classical and quantum systems from the information-theoretical point of view. We analyze a simple measure of correlations based on entropy (such a measure was already investigated as the degree of entanglement by Belavkin, Matsuoka and Ohya). Contrary to naive expectation, it is shown that a separable state might possess stronger correlations than an entangled state. 
Our findings reveal how such a multi-layer arrangement strongly affects the attenuation of the surface wave motion within and after the barrier. When the surface waves collide with the barriers, the wavefront is back-scattered and steered downward underneath the oscillators. Due to the stiffness gradient of the granular medium, part of the wavefield is then rerouted to the surface level after overcoming the resonant array. Overall, the in-depth insertion of additional layers of resonators leads to a greater and broader band wave attenuation when compared to the single layer case. ",Mitigation of Rayleigh-like waves in granular media via multi-layer resonant metabarriers " In this work, we tackle the problem of online camera-to-robot pose estimation from single-view successive frames of an image sequence, a crucial task for robots to interact with the world. ",Robot Structure Prior Guided Temporal Attention for Camera-to-Robot Pose Estimation from Image Sequence " String theory can accommodate black holes with the black hole parameters related to string moduli. It is a well known but remarkable feature that the near horizon geometry of a large class of black holes arising from string theory contains a BTZ part. A mathematical theorem (Sullivan's Theorem) relates the three dimensional geometry of the BTZ metric to the conformal structures of a two dimensional space, thus providing a precise kinematic statement of holography. Using this theorem it is possible to argue that the string moduli space in this region has to have negative curvature from the BTZ part of the associated spacetime. This is consistent with a recent conjecture of Ooguri and Vafa on string moduli space. ","Black Holes, Holography and Moduli Space Metric" " Cosmological applications of HII galaxies (HIIGx) and giant extragalactic HII regions (GEHR) to construct the Hubble diagram at higher redshifts require knowledge of the ""$L$--$\sigma$"" relation of the standard candles used. 
In this paper, we study the properties of a large sample of 156 sources (25 high-$z$ HII galaxies, 107 local HII galaxies, and 24 giant extragalactic HII regions) compiled by Terlevich et al. (2015). Using the cosmological distances reconstructed through two new cosmology-independent methods, we investigate the correlation between the H$\beta$ emission-line luminosity $L$ and ionized-gas velocity dispersion $\sigma$. The method is based on non-parametric reconstruction using the measurements of Hubble parameters from cosmic clocks, as well as the simulated data of gravitational waves from the third-generation gravitational wave detector (the Einstein Telescope, ET), which can be considered as standard sirens. Assuming the emission-line luminosity versus ionized gas velocity dispersion relation, $\log L ($H$\beta) = \alpha \log \sigma($H$\beta)+\kappa$, we find the full sample provides a tight constraint on the correlation parameters. However, a similar analysis performed on three different sub-samples seems to support the scheme of treating HII galaxies and giant extragalactic HII regions with distinct strategies. Using the corrected ""$L$--$\sigma$"" relation for the HII observational sample beyond the current reach of Type Ia supernovae, we obtain a value of the matter density parameter, $\Omega_{m}=0.314\pm0.054$ (calibrated with standard clocks) and $\Omega_{m}=0.311\pm0.049$ (calibrated with standard sirens), in the spatially flat $\Lambda$CDM cosmology. ","Exploring the ""$L$--$\sigma$"" relation of HII galaxies and giant extragalactic HII regions acting as standard candles" " We generalize the construction of multitildes with the aim of providing multitilde operators for regular languages. We show that the underlying algebraic structure involves the action of some operads. An operad is an algebraic structure that mimics the composition of functions. The involved operads are described in terms of combinatorial objects. 
These operads are obtained from more primitive objects, namely precompositions, whose algebraic counterparts are investigated. One of these operads acts faithfully on languages in the sense that two different operators act in two different ways. ","Operads, quasiorders, and regular languages" " Networks-on-chips (NoCs) are an integral part of emerging manycore computing chips. They play a key role in facilitating communication among processing cores and between cores and memory. To meet the aggressive performance and energy-efficiency targets of machine learning and big data applications, NoCs have been evolving to leverage emerging paradigms such as silicon photonics and wireless communication. Increasingly, these NoC fabrics are becoming susceptible to security vulnerabilities, such as from hardware trojans that can snoop, corrupt, or disrupt information transfers on NoCs. This article surveys the landscape of security challenges and countermeasures across electronic, wireless, and photonic NoCs. ","Electronic, Wireless, and Photonic Network-on-Chip Security: Challenges and Countermeasures" " In recent years, algorithm research in the area of recommender systems has shifted from matrix factorization techniques and their latent factor models to neural approaches. However, given the proven power of latent factor models, some newer neural approaches incorporate them within more complex network architectures. One specific idea, recently put forward by several researchers, is to consider potential correlations between the latent factors, i.e., embeddings, by applying convolutions over the user-item interaction map. However, contrary to what is claimed in these articles, such interaction maps do not share the properties of images where Convolutional Neural Networks (CNNs) are particularly useful. 
In this work, we show through analytical considerations and empirical evaluations that the claimed gains reported in the literature cannot be attributed to the ability of CNNs to model embedding correlations, as argued in the original papers. Moreover, additional performance evaluations show that all of the examined recent CNN-based models are outperformed by existing non-neural machine learning techniques or traditional nearest-neighbor approaches. On a more general level, our work points to major methodological issues in recommender systems research. ",Critically Examining the Claimed Value of Convolutions over User-Item Embedding Maps for Recommender Systems " This paper presents models and optimization methods to rapidly compute the achievable lap time of a race car equipped with a battery electric powertrain. Specifically, we first derive a quasi-convex model of the electric powertrain, including the battery, the electric machine, and two transmission technologies: a single-speed fixed gear and a continuously variable transmission (CVT). Second, assuming an expert driver, we formulate the time-optimal control problem for a given driving path and solve it using an iterative convex optimization algorithm. Finally, we showcase our framework by comparing the performance achievable with a single-speed transmission and a CVT on the Le Mans track. Our results show that a CVT can balance its lower efficiency and higher weight with a higher-efficiency and more aggressive motor operation, and significantly outperform a fixed single-gear transmission. ",Time-optimal Control Strategies for Electric Race Cars with Different Transmission Technologies " We have estimated the number flux of mu-neutrinos produced by the hadronic interactions between the cosmic rays coming from a neutron star and the matter in a companion star. 
The event rate at 1 km^2 high-energy neutrino detectors such as IceCube, ANTARES and NESTOR is also estimated to be 2.7 \times 10^4 events yr^-1 when the source is located 10 kpc away from the Earth. We have estimated the number of such systems and concluded that there will be several candidates in our galaxy. Taking these results into consideration, this scenario is promising and can be confirmed by future observations. ",TeV Neutrinos from Companion Stars of Rapid-Rotating Neutron Stars " The just noticeable difference (JND) is the minimal difference between stimuli that can be detected by a person. The picture-wise just noticeable difference (PJND) for a given reference image and a compression algorithm represents the minimal level of compression that causes noticeable differences in the reconstruction. These differences can only be observed in some specific regions within the image, dubbed as JND-critical regions. Identifying these regions can improve the development of image compression algorithms. Due to the fact that visual perception varies among individuals, determining the PJND values and JND-critical regions for a target population of consumers requires subjective assessment experiments involving a sufficiently large number of observers. In this paper, we propose a novel framework for conducting such experiments using crowdsourcing. By applying this framework, we created a novel PJND dataset, KonJND++, consisting of 300 source images, compressed versions thereof under JPEG or BPG compression, and an average of 43 ratings of PJND and 129 self-reported locations of JND-critical regions for each source image. Our experiments demonstrate the effectiveness and reliability of our proposed framework, which is easily adapted for collecting a large-scale dataset. The source code and dataset are available at https://github.com/angchen-dev/LocJND. 
",Localization of Just Noticeable Difference for Image Compression " Pre-trained language models of the BERT family have defined the state-of-the-arts in a wide range of NLP tasks. However, the performance of BERT-based models is mainly driven by the enormous amount of parameters, which hinders their application to resource-limited scenarios. Faced with this problem, recent studies have been attempting to compress BERT into a small-scale model. However, most previous work primarily focuses on a single kind of compression technique, and few attention has been paid to the combination of different methods. When BERT is compressed with integrated techniques, a critical question is how to design the entire compression framework to obtain the optimal performance. In response to this question, we integrate three kinds of compression methods (weight pruning, low-rank factorization and knowledge distillation (KD)) and explore a range of designs concerning model architecture, KD strategy, pruning frequency and learning rate schedule. We find that a careful choice of the designs is crucial to the performance of the compressed model. Based on the empirical findings, our best compressed model, dubbed Refined BERT cOmpreSsion with InTegrAted techniques (ROSITA), is $7.5 \times$ smaller than BERT while maintains $98.5\%$ of the performance on five tasks of the GLUE benchmark, outperforming the previous BERT compression methods with similar parameter budget. The code is available at https://github.com/llyx97/Rosita. ",ROSITA: Refined BERT cOmpreSsion with InTegrAted techniques " We report the observation of the Goos-H\""anchen effect in graphene via a weak value amplification scheme. We demonstrate that the amplified Goos-H\""anchen shift in weak measurements is sensitive to the variation of graphene layers. Combining the Goos-H\""anchen effect with weak measurements may provide important applications in characterizing the parameters of graphene. 
","Observation of the Goos-H\""anchen shift in graphene via weak measurements" " We study the phase diagram of $q$-deformed Yang-Mills theory on $S^2$ at non-zero $\theta$-angle using the exact partition function at finite $N$. By evaluating the exact partition function numerically, we find evidence for the existence of a series of phase transitions at non-zero $\theta$-angle as conjectured in [hep-th/0509004]. ",Phase diagram of $q$-deformed Yang-Mills theory on $S^2$ at non-zero $\theta$-angle " We study deformation quantizations of the structure sheaf O_X of a smooth algebraic variety X in characteristic 0. Our main result is that when X is D-affine, any formal Poisson structure on X determines a deformation quantization of O_X (canonically, up to gauge equivalence). This is an algebro-geometric analogue of Kontsevich's celebrated result. ",Deformation Quantization in Algebraic Geometry We study numerically the dynamics of two-qubit gates with superconducting charge qubits. The exact ratio of $E_J$ to $E_L$ and the corresponding operation time are calculated in order to implement two-qubit gates. We investigate the effect of finite rise/fall times of pulses in realization of two-qubit gates. It is found that the error in implementing two-qubit gates grows quadratically in rise/fall times of pulses. ,Errors due to Finite Rise/fall Times of Pulses in Superconducting Charge Qubits " Atmospheric mass loss plays a major role in the evolution of exoplanets. This process is driven by the stellar high-energy irradiation, especially in the first hundreds of millions of years after dissipation of the proto-planetary disk. A major source of uncertainty in modeling atmospheric photo-evaporation and photo-chemistry is due to the lack of direct measurements of the stellar flux at EUV wavelengths. 
Several empirical relationships have been proposed in the past to link EUV fluxes to emission levels in X-rays, but stellar samples employed for this aim are heterogeneous, and available scaling laws provide significantly different predictions, especially for very active stars. We present new UV and X-ray observations of V1298 Tau with HST/COS and XMM-Newton, aimed to determine more accurately the XUV emission of this solar-mass pre-Main Sequence star, which hosts four exoplanets. Spectroscopic data were employed to derive the plasma emission measure distribution vs.\ temperature, from the chromosphere to the corona, and the possible variability of this irradiation on short and year-long time scales, due to magnetic activity. As a side result, we have also measured the chemical abundances of several elements in the outer atmosphere of V1298 Tau. We employ our results as a new benchmark point for the calibration of the X-ray to EUV scaling laws, and hence to predict the time evolution of the irradiation in the EUV band, and its effect on the evaporation of exo-atmospheres. ","XUV emission of the young planet-hosting star V1298\,Tau from coordinated observations with XMM-Newton and HST" " We consider the inverse problem in geophysics of imaging the subsurface of the Earth in cases where a region below the surface is known to be formed by strata of different materials and the depths and thicknesses of the strata and the (possibly anisotropic) conductivity of each of them need to be identified simultaneously. This problem is treated as a special case of the inverse problem of determining a family of nested inclusions in a medium $\Omega\subset\mathbb{R}^n$, $n \geq 3$. ",EIT in a layered anisotropic medium " We give an explicit description of the terms and differentials of the Tate resolution of sheaves arising from Segre embeddings of $\P^a\times\P^b$. 
We prove that the maps in this Tate resolution either come from Sylvester-type maps or from Bezout-type maps arising from the so-called toric Jacobian. ",Tate Resolutions for Segre Embeddings We provide a complete description of possible covariance matrices consistent with a Gaussian latent tree model for any tree. We then present techniques for utilising these constraints to assess whether observed data is compatible with that Gaussian latent tree model. Our method does not require us first to fit such a tree. We demonstrate the usefulness of the inverse-Wishart distribution for performing preliminary assessments of tree-compatibility using semialgebraic constraints. Using results from Drton et al. (2008) we then provide the appropriate moments required for test statistics for assessing adherence to these equality constraints. These are shown to be effective even for small sample sizes and can be easily adjusted to test either the entire model or only certain macrostructures hypothesized within the tree. We illustrate our exploratory tetrad analysis using a linguistic application and our confirmatory tetrad analysis using a biological application. ,The correlation space of Gaussian latent tree models and model selection without fitting " We present a long-term study of the secondary star in the cataclysmic variable AE~Aqr, using Roche tomography to indirectly image starspots on the stellar surface spanning 8~years of observations. The 7 maps show an abundance of spot features at both high and low latitudes. We find that all maps have at least one large high-latitude spot region, and we discuss its complex evolution between maps, as well as its compatibility with current dynamo theories. Furthermore, we see the apparent growth in fractional spot coverage, $f_{\mathrm{s}}$, around $45^{\circ}$~latitude over the duration of observations, with a persistently high $f_{\mathrm{s}}$ near latitudes of $20^{\circ}$. 
These bands of spots may form as part of a magnetic activity cycle, with magnetic flux tubes emerging at different latitudes, similar to the `butterfly' diagram for the Sun. We discuss the nature of flux tube emergence in close binaries, as well as the activity of AE~Aqr in the context of other stars. ",Roche tomography of cataclysmic variables - VII. The long-term magnetic activity of AE Aqr " We investigate the ground state of the two-dimensional Heisenberg antiferromagnet on two Archimedean lattices, namely, the maple-leaf and bounce lattices as well as a generalized $J$-$J'$ model interpolating between both systems by varying $J'/J$ from $J'/J=0$ (bounce limit) to $J'/J=1$ (maple-leaf limit) and beyond. We use the coupled cluster method to high orders of approximation and also exact diagonalization of finite-sized lattices to discuss the ground-state magnetic long-range order based on data for the ground-state energy, the magnetic order parameter, the spin-spin correlation functions as well as the pitch angle between neighboring spins. Our results indicate that the ""pure"" bounce ($J'/J=0$) and maple-leaf ($J'/J=1$) Heisenberg antiferromagnets are magnetically ordered, however, with a sublattice magnetization drastically reduced by frustration and quantum fluctuations. We found that magnetic long-range order is present in a wide parameter range $0 \le J'/J \lesssim J'_c/J $ and that the magnetic order parameter varies only weakly with $J'/J$. At $J'_c \approx 1.45 J$ a direct first-order transition to a quantum orthogonal-dimer singlet ground state without magnetic long-range order takes place. The orthogonal-dimer state is the exact ground state in this large-$J'$ regime, and so our model has similarities to the Shastry-Sutherland model. Finally, we use the exact diagonalization to investigate the magnetization curve. We find a 1/3 magnetization plateau for $J'/J \gtrsim 1.07$ and another one at 2/3 of saturation emerging only at large $J'/J \gtrsim 3$. 
",The spin-half Heisenberg antiferromagnet on two Archimedian lattices: From the bounce lattice to the maple-leaf lattice and beyond " Control over various fragmentation reactions of a series of polyatomic molecules (acetylene, ethylene, 1,3-butadiene) by the optical waveform of intense few-cycle laser pulses is demonstrated experimentally. We show both experimentally and theoretically that the responsible mechanism is inelastic ionization from inner-valence molecular orbitals by recolliding electron wavepackets, whose recollision energy in few-cycle ionizing laser pulses strongly depends on the optical waveform. Our work demonstrates an efficient and selective way of pre-determining fragmentation and isomerization reactions in polyatomic molecules on sub-femtosecond time-scales. ",Attosecond-recollision-controlled selective fragmentation of polyatomic molecules " The first observation of the heavy baryonic state \Xi_b^0 is reported by the CDF Collaboration. A new decay mode of the established state \Xi_b^- is also observed. In both cases the decay into a \Xi_c plus a charged pion is seen, with an equivalent statistical significance of above 6.8 sigma. ",Observation of the Xi_b^0 Baryon " This paper presents a model reduction method for the class of linear quantum stochastic systems often encountered in quantum optics and their related fields. The approach is proposed on the basis of an interpolatory projection ensuring that specific input-output responses of the original and the reduced-order systems are matched at multiple selected points (or frequencies). Importantly, the physical realizability property of the original quantum system imposed by the law of quantum mechanics is preserved under our tangential interpolatory projection. An error bound is established for the proposed model reduction method and an avenue to select interpolation points is proposed. A passivity preserving model reduction method is also presented. 
Examples of both active and passive systems are provided to illustrate the merits of our proposed approach. ",Tangential Interpolatory Projection for Model Reduction of Linear Quantum Stochastic Systems Several spectral sequence techniques are used in order to derive information about the structure of finite free resolutions of graded modules. These results cover estimates of the minimal number of generators of defining ideals of projective varieties. There are also investigations about the shifts and the dimension of Betti numbers. ,Applications of Koszul homology to numbers of generators and syzygies " In this paper, we present recent progress in the development of hydrophobic silica aerogel as a Cherenkov radiator. In addition to the conventional method, the recently developed pin-drying method for producing high-refractive-index aerogels with high transparency was studied in detail. Optical qualities and large tile handling for crack-free aerogels were investigated. Sufficient photons were detected from high-performance aerogels in a beam test. ",Recent progress in silica aerogel Cherenkov radiator " Functional regression analysis is an established tool for many contemporary scientific applications. Regression problems involving large and complex data sets are ubiquitous, and feature selection is crucial for avoiding overfitting and achieving accurate predictions. We propose a new, flexible and ultra-efficient approach to perform feature selection in a sparse high dimensional function-on-function regression problem, and we show how to extend it to the scalar-on-function framework. Our method, called FAStEN, combines functional data, optimization, and machine learning techniques to perform feature selection and parameter estimation simultaneously. 
We exploit the properties of Functional Principal Components and the sparsity inherent to the Dual Augmented Lagrangian problem to significantly reduce computational cost, and we introduce an adaptive scheme to improve selection accuracy. In addition, we derive asymptotic oracle properties, which guarantee estimation and selection consistency for the proposed FAStEN estimator. Through an extensive simulation study, we benchmark our approach against the best existing competitors and demonstrate a massive gain in terms of CPU time and selection performance, without sacrificing the quality of the coefficients' estimation. The theoretical derivations and the simulation study provide a strong motivation for our approach. Finally, we present an application to brain fMRI data from the AOMIC PIOP1 study. ",FAStEN: an efficient adaptive method for feature selection and estimation in high-dimensional functional regressions " Quantum annealing is a heuristic quantum algorithm which exploits quantum resources to minimize an objective function embedded as the energy levels of a programmable physical system. To take advantage of a potential quantum advantage, one needs to be able to map the problem of interest to the native hardware with reasonably low overhead. Because experimental considerations constrain our objective function to take the form of a low degree PUBO (polynomial unconstrained binary optimization), we employ non-convex loss functions which are polynomial functions of the margin. We show that these loss functions are robust to label noise and provide a clear advantage over convex methods. These loss functions may also be useful for classical approaches as they compile to regularized risk expressions which can be evaluated in constant time with respect to the number of training examples. 
",Construction of non-convex polynomial loss functions for training a binary classifier with quantum annealing " We study electromagnetic radiation reaction in curved space and the dynamics of radiating charged particles. The equation of motion for such particles is the DeWitt-Brehme equation, and it contains a particularly complicated, non-local, tail term. It has been claimed that the tail term can be neglected in certain magnetized black hole spacetimes, and that radiation reaction may then lead to energy extraction (""orbital widening"") in the absence of an ergoregion. We show that such claims are incorrect, at least in the Newtonian limit: the tail term can never be neglected consistently in the relevant scenarios, and when it is included the reported energy extraction no longer occurs. Thus, previous results are called into question by our work. ",Electromagnetic radiation reaction and energy extraction from black holes: The tail term cannot be ignored " In these lectures I will present an introduction to the modern way of studying the properties of glassy systems. I will start from soluble models of increasing complications, the Random Energy Model, the $p$-spins interacting model and I will show how these models can be solved due their mean field properties. Finally, in the last section, I will discuss the difficulties in the generalization of these findings to short range models. ",Slow dynamics of glassy systems " Black hole membrane paradigm suggests to consider the black hole horizon as a fluid membrane. The membrane has a particular energy-momentum tensor which characterizes the interactions with the falling matter. In this paper, we show that we can construct an action from the scalar field on the horizon which can give the same energy-momentum tensor for the membrane. That is, the membrane can be described effectively by the scalar field on it. 
",Black Hole Membrane Paradigm from Boundary Scalar Field " Carbon-enhanced metal-poor (CEMP) stars are the living fossils holding records of chemical enrichment from early generations of stars. In this work, we perform a set of numerical simulations of the enrichment from a supernova (SN) of a first generation of metal-free (Pop III) star and the gravitational collapse of the enriched cloud, considering all relevant cooling/heating processes and chemical reactions as well as the growth of dust grains. We adopt faint SN models for the first time with progenitor masses $M_{\rm PopIII} = 13$--$80 \ {\rm M}_{\bigodot}$, which yield C-enhanced abundance patterns (${\rm [C/Fe]} = 4.57$--$4.75$) through mixing and fallback of innermost layers of the ejecta. This model also considers the formation and destruction of dust grains. We find that the metals ejected by the SN can be partly re-accreted by the same dark matter minihalo, and carbon abundance of the enriched cloud $A({\rm C}) = 3.80$--$5.06$ is lower than the abundance range of observed CEMP stars ($A({\rm C}) \gtrsim 6$) because the mass of the metals ejected by faint SNe is smaller than normal core-collapse SNe due to extensive fallback. We also find that cloud fragmentation is induced by gas cooling from carbonaceous grains for $M_{\rm PopIII} = 13 \ {\rm M}_{\bigodot}$ even with the lowest iron abundance ${\rm [Fe/H]} \sim -9$. This leads to the formation of low-mass stars, and these ``giga metal-poor'' stars can survive until the present-day Universe and may be found by future observations. ",Seeding the second star -- II. CEMP star formation enriched from faint supernovae " The decoy-state high-dimensional quantum key distribution provides a practical secure way to share more private information with high photon-information efficiency. In this paper, based on detector-decoy method, we propose a detector-decoy high-dimensional quantum key distribution protocol. 
Employing threshold detectors and a variable attenuator, we can estimate the single-photon fraction of postselected events and Eve's Holevo information under the Gaussian collective attack with much simpler operations in practical implementation. By numerical evaluation, we show that without varying the source intensity and optimizing the decoy-state intensity, our protocol could perform much better than the one-decoy-state protocol and as well as the two-decoy-state protocol. In particular, when the detector efficiency is low, the advantage of the detector-decoy method becomes more prominent. ",Detector-decoy high-dimensional quantum key distribution " Despite their effective use in various fields, many aspects of neural networks are poorly understood. One important way to investigate the characteristics of neural networks is to explore the loss landscape. However, most models produce a high-dimensional non-convex landscape which is difficult to visualize. We discuss and extend existing visualization methods based on 1D- and 2D slicing with a novel method that approximates the actual loss landscape geometry by using charts with interpretable axes. Based on the assumption that observations on small neural networks can generalize to more complex systems and provide us with helpful insights, we focus on small models in the range of a few dozen weights, which enables computationally cheap experiments and the use of an interactive dashboard. We observe symmetries around the zero vector, the influence of different layers on the global landscape, the different weight sensitivities around a minimizer, and how gradient descent navigates high-loss obstacles. The user study resulted in an average SUS (System Usability Scale) score with suggestions for improvement and opened up a number of possible application scenarios, such as autoencoders and ensemble networks. 
",FuNNscope: Visual microscope for interactively exploring the loss landscape of fully connected neural networks " We explore the cosmic evolution of the bar length, strength, and light deficit around the bar for 379 barred galaxies at 0.2 < z $\leq$ 0.835 using F814W images from the COSMOS survey. Our sample covers galaxies with stellar mass 10.0 $\leq$ log(M*/Msun) $\leq$ 11.4 and various Hubble types. The bar length is strongly related to the galaxy mass, the disk scale length (h), R50, and R90, where the last two are the radii containing 50 and 90% of total stellar mass, respectively. Bar length remains almost constant, suggesting little or no evolution in bar length over the last 7 Gyrs. The normalized bar lengths (Rbar/h, Rbar/R50, and Rbar/R90) do not show any clear cosmic evolution. Also, the bar strength (A2 and Qb) and the light deficit around the bar reveal little or no cosmic evolution. The constancy of the normalized bar lengths over cosmic time implies that the evolution of bars and of disks is strongly linked over all times. We discuss our results in the framework of predictions from numerical simulations. We conclude there is no strong disagreement between our results and up-to-date simulations. ",Cosmic Evolution of Barred Galaxies up to z ~ 0.84 " We study phase transitions for the topological pressure of geometric potentials of transitive sets. The sets considered are partially hyperbolic having a step skew product dynamics over a horseshoe with one-dimensional fibers corresponding to the central direction. The sets are genuinely non-hyperbolic containing intermingled horseshoes of different hyperbolic behavior (contracting and expanding center). We prove that for every $k\ge 1$ there is a diffeomorphism $F$ with a transitive set $\Lambda$ as above such that the pressure map $P(t)=P(t\, \varphi)$ of the potential $\varphi= -\log \,\lVert dF|_{E^c}\rVert$ ($E^c$ the central direction) defined on $\Lambda$ has $k$ rich phase transitions. 
This means that there are parameters $t_\ell$, $\ell=1,\ldots,k$, where $P(t)$ is not differentiable, and this lack of differentiability is due to the coexistence of two equilibrium states of $t_\ell\,\varphi$ with positive entropy and different Birkhoff averages. Each phase transition is associated with a gap in the central Lyapunov spectrum of $F$ on $\Lambda$. ",Abundant rich phase transitions in step skew products " Most 3d human pose estimation methods assume that input -- be it images of a scene collected from one or several viewpoints, or from a video -- is given. Consequently, they focus on estimates leveraging prior knowledge and measurement by fusing information spatially and/or temporally, whenever available. In this paper we address the problem of an active observer with freedom to move and explore the scene spatially -- in `time-freeze' mode -- and/or temporally, by selecting informative viewpoints that improve its estimation accuracy. Towards this end, we introduce Pose-DRL, a fully trainable deep reinforcement learning-based active pose estimation architecture which learns to select appropriate views, in space and time, to feed an underlying monocular pose estimator. We evaluate our model using single- and multi-target estimators with strong results in both settings. Our system further learns automatic stopping conditions in time and transition functions to the next temporal processing step in videos. In extensive experiments with the Panoptic multi-view setup, and for complex scenes containing multiple people, we show that our model learns to select viewpoints that yield significantly more accurate pose estimates compared to strong multi-view baselines. ",Deep Reinforcement Learning for Active Human Pose Estimation " Cosmological surveys must correct their observations for the reddening of extragalactic objects by Galactic dust. Existing dust maps, however, have been found to have spatial correlations with the large-scale structure of the Universe. 
Errors in extinction maps can propagate systematic biases into samples of dereddened extragalactic objects and into cosmological measurements such as correlation functions between foreground lenses and background objects and the primordial non-Gaussianity parameter $f_{NL}$. Emission-based maps are contaminated by the cosmic infrared background, while maps inferred from stellar reddenings suffer from imperfect removal of quasars and galaxies from stellar catalogs. Thus, stellar-reddening based maps using catalogs without extragalactic objects offer a promising path to making dust maps with minimal correlations with large-scale structure. We present two high-latitude integrated extinction maps based on stellar reddenings, with a point spread function of full-width half-maximum 6.1' and 15'. We employ a strict selection of catalog objects to filter out galaxies and quasars and measure the spatial correlation of our extinction maps with extragalactic structure. Our Galactic extinction maps have reduced spatial correlation with large-scale structure relative to most existing stellar-reddening based and emission-based extinction maps. ",Stellar Reddening Based Extinction Maps for Cosmological Applications Reversible Peres gates with more than two binary-valued control signals are discussed. Methods are disclosed for the low-cost realization of this kind of Peres gate without requiring ancillary lines. Proper distribution of the controlled gates and their inverses allows driving the reversible Peres gate with control signals of different polarities. ,Mixed polarity reversible Peres gates " We solve the problem of optimal liquidation with a volume-weighted average price (VWAP) benchmark when the market impact is linear and transient. 
Our setting is indeed more general as it considers the case when the trading interval is not necessarily coincident with the benchmark interval: Implementation Shortfall and Target Close execution are shown to be particular cases of our setting. We find explicit solutions in continuous and discrete time considering risk-averse investors having a CARA utility function. Finally, we show that, contrary to what is observed for Implementation Shortfall, the optimal VWAP solution contains both buy and sell trades even when the decay kernel is convex. ",Optimal VWAP execution under transient price impact " Using a geographical scale-free network to describe relations between people in a city, we explain both superlinear and sublinear allometric scaling of urban indicators that quantify activities or performances of the city. The urban indicator $Y(N)$ of a city with the population size $N$ is analytically calculated by summing up all individual activities produced by person-to-person relationships. Our results show that the urban indicator scales superlinearly with the population, namely, $Y(N)\propto N^{\beta}$ with $\beta>1$ if $Y(N)$ represents a creative productivity and the indicator scales sublinearly ($\beta<1$) if $Y(N)$ is related to the degree of infrastructure development. These coincide with allometric scaling observed in real-world urban indicators. We also show how the scaling exponent $\beta$ depends on the strength of the geographical constraint in the network formation. ",Superlinear and sublinear urban scaling in geographical network model of the city " In the pursuit of efficient optimization of expensive-to-evaluate systems, this paper investigates a novel approach to Bayesian multi-objective and multi-fidelity (MOMF) optimization. Traditional optimization methods, while effective, often encounter prohibitively high costs in multi-dimensional optimizations of one or more objectives. 
Multi-fidelity approaches offer potential remedies by utilizing multiple, less costly information sources, such as low-resolution simulations. However, integrating these two strategies presents a significant challenge. We suggest the innovative use of a trust metric to support simultaneous optimization of multiple objectives and data sources. Our method modifies a multi-objective optimization policy to incorporate the trust gain per evaluation cost as one objective in a Pareto optimization problem, enabling simultaneous MOMF at lower costs. We present and compare two MOMF optimization methods: a holistic approach selecting both the input parameters and the trust parameter jointly, and a sequential approach for benchmarking. Through benchmarks on synthetic test functions, our approach is shown to yield significant cost reductions - up to an order of magnitude compared to pure multi-objective optimization. Furthermore, we find that joint optimization of the trust and objective domains outperforms addressing them in a sequential manner. We validate our results using the use case of optimizing laser-plasma acceleration simulations, demonstrating our method's potential in Pareto optimization of high-cost black-box functions. Implementing these methods in existing Bayesian frameworks is simple, and they can be readily extended to batch optimization. With their capability to handle various continuous or discrete fidelity dimensions, our techniques offer broad applicability in solving simulation problems in fields such as plasma physics and fluid dynamics. ",Leveraging Trust for Joint Multi-Objective and Multi-Fidelity Optimization " A plethora of state estimation techniques have appeared in the last decade using visual data, and more recently with added inertial data. Datasets typically used for evaluation include indoor and urban environments, where supporting videos have shown impressive performance. 
However, such techniques have not been fully evaluated in challenging conditions, such as the marine domain. In this paper, we compare ten recent open-source packages to provide insights on their performance and guidelines on addressing current challenges. Specifically, we selected direct methods and tightly-coupled optimization techniques that fuse camera and Inertial Measurement Unit (IMU) data together. Experiments are conducted by testing all packages on datasets collected over the years with underwater robots in our laboratory. All the datasets are made available online. ",Experimental Comparison of Open Source Visual-Inertial-Based State Estimation Algorithms in the Underwater Domain We propose a method of solving partial differential equations on the $n$-dimensional unit sphere based on the continuous wavelet transform derived from approximate identities. ,Wavelet methods in partial differential equations on spheres " We prove a homological stability theorem for the moduli spaces of manifolds diffeomorphic to $\#^{g}(S^{n+1}\times S^{n})$, provided $n \geq 4$. This is an odd dimensional analogue of a recent homological stability result of S. Galatius and O. Randal-Williams for the moduli space of manifolds diffeomorphic to $\#^{g}(S^{n}\times S^{n})$ for $n \geq 3$. ",Homological Stability For Moduli Spaces of Odd Dimensional Manifolds " The core-cusp problem remains one of the unsolved discrepancies between observations and theories predicted by the standard paradigm of cold dark matter (CDM) cosmology. To solve this problem, we perform N-body simulations to study the nonlinear response of CDM halos to the variance of the gravitational potential induced by gas removal from galaxy centers. In this study, we focus on the timescale of the gas ejection, which is strongly correlated with stellar activity, and demonstrate that it is one of the key factors in determining the dynamical response of CDM halos. 
The results of the simulations show that the power-law index of the mass-density profile of the dark matter halo is correlated with the timescale of the mass loss, and it is flatter when the mass loss occurs over a short time than when it occurs over a long time. However, it is still larger than typical observational values; in other words, the central cusp remains for any mass-loss model in the simulations. Moreover, for the slow mass-loss case, the final density profile of the dark matter halo recovers the universal density profiles predicted by the CDM cosmology. Therefore, mass loss driven by stellar feedback may not be an effective mechanism to flatten the central cusp. ",The core-cusp problem in cold dark matter halos and supernova feedback: Effects of Mass Loss " We present a new scheme for teleporting multiqubit quantum information from a sender to a distant receiver via the control of many agents in a network. We show that the receiver can successfully restore the original state of each qubit as long as all the agents cooperate. However, it is remarkable that for certain types of teleported states, the receiver cannot gain any amplitude information even if one agent does not collaborate. In addition, our analysis shows that for general input states of each message qubit, the average fidelity for the output states, when even one agent does not take action, is the same as that for the previous proposals. ",A scheme for the teleportation of multiqubit quantum information via the control of many agents in a network " We have demonstrated a high temperature vapor cell for absorption spectroscopy on the Ca intercombination line. The cell uses a dual chamber design to achieve the high temperatures necessary for an optically dense vapor while avoiding the necessity of high temperature vacuum valves and glass-to-metal seals. We have observed over 50 percent absorption in a single pass through the cell. 
Although pressure broadening in the cell prevented us from performing saturated-absorption spectroscopy, the broadening resulted in higher signal-to-noise ratios by allowing us to probe the atoms with intensities much greater than the 0.2 uW/cm^2 saturation intensity of the unbroadened transition. ",A High Temperature Calcium Vapor Cell for Spectroscopy on the 4s^2 1S0 to 4s4p 3P1 Intercombination Line " Light-fidelity (LiFi) is an emerging technology for high-speed short-range mobile communications. Inter-cell interference (ICI) is an important issue that limits the system performance in an optical attocell network. Angle diversity receivers (ADRs) have been proposed to mitigate ICI. In this paper, the structure of pyramid receivers (PRs) and truncated pyramid receivers (TPRs) is studied. The coverage problems of PRs and TPRs are defined and investigated, and the lower bound of the field of view (FOV) for each PD is given analytically. The impact of random device orientation and diffuse link signal propagation is taken into consideration. The performances of PRs and TPRs are compared and then optimized ADR structures are proposed. The performance comparison between select best combining (SBC) and maximum ratio combining (MRC) is given under different noise levels. It is shown that SBC will outperform MRC in an interference-limited system; otherwise, MRC is the preferred scheme. In addition, the double-source system, where each LiFi AP consists of two sources transmitting the same information signals but with opposite polarity, is proved to outperform the single-source (SS) system under certain conditions. ",Interference Mitigation using Optimized Angle Diversity Receiver in LiFi Cellular network " A recent construction by Amarra, Devillers and Praeger of block designs with specific parameters depends on certain quadratic polynomials, with integer coefficients, taking prime power values. 
The Bunyakovsky Conjecture, if true, would imply that each of them takes infinitely many prime values, giving an infinite family of block designs with the required parameters. We have found large numbers of prime values of these polynomials, and the numbers found agree very closely with the estimates for them provided by Li's recent modification of the Bateman-Horn Conjecture. While this does not prove that these polynomials take infinitely many prime values, it provides strong evidence for this, and it also adds extra support for the validity of the Bunyakovsky and Bateman-Horn Conjectures. ",Block designs and prime values of polynomials " In the brane-world framework, we consider static, spherically symmetric configurations of a scalar field with the Lagrangian $(\partial\phi)^2/2 - V(\phi)$, confined on the brane. We use the 4D Einstein equations on the brane obtained by Shiromizu et al., containing the usual stress tensor $T_{\mu\nu}$, the tensor $\Pi_{\mu\nu}$, quadratic in $T_{\mu\nu}$, and $E_{\mu\nu}$ describing interaction with the bulk. For the models under study, the tensor $\Pi_{\mu\nu}$ has zero divergence, so we can consider a ""minimally coupled"" brane with $E_{\mu\nu} = 0$, whose 4D gravity is decoupled from the bulk geometry. Assuming $E_{\mu\nu} = 0$, we try to extend to brane worlds some theorems valid for scalar fields in general relativity (GR). Thus, the list of possible global causal structures in all models under consideration is shown to be the same as is known for vacuum with a $\Lambda$ term in GR: Minkowski, Schwarzschild, (A)dS and Schwarzschild-(A)dS. A no-hair theorem, saying that, given a potential $V\geq 0$, asymptotically flat black holes cannot have nontrivial external scalar fields, is proved under certain restrictions. Some objects, forbidden in GR, are allowed on the brane, e.g., traversable wormholes supported by a scalar field, but only at the expense of enormous matter densities in the strong-field region. 
",Scalar field in a minimally coupled brane world: no-hair and other no-go theorems " In this paper, we consider continuous-variable quantum key distribution with a discrete modulation, either binary or quaternary. We establish the security of these protocols against the class of collective attacks that induce a linear quantum channel. In particular, all Gaussian attacks are taken into account, as well as linear attacks which add a non-Gaussian noise. We give lower bounds for the secret key rate using extremality properties of Gaussian states. ",Continuous-variable Quantum Key Distribution protocols with a discrete modulation " Code reuse is an important part of software development. The adoption of code reuse practices is especially common among Node.js developers. The Node.js package manager, NPM, indexes over 1 Million packages and developers often seek out packages to solve programming tasks. Due to the vast number of packages, selecting the right package is difficult and time consuming. With the goal of improving productivity of developers that heavily reuse code through third-party packages, we present Node Code Query (NCQ), a Read-Eval-Print-Loop environment that allows developers to 1) search for NPM packages using natural language queries, 2) search for code snippets related to those packages, 3) automatically correct errors in these code snippets, 4) quickly setup new environments for testing those snippets, and 5) transition between search and editing modes. In two user studies with a total of 20 participants, we find that participants begin programming faster and conclude tasks faster with NCQ than with baseline approaches, and that they like, among other features, the search for code snippets and packages. Our results suggest that NCQ makes Node.js developers more efficient in reusing code. 
",NCQ: Code reuse support for node.js developers " Inferring user characteristics such as demographic attributes is of the utmost importance in many user-centric applications. Demographic data is an enabler of personalization, identity security, and other applications. Despite that, this data is sensitive and often hard to obtain. Previous work has shown that purchase history can be used for multi-task prediction of many demographic fields such as gender and marital status. Here we present an embedding based method to integrate multifaceted sequences of transaction data, together with auxiliary relational tables, for better user modeling and demographic prediction. ",Fusing Multifaceted Transaction Data for User Modeling and Demographic Prediction " An increasing number of AGNs exhibit broad, double-peaked Balmer emission lines, which are thought to arise from the outer regions of the accretion disk which fuels the AGN. The line profiles are observed to vary on a characteristic timescales of 5-10 years. The variability is not a reverberation effect; it is a manifestation of physical changes in the disk. Our group has monitored a set of 20 double-peaked emitters for the past 8 years (longer for some objects). Here, we characterize the variability of the double-peaked H alpha line profiles in five objects from our sample. By experimenting with simple models, we find that disks with a single precessing spiral arm are able to reproduce many of the variability trends that are seen in the data. ",Long Term Profile Variability of Double-Peaked Emission Lines in AGNs " Anomalous paramagnetic effects in dc magnetization were observed in the mixed state of LuNi2B2C, unlike any reported previously. It appears as a kink-like feature for H > 30 kOe and becomes more prominent with increasing field. A specific heat jump at the corresponding temperature suggests that the anomaly is due to a true bulk transition. 
A magnetic flux transition from a square to a hexagonal lattice is consistent with the anomaly. ",Anomalous Paramagnetic Effects in the Mixed State of LuNi2B2C " We use scanning photocurrent microscopy (SPCM) to investigate the properties of internal p-n junctions as well as local defects in ambipolar carbon nanotube (CNT) transistors. Our SPCM images show strong signals near metal contacts whose polarity and positions change depending on the gate bias. SPCM images analyzed in conjunction with the overall conductance also indicate the existence and gate-dependent evolution of internal p-n junctions near contacts in the n-type operation regime. To determine the p-n junction position and the depletion width with nanometer-scale resolution, a Gaussian fit was used. We also measure the electric potential profile of CNT devices at different gate biases, which shows that both local defects and induced electric fields can be imaged using the SPCM technique. Our experiment clearly demonstrates that SPCM is a valuable tool for imaging and optimizing electrical and optoelectronic properties of CNT-based devices. ",Photocurrent Imaging of p-n Junctions and Local Defects in Ambipolar Carbon Nanotube Transistors " Recent progress in self-supervised learning has demonstrated promising results in multiple visual tasks. An important ingredient in high-performing self-supervised methods is the use of data augmentation by training models to place different augmented views of the same image nearby in embedding space. However, commonly used augmentation pipelines treat images holistically, ignoring the semantic relevance of parts of an image -- e.g. a subject vs. a background -- which can lead to the learning of spurious correlations. Our work addresses this problem by investigating a class of simple, yet highly effective ""background augmentations"", which encourage models to focus on semantically relevant content by discouraging them from focusing on image backgrounds. 
Through a systematic investigation, we show that background augmentations lead to substantial improvements in performance across a spectrum of state-of-the-art self-supervised methods (MoCo-v2, BYOL, SwAV) on a variety of tasks, e.g. $\sim$+1-2% gains on ImageNet, enabling performance on par with the supervised baseline. Further, we find the improvement in limited-labels settings is even larger (up to 4.2%). Background augmentations also improve robustness to a number of distribution shifts, including natural adversarial examples, ImageNet-9, adversarial attacks, and ImageNet-Renditions. We also make progress in completely unsupervised saliency detection, in the process of generating saliency masks used for background augmentations. ",Characterizing and Improving the Robustness of Self-Supervised Learning through Background Augmentations " Adiabatic quantum computing is a universal model for quantum computing. Standard error correction methods require overhead that makes their application prohibitive for near-term devices. To mitigate the limitations of near-term devices, a number of hybrid approaches have been pursued in which a parameterized quantum circuit prepares and measures quantum states and a classical optimization algorithm minimizes an objective function that encompasses the solution to the problem of interest. In this work, we propose a different approach, starting by analyzing how a small perturbation of a Hamiltonian affects the parameters that minimize the energy within a family of parameterized quantum states. We derive a set of equations that allow us to compute the new minimum by solving a constrained linear system of equations that is obtained from measuring a series of observables on the unperturbed system. 
We then propose a discrete version of adiabatic quantum computing that can be implemented with NISQ devices and is at the same time insensitive to the initialization of the parameters and to other limitations inherent in the optimization part of variational quantum algorithms. We also derive a lower bound on the number of discrete steps needed to guarantee success. We compare our proposed algorithm with the Variational Quantum Eigensolver on two classical optimization problems, namely MaxCut and Number Partitioning, and on a quantum-spin configuration problem, the Transverse-Field Ising Chain model, and confirm that our approach demonstrates superior performance. ",Adiabatic quantum computing with parameterized quantum circuits " We consider a generic Hamiltonian that is suitable for describing a uniform BCS superfluid on a lattice with a two-point basis, and study its collective excitations at zero temperature. For this purpose, we first derive an effective Gaussian action for the pairing fluctuations, and then extract the low-energy dispersion relations for the in-phase Goldstone and out-of-phase Leggett modes along with the corresponding amplitude (i.e., the so-called Higgs) ones. We find that while the Goldstone mode is gapless at zero momentum and propagating in general, the Leggett mode becomes undamped only with sufficiently strong interactions. Furthermore, we show that, in addition to the conventional contribution that is controlled by the energy of the Bloch bands, the velocity of the Goldstone mode has a geometric contribution that is governed by the quantum metric tensor of the Bloch states. Our results suggest that the latter contribution dominates the velocity when the former one becomes negligible for a narrow or flat band. 
",Collective excitations of a BCS superfluid in the presence of two sublattices " The chi-squared based covariance approach allows one to estimate the correlations among desired observables related to nuclear matter directly from a set of fit data without taking recourse to the distributions of the nuclear matter parameters (NMPs). Such an approach is applied to study the correlations of tidal deformability of neutron star with the slope and the curvature parameters of nuclear symmetry energy governed by an extensive set of fit data on the finite nuclei together with the maximum mass of the neutron star. The knowledge of the distributions of NMPs consistent with the fit data is implicitly inbuilt in the Hessian matrix which is central to this covariance approach. Comparing our results with those obtained with the explicit use of the distributions of NMPs, we show that the appropriate correlations among NMPs as induced by the fit data are instrumental in strengthening the correlations of the tidal deformability with the symmetry energy parameters, without it, the said correlations tend to disappear. The interplay between isoscalar and isovector NMPs is also emphasized. ",Unveiling the correlations of tidal deformability with the nuclear symmetry energy parameters " We present asymptotic giant branch (AGB) models of solar metallicity, to allow the interpretation of observations of Galactic AGB stars, whose distances should be soon available after the first release of the Gaia catalogue. We find an abrupt change in the AGB physical and chemical properties, occurring at the threshold mass to ignite hot bottom burning,i.e. $3.5M_{\odot}$. Stars with mass below $3.5 M_{\odot}$ reach the C-star stage and eject into the interstellar medium gas enriched in carbon , nitrogen and $^{17}O$. The higher mass counterparts evolve at large luminosities, between $3\times 10^4 L_{\odot}$ and $10^5 L_{\odot}$. 
The mass expelled from the massive AGB stars shows the imprinting of proton-capture nucleosynthesis, with considerable production of nitrogen and sodium and destruction of $^{12}C$ and $^{18}O$. The comparison with the most recent results from other research groups is discussed, to evaluate the robustness of the present findings. Finally, we compare the models with recent observations of Galactic AGB stars, outlining the possibility offered by Gaia to shed new light on the evolutionary properties of this class of objects. ",Studying the evolution of AGB stars in the Gaia epoch " We present a lattice computation of the effective potential for O(2)-invariant $(\lambda\Phi^4)_4$ theory in the region of bare parameters corresponding to a classically scale-invariant theory. As expected from ``triviality'' and as in the one-component theory, we find very good agreement with the one-loop prediction, while a perturbative leading-log improvement of the effective potential fails to reproduce the Monte Carlo data. The mass $m_h$ of the free shifted radial field is related to the renormalized vacuum expectation value $v_R$ through the same relation $m^2_h=8\pi^2 v^2_R$ as in the one-component case. This confirms the prediction of a weakly interacting 2.2 TeV Higgs particle in the standard model. ",Lattice Computation of the Effective Potential in O(2)-Invariant $\lambda\Phi^4$ Theory " We consider a Josephson bijunction consisting of three superconducting reservoirs connected through two quantum dots. In equilibrium, the interdot coupling is sizable only for distances smaller than the superconducting coherence length. Application of commensurate dc voltages results in a time-periodic Hamiltonian and induces an interdot coupling at large distances. The basic mechanism of this long-range coupling is shown to be due to local multiple Andreev reflections on each dot, followed by quasiparticle propagation at energies larger than the superconducting gap. 
At large interdot distances we derive an effective non-Hermitian Hamiltonian describing two resonances coupled through a continuum. ",Long-range coupling between superconducting dots induced by periodic driving " We present near-infrared and optical observations of the Type Ic Supernova (SN) 2020oi in the galaxy M100 and the broad-lined Type Ic SN2020bvc in UGC 9379, using Gemini, LCO, SOAR, and other ground-based telescopes. The near-IR spectrum of SN2020oi at day 63 since the explosion shows strong CO emissions and a rising K-band continuum, which is the first unambiguous dust detection from a Type Ic SN. Non-LTE CO modeling shows that CO is still optically thick, and that the lower limit to the CO mass is 0.001 Msun. The dust temperature is 810 K, and the dust mass is ~10^(-5) Msun. We explore the possibilities that the dust is freshly formed in the ejecta, heated dust in the pre-existing circumstellar medium, and an infrared echo. The light curves of SN2020oi are consistent with a STELLA model with canonical explosion energy, 0.07 Msun Ni mass, and 0.7 Msun ejecta mass. A model of high explosion energy of ~10^(52) erg, 0.4 Msun Ni mass, 6.5 Msun ejecta mass with the circumstellar matter, reproduces the double-peaked light curves of SN2020bvc. We observe temporal changes of absorption features of the IR Ca~II triplet, S~I at 1.043 micron, and Fe~II at 5169 Angstrom. The blue-shifted lines indicate high velocities, up to 60,000 km/s for SN2020bvc and 20,000 km/s for SN2020oi, and the expansion velocity rapidly declines before the optical maximum. We present spectral signatures and diagnostics of CO and SiO molecular bands between 1.4 and 10 microns. ","Near-Infrared and Optical Observations of Type Ic SN2020oi and broad-lined Ic SN2020bvc: Carbon Monoxide, Dust and High-Velocity Supernova Ejecta" " This document contains supplementary material for the main articles in our Random Cayley Graphs project. 
We prove refined results about simple random walks on the integers and on the cycle. We are primarily interested in the entropy of these random walks at certain times and how this entropy changes when the time changes slightly. Additionally, we prove some large deviation and exit time estimates. We prove some results on the size of discrete lattice balls and how this size changes when the radius changes slightly. We do this in a general $L_q$ norm, with $q \in [1,\infty]$. We also prove some other technical results deferred from the main papers. We hope that some of the results, particularly the simple random walk estimates, will be useful in their own right for other researchers. ",Supplementary Material for Random Cayley Graphs Project " We introduce the notion of the second lattice width of a lattice polygon and use this to classify lattice triangles by their width and second width. This is equivalent to classifying lattice triangles contained in an $n \times m$ rectangle (and no smaller) up to affine equivalence. Using this classification we investigate the automorphism groups and Ehrhart theory of lattice triangles. We also show that the sequence counting lattice triangles contained in dilations of the unit square has generating function equal to the Hilbert series of a degree 8 hypersurface in $\mathbb{P}(1,1,1,2,2,2)$. ",Classification of lattice triangles by their two smallest widths " Surrogate-based optimization relies on so-called infill criteria (acquisition functions) to decide which point to evaluate next. When Kriging is used as the surrogate model of choice (also called Bayesian optimization), one of the most frequently chosen criteria is expected improvement. We argue that the popularity of expected improvement largely relies on its theoretical properties rather than empirically validated performance. 
A few results from the literature show evidence that, under certain conditions, expected improvement may perform worse than something as simple as the predicted value of the surrogate model. We benchmark both infill criteria in an extensive empirical study on the `BBOB' function set. This investigation includes a detailed study of the impact of problem dimensionality on algorithm performance. The results support the hypothesis that exploration loses importance with increasing problem dimensionality. A statistical analysis reveals that the purely exploitative search with the predicted value criterion performs better on most problems of five or higher dimensions. Possible reasons for these results are discussed. In addition, we give an in-depth guide for choosing the infill criteria based on prior knowledge about the problem at hand, its dimensionality, and the available budget. ",Expected Improvement versus Predicted Value in Surrogate-Based Optimization " We propose Sideways, an approximate backpropagation scheme for training video models. In standard backpropagation, the gradients and activations at every computation step through the model are temporally synchronized. The forward activations need to be stored until the backward pass is executed, preventing inter-layer (depth) parallelization. However, can we leverage smooth, redundant input streams such as videos to develop a more efficient training scheme? Here, we explore an alternative to backpropagation; we overwrite network activations whenever new ones, i.e., from new frames, become available. Such a more gradual accumulation of information from both passes breaks the precise correspondence between gradients and activations, leading to theoretically noisier weight updates. Counter-intuitively, we show that Sideways training of deep convolutional video networks not only still converges, but can also potentially exhibit better generalization compared to standard synchronized backpropagation.
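The activation-overwriting idea in the Sideways abstract above can be illustrated with a toy sketch (a minimal numpy mock-up with invented dimensions and learning rate, not the paper's implementation): the backward pass consumes whichever activations are currently stored, even if a newer frame has already overwritten them.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy two-layer linear network (dimensions are arbitrary choices).
W1 = rng.normal(size=(4, 8)) * 0.1
W2 = rng.normal(size=(2, 4)) * 0.1
lr = 0.05

# Activation buffers shared by the forward and backward passes.
act = {"x": None, "h": None}

def forward(x):
    # Overwrite the stored activations with the newest frame's values.
    act["x"] = x
    act["h"] = W1 @ x
    return W2 @ act["h"]

def backward(err):
    # The backward pass reads the *current* buffers, which may already
    # belong to a newer frame than the one that produced `err`.
    global W1, W2
    g_W2 = np.outer(err, act["h"])
    g_h = W2.T @ err
    g_W1 = np.outer(g_h, act["x"])
    W2 -= lr * g_W2
    W1 -= lr * g_W1

# A smooth, redundant "video": consecutive frames differ only slightly.
frame = rng.normal(size=8)
target = np.ones(2)

initial_loss = float(np.mean((forward(frame) - target) ** 2))
err = None
for _ in range(500):
    frame = frame + 0.01 * rng.normal(size=8)  # next, nearly identical frame
    y = forward(frame)          # overwrites the buffers with the new frame
    if err is not None:
        backward(err)           # gradient built from the *previous* frame's error
    err = y - target

final_loss = float(np.mean((forward(frame) - target) ** 2))
print(initial_loss, "->", final_loss)
```

With a slowly drifting input stream, the stale-activation updates still reduce the loss, mirroring the convergence claim in the abstract.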
",Sideways: Depth-Parallel Training of Video Models " We prove some Sawyer-type characterizations for multilinear fractional maximal function for the upper triangle case. We also provide some two-weight norm estimates for this operator. As one of the main tools, we use an extension of the usual Carleson Embedding that is an analogue of the P. L. Duren extension of the Carleson Embedding for measures. ",On two-weight norm estimates for multilinear fractional maximal function " We present a systematic treatment of line bundle geometry and Jacobi manifolds with an application to geometric mechanics that has not been noted in the literature. We precisely identify categories that generalise the ordinary categories of smooth manifolds and vector bundles to account for a lack of choice of a preferred unit, which in standard differential geometry is always given by the global constant function $1$. This is what we call the `unit-free' approach. After giving a characterisation of local Lie brackets via their symbol maps we apply our novel categorical language to review Jacobi manifolds and related notions such as Lichnerowicz brackets and Jacobi algebroids. The main advantage of our approach is that Jacobi geometry is recovered as the direct unit-free generalisation of Poisson geometry, with all the familiar notions translating in a straightforward manner. We then apply this formalism to the question of whether there is a unit-free generalisation of Hamiltonian mechanics. We identify the basic categorical structure of ordinary Hamiltonian mechanics to argue that it is indeed possible to find a unit-free analogue. This work serves as a prelude to the investigation of dimensioned structures, an attempt at a general mathematical framework for the formal treatment of physical quantities and dimensional analysis. 
",Jacobi Geometry and Hamiltonian Mechanics: the Unit-Free Approach " The development of autonomous agents which can interact with other agents to accomplish a given task is a core area of research in artificial intelligence and machine learning. Towards this goal, the Autonomous Agents Research Group develops novel machine learning algorithms for autonomous systems control, with a specific focus on deep reinforcement learning and multi-agent reinforcement learning. Research problems include scalable learning of coordinated agent policies and inter-agent communication; reasoning about the behaviours, goals, and composition of other agents from limited observations; and sample-efficient learning based on intrinsic motivation, curriculum learning, causal inference, and representation learning. This article provides a broad overview of the ongoing research portfolio of the group and discusses open problems for future directions. ",Deep Reinforcement Learning for Multi-Agent Interaction " We report a fiber loop quantum buffer based on a low-loss 2$\times$2 switch and a unit delay made of a fiber delay line. We characterize the device by using a two-photon polarization entangled state in which one photon of the entangled photon pair is stored and retrieved at a repetition rate up to 78$\,\rm{kHz}$. The device, which enables integer multiples of a unit delay, can store the qubit state in a unit of fiber delay line up to 5.4$\,\rm{km}$ and the number of loop round-trips up to 3. Furthermore, we configure the device with other active elements to realize integer multiplies and divider of a unit delay of a qubit. The quantum state tomography is performed on the retrieved photon and its entangled photon. We obtain a state fidelity $>94\%$ with a maximum storage time of 52$\,\mu\rm{sec}$. To further characterize the storing and retrieving processes of the device, we perform entanglement-assisted quantum process tomography on the buffered qubit state. 
The process fidelity of the device is $>$ 0.98. Our result implies that the device preserves the superposition and entanglement of a qubit state from a two-photon polarization-entangled state. This is a significant step towards facilitating applications in optical asynchronous transfer mode (ATM) based quantum networks. ",Fiber Loop Quantum Buffer for Photonic Qubits " A model for laser light absorption in electron-positron plasmas self-consistently created via QED cascades is described. The laser energy is mainly absorbed due to hard photon emission via nonlinear Compton scattering. The degree of absorption depends on the laser intensity and the pulse duration. The QED cascades are studied with multi-dimensional particle-in-cell simulations complemented by a QED module and a macro-particle merging algorithm that makes it possible to handle the exponential growth of the number of particles. Results range from moderate-intensity regimes ($\sim$ 10 PW) where the laser absorption is negligible, to extreme intensities (> 100 PW) where the degree of absorption reaches 80%. Our study demonstrates good agreement between the analytical model and simulations. The expected properties of the hard photon emission and the generated pair-plasma are investigated, and the experimental signatures for near-future laser facilities are discussed. ",Laser absorption via QED cascades in counter propagating laser pulses " In this paper, we derive a new form of maximum principle for smooth functions on a complete noncompact Riemannian manifold $M$ for which there exists a bounded vector field $X$ such that $\langle\nabla f,X\rangle\geq 0$ on $M$ and $\mathrm{div} X\geq af$ outside a suitable compact subset of $M$, for some constant $a>0$, under the assumption that $M$ has either polynomial or exponential volume growth.
We then use it to obtain some straightforward applications to smooth functions and, more interestingly, to Bernstein-type results for hypersurfaces immersed into a Riemannian manifold endowed with a Killing vector field, as well as to some results on the existence and size of minimal submanifolds immersed into a Riemannian manifold endowed with a conformal vector field. ",A maximum principle related to volume growth and applications " An interesting example for collective decision making is the so-called Mexican wave during which the spectators in a stadium leap to their feet with their arms up and then sit down again following those to their left (right) with a small delay. Here we use a simple, but realistic model to explain how the combination of the local and global interactions of the spectators produces a breaking of the symmetry resulting in the replacement of the symmetric solution -- containing two propagating waves -- by a single wave moving in one of the two possible directions. Our model is based on and compared to the extensive observations of volunteers filling out the related questionnaire we have posted on the Internet. We find that, as a function of the parameter controlling the strength of the global interactions, the transition to the single wave solution has features reminiscent of discontinuous transitions. After the spontaneous symmetry breaking the two directions of propagation are still statistically equivalent. We investigate also how this remaining symmetry is broken in real stadia by a small asymmetrical term in the perception of spectators. 
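The threshold dynamics described in the Mexican-wave abstract above can be mocked up as a simple excitable-medium simulation on a ring (all parameter values and the deterministic threshold rule are illustrative assumptions, not those fitted in the paper):

```python
import numpy as np

N = 80          # spectators seated around a stadium ring
T_ACTIVE = 2    # steps a triggered spectator stays standing
T_REFRACT = 5   # further steps before they can be triggered again

# state: 0 = sitting and excitable; >0 = countdown through the
# standing (active) and refractory phases.
state = np.zeros(N, dtype=int)
state[:5] = T_ACTIVE + T_REFRACT     # initial seed group

def step(state, bias=0.9):
    # An excitable spectator stands up when stimulated above threshold;
    # `bias` (an assumed parameter) weights the left-hand neighbour more
    # strongly, which is what selects a single propagation direction.
    active = state > T_REFRACT       # spectators currently standing
    new = state.copy()
    new[state > 0] -= 1
    for i in range(N):
        if state[i] == 0:
            stim = bias * active[(i - 1) % N] + (1 - bias) * active[(i + 1) % N]
            if stim >= 0.5:          # deterministic threshold response
                new[i] = T_ACTIVE + T_REFRACT
    return new

for _ in range(60):
    state = step(state)

front = np.nonzero(state > T_REFRACT)[0]
print(front)   # position of the standing wave front after 60 steps
```

The left/right bias plays the role of the small asymmetric perception term: it breaks the remaining symmetry between the two propagation directions and sends a single wave one way around the ring.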
",Initiating a Mexican wave: An instantaneous collective decision with both short and long range interactions " In recent years, superfluid dark matter (SfDM) has become a competitive model of emergent modified Newtonian dynamics (MOND) scenario: MOND phenomenons naturally emerge as a derived concept due to an extra force mediated between baryons by phonons as a result of axionlike particles condensed as superfluid at galactic scales; Beyond galactic scales, these axionlike particles behave as normal fluid without phonon-mediated MOND-like force between baryons, therefore SfDM also maintains the usual success of $\Lambda$CDM at cosmological scales. In this paper, we use gravitational waves (GWs) to probe the relevant parameter space of SfDM. GWs through Bose-Einstein condensate (BEC) could propagate with a speed slightly deviation from the speed-of-light due to the change in the effective refractive index, which depends on the SfDM parameters and GW-source properties. We find that Five hundred meter Aperture Spherical Telescope (FAST), Square Kilometre Array (SKA) and International Pulsar Timing Array (IPTA) are the most promising means as GW probe of relevant parameter space of SfDM. Future space-based GW detectors are also capable of probing SfDM if a multimessenger approach is adopted. ",Gravitational wave as probe of superfluid dark matter " Application markets provide a communication channel between app developers and their end-users in form of app reviews, which allow users to provide feedback about the apps. Although security and privacy in mobile apps are one of the biggest issues, it is unclear how much people are aware of these or discuss them in reviews. In this study, we explore the privacy and security concerns of users using reviews in the Google Play Store. For this, we conducted a study by analyzing around 2.2M reviews from the top 539 apps of this Android market. 
We found that 0.5\% of these reviews are related to the security and privacy concerns of the users. We further investigated these apps by performing dynamic analysis, which provided us with valuable insights into their actual behaviors. Based on the different perspectives, we categorized the apps and evaluated how the different factors influence the users' perception of the apps. It was evident from the results that the number of permissions that the apps request plays a dominant role in this matter. We also found that sending out the location can affect the users' perception of the app. The other factors do not directly affect the privacy and security concerns for the users. ",An Empirical Study on User Reviews Targeting Mobile Apps' Security & Privacy " This paper presents an analytical model of point elastic contact between a rigid body of arbitrary geometry and a plane surface. A simple analytical model is developed to evaluate the contact force from the volume of interpenetration, the area and perimeter of the base of this volume, and the mechanical characteristics of the surfaces in contact. Analytical and experimental validations are made for this model in the case of simple shapes (spherical, conical and pyramidal). Next, an approach to solving the case of contact between a rigid body and a viscoelastic plane is presented. The elastic constants are replaced by an integral operator corresponding to the viscoelastic stress-strain relation. Finally, the viscoelastic point contact is studied analytically and validated experimentally. ",A simple model for elastic and viscoelastic punch indentation problems with experimental validation " A graph $G$ is $k$-critical if $G$ is not $(k-1)$-colorable, but every proper subgraph of $G$ is $(k-1)$-colorable.
A graph $G$ is $k$-choosable if $G$ has an $L$-coloring from every list assignment $L$ with $|L(v)|=k$ for all $v$, and a graph $G$ is \emph{$k$-list-critical} if $G$ is not $(k-1)$-choosable, but every proper subgraph of $G$ is $(k-1)$-choosable. The problem of bounding (from below) the number of edges in a $k$-critical graph has been widely studied, starting with the work of Gallai and culminating with the seminal results of Kostochka and Yancey, who essentially solved the problem. In this paper, we improve the best lower bound on the number of edges in a $k$-list-critical graph. Our proof uses the discharging method, which makes it simpler and more modular than previous work in this area. ","Edge Lower Bounds for List Critical Graphs, via Discharging" " Adopting the omni-Lie algebroid approach to Dirac-Jacobi structures, we propose and investigate a notion of weak dual pairs in Dirac-Jacobi geometry. Their main motivating examples arise from the theory of multiplicative precontact structures on Lie groupoids. Among other properties of weak dual pairs, we prove two main results. 1) We show that the property of fitting in a weak dual pair defines an equivalence relation for Dirac-Jacobi manifolds. So, in particular, we get the existence of self-dual pairs and this immediately leads to an alternative proof of the normal form theorem around Dirac-Jacobi transversals. 2) We prove the characteristic leaf correspondence theorem for weak dual pairs paralleling and extending analogous results for symplectic and contact dual pairs. Moreover, the same ideas of this proof apply to obtain a presymplectic leaf correspondence for weak dual pairs in Dirac geometry (not yet present in the literature). ",Weak Dual Pairs in Dirac-Jacobi Geometry " Here we report observations of the two lowest inversion transitions of ammonia with the 70-m Tidbinbilla radio telescope.
They were conducted to determine the kinetic temperatures in the dense clumps of the G333 giant molecular cloud associated with RCW 106 and to examine the effect that accurate temperatures have on the calculation of derived quantities such as mass. This project is part of a larger investigation to understand the timescales and evolutionary sequence associated with high-mass star formation, particularly its earliest stages. Assuming that the initial chemical composition of a giant molecular cloud is uniform, any abundance variations within will be due to evolutionary state. We have identified 63 clumps using SIMBA 1.2-mm dust continuum maps and have calculated gas temperatures for most (78 per cent) of these dense clumps. After using Spitzer GLIMPSE 8.0 $\mu$m emission to separate the sample into IR-bright and IR-faint clumps, we use statistical tests to examine whether our classification shows different populations in terms of mass and temperature. We find that clump mass and column density show no significant population difference, and that kinetic temperature is the best parameter to distinguish the gravitationally bound state of each clump. The kinetic temperature was the only parameter found to have a significantly low probability of being drawn from the same population. This suggests that clump radius does not have a large effect on the temperature of a clump, so clumps of similar radii may have different internal heating mechanisms. We also find that while the IR-bright clumps have a higher median virial mass, both samples have a similar range for both virial mass and FWHM. 87 per cent (40 of 46) of the clumps have masses larger than the virial mass, suggesting that they will form stars or are already undergoing star formation. ",Molecular line mapping of the giant molecular cloud associated with RCW 106 - IV.
Ammonia towards dust emission " We report a study of the pressure effect (PE) on the in-plane magnetic field penetration depth lambda_{ab} in YBa_2Cu_4O_8 by means of Meissner fraction measurements. A pronounced PE on lambda_{ab}^{-2}(0) was observed with a maximum relative shift of \Delta\lambda^{-2}_{ab}/\lambda^{-2}_{ab}= 44(3)% at a pressure of 10.2 kbar. It arises from the pressure dependence of the effective in-plane charge carrier mass and pressure-induced charge carrier transfer from the CuO chains to the superconducting CuO_2 planes. The present results imply that the charge carriers in YBa_2Cu_4O_8 are coupled to the lattice. ",Pressure effect on the in-plane magnetic penetration depth in YBa_2Cu_4O_8 " The acceleration of the universe can be explained either through dark energy or through the modification of gravity on large scales. In this paper we investigate modified gravity models and compare their observable predictions with dark energy models. Modifications of general relativity are expected to be scale-independent on super-horizon scales and scale-dependent on sub-horizon scales. For scale-independent modifications, utilizing the conservation of the curvature scalar and a parameterized post-Newtonian formulation of cosmological perturbations, we derive results for large scale structure growth, weak gravitational lensing, and cosmic microwave background anisotropy. For scale-dependent modifications, inspired by recent $f(R)$ theories, we introduce a parameterization for the gravitational coupling $G$ and the post-Newtonian parameter $\gamma$. These parameterizations provide a convenient formalism for testing general relativity. However, we find that if dark energy is generalized to include both entropy and shear stress perturbations, and the dynamics of dark energy is unknown a priori, then modified gravity cannot in general be distinguished from dark energy using cosmological linear perturbations.
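As an illustration of such a parameterization (our notation, not necessarily the one used in the paper), scale-dependent modifications of this kind are often encoded in two free functions entering the perturbed field equations, with general relativity recovered when $\mu = \gamma = 1$:

```latex
% Modified Poisson equation for the Newtonian potential \Psi
k^2 \Psi = -4\pi G\, a^2\, \mu(k, a)\, \rho\, \Delta ,
% ratio of the two metric potentials (gravitational slip)
\gamma(k, a) = \Phi / \Psi .
```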
",Distinguishing Modified Gravity from Dark Energy " We demonstrate that categories of continuous actions of topological monoids on discrete spaces are Grothendieck toposes. We exhibit properties of these toposes, giving a solution to the corresponding Morita-equivalence problem. We characterize these toposes in terms of their canonical points. We identify natural classes of representatives with good topological properties, `powder monoids' and then `complete monoids', for the Morita-equivalence classes of topological monoids. Finally, we show that the construction of these toposes can be made (2-)functorial by considering geometric morphisms induced by continuous semigroup homomorphisms. ",Toposes of Topological Monoid Actions " Context: The complex system HD 100453 AB with a ring-like circumprimary disk and two spiral arms, one of which is pointing to the secondary, is a good laboratory to test spiral formation theories. Aims: To quantify the interaction of HD 100453 B with the circumprimary disk. Methods: Using ALMA band 6 dust continuum and CO isotopologue observations we study the HD 100453 AB system with a spatial resolution of 0.09"" x 0.17"" at 234 GHz. We use SPH simulations and orbital fitting to investigate the tidal influence of the companion on the disk. Results: We resolve the continuum emission around HD 100453 A into a disk between 0.22"" and 0.40"" with an inclination of 29.5 deg. and a position angle of 151.0 deg., an unresolved inner disk, and excess mm emission cospatial with the northern spiral arm which was previously detected using scattered light observations. We also detect CO emission from 7 au (well within the disk cavity) out to 1.10"", i.e., overlapping with HD 100453 B at least in projection. The outer CO disk PA and inclination differ by up to 10 deg. from the values found for the inner CO disk and the dust continuum emission, which we interpret as due to gravitational interaction with HD 100453 B. 
Both the spatial extent of the CO disk and the detection of mm emission at the same location as the northern spiral arm are in disagreement with the previously proposed near co-planar orbit of HD 100453 B. Conclusions: We conclude that HD 100453 B has an orbit that is significantly misaligned with the circumprimary disk. Because it is unclear whether such an orbit can explain the observed system geometry, we highlight an alternative scenario that explains all detected disk features, in which another, as yet undetected, low-mass close companion within the disk cavity shepherds a misaligned inner disk whose slowly precessing shadows excite the spiral arms. ",ALMA study of the HD 100453 AB system and the tidal interaction of the companion with the disk " The thermal properties of cold dense nuclear matter are investigated with chiral perturbation theory. The evolution curves for the baryon number density, baryon number susceptibility, pressure and the equation of state are obtained. The chiral condensate is calculated and our result shows that when the baryon chemical potential goes beyond $1150 \mathrm{MeV}$, the absolute value of the quark condensate decreases rapidly, which indicates a tendency towards chiral restoration. ",The thermal evolution of nuclear matter at zero temperature and definite baryon number density in chiral perturbation theory " A review of the potential sensitivity of the LHCb experiment to the parton distribution functions is given. Studies of dimuon events coming from Z, W and low-mass Drell-Yan production are presented and compared to MSTW theoretical predictions. ",Potential PDF sensitivity at LHCb " The solutions that describe the motion of the classical simple pendulum have been known for a very long time and are given in terms of elliptic functions, which are doubly periodic functions in the complex plane.
The independent variable of the solutions is time and it can be considered either as a real variable or as a purely imaginary one, which introduces a rich symmetry structure in the space of solutions. When the solutions are written in terms of the Jacobi elliptic functions, the symmetry is encoded in the functional form of their modulus, and is described mathematically by the six-dimensional coset group $\Gamma/\Gamma(2)$ where $\Gamma$ is the modular group and $\Gamma(2)$ is its congruence subgroup of second level. In this paper we discuss the physical consequences this symmetry has on the pendulum motions, and it is argued that they have properties similar to the ones termed duality symmetries in other areas of physics, such as field theory and string theory. In particular, a single solution of purely imaginary time for all allowed values of the total mechanical energy is given and obtained as the $S$-dual of a single solution of real time, where $S$ stands for the $S$ generator of the modular group. ",Duality symmetries behind solutions of the classical simple pendulum " We utilize the ALCOR model for mid-rapidity hadron number predictions at AGS, SPS and RHIC energies. We present simple fits for the energy dependence of stopping and quark production. ",Quark coalescence in the mid rapidity region at RHIC " In this work, the quantization of the most general Bianchi Type I geometry, with and without a cosmological constant, is considered. In the spirit of identifying and subsequently removing as many gauge degrees of freedom as possible, a reduction of the initial 6--dimensional configuration space is presented. This reduction is achieved by imposing as additional conditions on the wave function, the quantum version of the --linear in momenta-- classical integrals of motion (conditional symmetries).
The vector fields inferred from these integrals induce, through their integral curves, motions in the configuration space which can be identified with the action of the automorphism group of Type I, i.e. $GL(3,\Re)$. Thus, a wave function depending on one degree of freedom, namely the determinant of the scale factor matrix, is found. A measure for constructing the Hilbert space is proposed. This measure respects the above-mentioned symmetries, and is also invariant under the classical property of covariance under arbitrary scalings of the Hamiltonian (quadratic constraint). ",Conditional Symmetries and the Quantization of Bianchi Type I Vacuum Cosmologies with and without Cosmological Constant " In this paper, we investigate the use of data obtained from prompting a large generative language model, ChatGPT, to generate synthetic training data with the aim of augmenting data in low-resource scenarios. We show that with appropriate task-specific ChatGPT prompts, we outperform the most popular existing approaches for such data augmentation. Furthermore, we investigate methodologies for evaluating the similarity of the augmented data generated from ChatGPT with the aim of validating and assessing the quality of the data generated. ",ZeroShotDataAug: Generating and Augmenting Training Data with ChatGPT " From the work of Erd\H{o}s and R\'{e}nyi from 1963 it is known that almost all graphs have no symmetry. In 2017, Lupini, Man\v{c}inska and Roberson proved a quantum counterpart: Almost all graphs have no quantum symmetry. Here, the notion of quantum symmetry is phrased in terms of Banica's definition of quantum automorphism groups of finite graphs from 2005, in the framework of Woronowicz's compact quantum groups. Now, Erd\H{o}s and R\'{e}nyi also proved a complementary result in 1963: Almost all trees do have symmetry. The crucial point is the almost sure existence of a cherry in a tree.
But even more is true: We almost surely have two cherries in a tree - and we derive that almost all trees have quantum symmetry. We give an explicit proof of this quantum counterpart of Erd\H{o}s and R\'{e}nyi's result on trees. ",Almost all trees have quantum symmetry " This workshop focuses on visualization education, literacy, and activities. It aims to streamline previous efforts and initiatives of the visualization community to provide a format for education and engagement practices in visualization. It intends to bring together junior and senior scholars to share research and experience and to discuss novel activities, teaching methods, and research challenges. The workshop aims to serve as a platform for interdisciplinary researchers within and beyond the visualization community such as education, learning analytics, science communication, psychology, or people from adjacent fields such as data science, AI, and HCI. It will include presentations of research papers and practical reports, as well as hands-on activities. In addition, the workshop will allow participants to discuss challenges they face in data visualization education and sketch a research agenda of visualization education, literacy, and activities. ","EduVis: Workshop on Visualization Education, Literacy, and Activities" " Over the last decade, most of the increase in computing power has been gained by advances in accelerated many-core architectures, mainly in the form of GPGPUs. While accelerators achieve phenomenal performances in various computing tasks, their utilization requires code adaptations and transformations. Thus, OpenMP, the most common standard for multi-threading in scientific computing applications, introduced offloading capabilities between host (CPUs) and accelerators since v4.0, with increasing support in the successive v4.5, v5.0, v5.1, and the latest v5.2 versions. 
Recently, two state-of-the-art GPUs -- the Intel Ponte Vecchio Max 1100 and the NVIDIA A100 -- were released to the market, with the oneAPI and NVHPC compilers for offloading, respectively. In this work, we present early performance results of OpenMP offloading capabilities to these devices while specifically analyzing the portability of advanced directives (using SOLLVE's OMPVV test suite) and the scalability of the hardware in a representative scientific mini-app (the LULESH benchmark). Our results show that the coverage for version 4.5 is nearly complete in both the latest NVHPC and oneAPI tools. However, we observed a lack of support in versions 5.0, 5.1, and 5.2, which is particularly noticeable when using NVHPC. From the performance perspective, we found that the PVC1100 and A100 are relatively comparable on the LULESH benchmark. While the A100 is slightly better due to its higher memory bandwidth, the PVC1100 reaches the next problem size (400^3) scalably due to its larger memory. ",Portability and Scalability of OpenMP Offloading on State-of-the-art Accelerators " We calculate production cross sections of a forward quark-gluon pair and of two gluons at mid-rapidity in Deep Inelastic Scattering and in high energy proton-nucleus collisions. The calculation is performed in the framework of the Color Glass Condensate formalism. We first calculate the cross sections in the quasi-classical approximation, which includes multiple rescatterings in the target. We then proceed to include the effects of non-linear small-x evolution in the production cross sections. It is interesting to note that our result for the two-gluon production cross section appears to be in direct violation of AGK cutting rules, which is the first example of such violation in QCD.
The calculated quark-gluon and gluon-gluon production cross sections can be used to construct theoretical predictions for two-particle azimuthal correlations at RHIC and LHC (I^{p(d)A}) as well as for Deep Inelastic Scattering experiments at HERA and eRHIC. ",Inclusive Two-Gluon and Valence Quark-Gluon Production in DIS and pA " The inherent ambiguity in ground-truth annotations of 3D bounding boxes, caused by occlusions, signal missing, or manual annotation errors, can confuse deep 3D object detectors during training, thus deteriorating detection accuracy. However, existing methods overlook such issues to some extent and treat the labels as deterministic. In this paper, we formulate the label uncertainty problem as the diversity of potentially plausible bounding boxes of objects. Then, we propose GLENet, a generative framework adapted from conditional variational autoencoders, to model the one-to-many relationship between a typical 3D object and its potential ground-truth bounding boxes with latent variables. The label uncertainty generated by GLENet is a plug-and-play module and can be conveniently integrated into existing deep 3D detectors to build probabilistic detectors and supervise the learning of the localization uncertainty. Besides, we propose an uncertainty-aware quality estimator architecture in probabilistic detectors to guide the training of the IoU-branch with predicted localization uncertainty. We incorporate the proposed methods into various popular base 3D detectors and demonstrate significant and consistent performance gains on both KITTI and Waymo benchmark datasets. Especially, the proposed GLENet-VR outperforms all published LiDAR-based approaches by a large margin and achieves the top rank among single-modal methods on the challenging KITTI test set. The source code and pre-trained models are publicly available at \url{https://github.com/Eaphan/GLENet}. 
",GLENet: Boosting 3D Object Detectors with Generative Label Uncertainty Estimation " Thorium-227-based alpha-particle radiopharmaceutical therapies (alpha-RPTs) are currently being investigated in several clinical and pre-clinical studies. After administration, Thorium-227 decays to Radium-223, another alpha-particle-emitting isotope, which redistributes within the patient. Reliable dose quantification of both Thorium-227 and Radium-223 is clinically important, and SPECT can perform this quantification as these isotopes also emit gamma-ray photons. However, reliable quantification is challenging for several reasons: the orders-of-magnitude lower activity compared to conventional SPECT, resulting in a very low number of detected counts, the presence of multiple photopeaks and substantial overlap in the emission spectra of these isotopes. To address these issues, we propose a multiple-energy-window projection-domain quantification (MEW-PDQ) method that jointly estimates the regional activity uptake of both Thorium-227 and Radium-223 directly using the SPECT projection data from multiple energy windows. We evaluated the method with realistic simulation studies conducted with anthropomorphic digital phantoms, including a virtual imaging trial in the context of imaging patients with bone metastases of prostate cancer who were treated with Thorium-227-based alpha-RPTs. The proposed method yielded reliable regional uptake estimates of both isotopes and outperformed state-of-art methods across different lesion sizes, contrasts, and varying levels of intra-lesion heterogeneity. This superior performance was also observed in the virtual imaging trial. Additionally, the variance of the estimated uptake approached the Cram\'er-Rao lower bound-defined theoretical limit. These results provide strong evidence in support of this method for reliable uptake quantification in Thorium-227-based alpha-RPTs. 
",Joint regional uptake quantification of Thorium-227 and Radium-223 using a multiple-energy-window projection-domain quantitative SPECT method " We introduce a string-inspired model for a bouncing/cyclic universe, utilizing the scalar-tachyon coupling as well as the contribution from curvature in a closed universe. The universe undergoes locked inflation, tachyon-matter-dominated rolling expansion, turnaround and contraction, as well as the subsequent deflation and ""bounce"" in each cycle of the cosmological evolution. We perform extensive analytic and numerical studies of the above evolution process. The minimum size of the universe is nonzero for generic initial values. The smooth bounce is made possible because of the negative contribution to the effective energy density by the curvature term. No ghosts are ever generated at any point in the entire evolution of the universe, with the Null, Weak, and Dominant Energy Conditions preserved even at the bounce points, contrary to many bounce models previously proposed. The Strong Energy Condition is satisfied in periods with tachyon matter domination. ",Bound to bounce: a coupled scalar-tachyon model for a smooth bouncing/cyclic universe " We report on comprehensive results identifying the ground state of the triangular-lattice-structured YbZnGaO$_4$ to be a spin glass, including no long-range magnetic order, prominent broad excitation continua, and absence of magnetic thermal conductivity. More crucially, from the ultralow-temperature a.c. susceptibility measurements, we unambiguously observe frequency-dependent peaks around 0.1 K, indicating the spin-glass ground state. We suggest that this conclusion also holds for its sister compound YbMgGaO$_4$, which is confirmed by the observation of spin freezing at low temperatures. We consider disorder and frustration to be the main driving force for the spin-glass phase. 
",Spin-glass ground state in a triangular-lattice compound YbZnGaO$_4$ " Motivated by the direct discovery of gravitational waves (GWs) from black holes and neutron stars, there is growing interest in investigating GWs from other sources. Among them, GWs from cosmic strings are particularly fascinating since they naturally appear in a large class of grand unified theories (GUTs). Remarkably, a series of pulsar-timing arrays (PTAs) might have already observed GWs in the nHz regime, hinting at the formation of a cosmic string network in the early universe, which could originate from a phase transition associated with the seesaw scale emerging from a GUT. In this work, we show that if these observations from PTAs are confirmed, GWs from cosmic strings, combined with fermion masses, gauge coupling unification, and proton decay constraints, make the parameter space of the minimal SO(10) GUT exceedingly restrictive. The proposed minimal model is highly predictive and will be fully tested in a number of upcoming gravitational wave observatories. ",Probing Minimal Grand Unification through Gravitational Waves, Proton Decay, and Fermion Masses" " Despite much recent work, detecting out-of-distribution (OOD) inputs and adversarial attacks (AA) for computer vision models remains a challenge. In this work, we introduce a novel technique, DAAIN, to detect OOD inputs and AA for image segmentation in a unified setting. Our approach monitors the inner workings of a neural network and learns a density estimator of the activation distribution. We equip the density estimator with a classification head to discriminate between regular and anomalous inputs. To deal with the high-dimensional activation space of typical segmentation networks, we subsample the activations to obtain homogeneous spatial and layer-wise coverage. The subsampling pattern is chosen once per monitored model and kept fixed for all inputs. 
Since the attacker has access to neither the detection model nor the sampling key, it becomes harder for them to attack the segmentation network, as the attack cannot be backpropagated through the detector. We demonstrate the effectiveness of our approach using an ESPNet trained on the Cityscapes dataset as the segmentation model, an affine Normalizing Flow as the density estimator, and blue noise to ensure homogeneous sampling. Our model can be trained on a single GPU, making it compute-efficient and deployable without requiring specialized accelerators. ",DAAIN: Detection of Anomalous and Adversarial Input using Normalizing Flows " Ultra-diffuse galaxies (UDGs) are spatially extended, low surface brightness stellar systems with regular elliptical-like morphology found in a wide range of environments. Studies of the internal dynamics and dark matter content of UDGs that would elucidate their formation and evolution have been hampered by their low surface brightnesses. Here we present spatially resolved velocity profiles, stellar velocity dispersions, ages and metallicities for 9 UDGs in the Coma cluster. We use intermediate-resolution spectra obtained with Binospec, the MMT's new high-throughput optical spectrograph. We derive dark matter fractions between 50~\%\ and 90~\% within the half-light radius using Jeans dynamical models. Three galaxies exhibit major axis rotation, two others have highly anisotropic stellar orbits, and one shows signs of triaxiality. In the Faber--Jackson and mass--metallicity relations, the 9 UDGs fill the gap between cluster dwarf elliptical (dE) and fainter dwarf spheroidal (dSph) galaxies. Overall, the observed properties of all 9 UDGs can be explained by a combination of internal processes (supernova feedback) and environmental effects (ram-pressure stripping, interaction with neighbors). These observations suggest that UDGs and dEs are members of the same galaxy population. 
",Internal dynamics and stellar content of nine ultra-diffuse galaxies in the Coma cluster prove their evolutionary link with dwarf early-type galaxies " We show how a proper radial modulation of the composition of core-multi-shell nanowires critically enhances the control of the free-carrier density in the high-mobility core with respect to core-single-shell structures, thus overcoming the technological difficulty of fine-tuning the remote doping density. We calculate the electron population of the different nanowire layers as a function of the doping density and of several geometrical parameters by means of a self-consistent Schr\""odinger-Poisson approach: free carriers tend to localize in the outer shell and screen the core from the electric field of the dopants. ",Tailoring the core electron density in modulation-doped Core-Multi-Shell nanowires " We investigate the morphology of the stellar distribution in a sample of Milky Way (MW)-like galaxies in the TNG50 simulation. Using a local-in-shell iterative method (LSIM) as the main approach, we explicitly show evidence of twisting (in about 52% of halos) and stretching (in 48% of them) in real space. This is matched with the re-orientation observed in the eigenvectors of the inertia tensor and gives us a clear picture of a re-oriented stellar distribution. We compare the shape profiles of the dark matter (DM) halo and the stellar distribution and, quite remarkably, find that their radial profiles are fairly close, especially at small galactocentric radii where the stellar disk is located. This implies that the DM halo is somewhat aligned with the stars in response to the baryonic potential. The level of alignment mostly decreases away from the center. We study the impact of substructures on the orbital circularity parameter. 
It is demonstrated that in some cases, far-away substructures counter-rotate relative to the central stars and may flip the sign of the total angular momentum and thus of the orbital circularity parameter. Truncating them above 150 kpc, however, retains the disky structure of the galaxy as per the initial selection. Including the impact of substructures on the shape of the stellar distribution, we explicitly show that their contribution is subdominant. Overlaying our theoretical results onto the observational constraints from the previous literature, we establish fair agreement. ",Inferring the Morphology of Stellar Distribution in TNG50: Twisted and Twisted-Stretched shapes " The differential equation (DE) with proportional delay is a particular case of the time-dependent delay differential equation (DDE). In this paper, we solve non-linear DEs with proportional delay using the successive approximation method (SAM). We prove existence and uniqueness theorems and stability results for DEs with proportional delay using SAM. We derive convergence results for these equations by using the Lipschitz condition. We generalize these results to fractional differential equations (FDEs) and systems of FDEs containing the Caputo fractional derivative. Further, we obtain the series solutions of the pantograph equation and the Ambartsumian equation in the form of power series which are convergent for all reals. Finally, we illustrate the efficacy of the SAM by example. The results obtained by SAM are compared with exact solutions and other iterative methods. It is observed that SAM is simpler than other methods and the solutions obtained using SAM are consistent with the exact solutions. 
In this paper we describe a previously unknown ""precession"" of the peak emission direction with time, both before and after the merger, about the total angular momentum direction. We demonstrate that the gravitational wave polarization encodes the orientation of this direction relative to the line of sight. We argue that the effects of polarization can be estimated nonparametrically, directly from the gravitational wave signal as seen along one line of sight, as a slowly varying feature on top of a rapidly varying carrier. After merger, our results can be interpreted as a coherent excitation of quasinormal modes of different angular orders, a superposition which naturally ""precesses"" and modulates the line-of-sight amplitude. Recent analytic calculations have arrived at a similar geometric interpretation. We suspect the line-of-sight polarization content will be a convenient observable with which to define new high-precision tests of general relativity using gravitational waves. Additionally, as the nonlinear merger process seeds the initial coherent perturbation, we speculate that the amplitude of this effect provides a new probe of the strong-field dynamics during merger. To demonstrate the ubiquity of the effects we describe, we summarize the post-merger evolution of 104 generic precessing binary mergers. Finally, we provide estimates for the detectable impacts of precession on the waveforms from high-mass sources. These expressions may identify new precessing binary parameters whose waveforms are dissimilar from the existing sample. ",Precession during merger 1: Strong polarization changes are observationally accessible features of strong-field gravity during binary black hole merger " The morphology of haloes informs both cosmological and galaxy formation models. We use the Minkowski Functionals (MFs) to characterize the actual morphology of haloes, only partially captured by a smooth density profile, going beyond spherical or ellipsoidal symmetry. 
We employ semi-analytical haloes with NFW and $\alpha\beta\gamma$-profiles and spherical or ellipsoidal shapes to obtain a clear interpretation of the MFs as functions of the inner and outer slope, concentration and sphericity parameters. We use the same models to mimic the density profiles of $N$-body haloes, showing that their MFs clearly differ, being sensitive to internal substructures. This highlights the benefit of MFs at halo scales as promising statistics to improve the spatial modeling of dark matter, crucial for future lensing, Sunyaev-Zel'dovich, and X-ray mass maps as well as for dark matter detection based on high-accuracy data. ",Morphology of dark matter haloes beyond triaxiality " Massive protostars attain high luminosities as they are actively accreting, and the radiation pressure exerted on the gas in the star's atmosphere may launch isotropic high-velocity winds. These winds will collide with the surrounding gas, producing shock-heated ($T\sim 10^7$ K) tenuous gas that adiabatically expands and pushes on the dense gas that may otherwise be accreted. We present a suite of 3D radiation-magnetohydrodynamic simulations of the collapse of massive prestellar cores and include radiative feedback from the stellar and dust-reprocessed radiation fields, collimated outflows, and, for the first time, isotropic stellar winds to model how these processes affect the formation of massive stars. We find that winds are initially launched when the massive protostar is still accreting and that its wind properties evolve as the protostar contracts to the main sequence. Wind feedback drives asymmetric adiabatic wind bubbles that have a bipolar morphology because the dense circumstellar material pinches the expansion of the hot shock-heated gas. We term this the ""wind tunnel effect."" If the core is magnetized, wind feedback is initially less efficient at driving adiabatic wind bubbles because magnetic tension delays their growth. 
We find that wind feedback eventually quenches accretion onto $\sim$30 $\rm{M_{\rm \odot}}$ protostars that form from the collapse of the isolated cores simulated here. Hence, our results suggest that $\gtrsim$30 $\rm{M_{\rm \odot}}$ stars likely require larger-scale dynamical inflows from their host cloud to overcome wind feedback. Additionally, we discuss the implications of observing adiabatic wind bubbles with \textit{Chandra} while the massive protostars are still highly embedded. ","A Massive Star is Born: How Feedback from Stellar Winds, Radiation Pressure, and Collimated Outflows Limits Accretion onto Massive Stars" " Recent theoretical work has demonstrated that Neighbor Joining applied to concatenated DNA sequences is a statistically consistent method of species tree reconstruction. This brief note compares the accuracy of this approach to other popular statistically consistent species tree reconstruction algorithms, including ASTRAL-II, Neighbor Joining using average gene-tree internode distances (NJst), and SVD-Quartets+PAUP*, as well as concatenation using maximum likelihood (RAxML). We find that the faster Neighbor Joining, applied to concatenated sequences, is among the most effective of these methods for accurate species tree reconstruction. ",Species tree estimation using Neighbor Joining " The realization of quantum error correction is an essential ingredient for reaching the full potential of fault-tolerant universal quantum computation. Using a range of different schemes, logical qubits can be redundantly encoded in a set of physical qubits. One such scalable approach is based on the surface code. Here we experimentally implement its smallest viable instance, capable of repeatedly detecting any single error using seven superconducting qubits: four data qubits and three ancilla qubits. Using high-fidelity ancilla-based stabilizer measurements, we initialize the cardinal states of the encoded logical qubit with an average logical fidelity of 96.1%. 
We then repeatedly check for errors using the stabilizer readout and observe that the logical quantum state is preserved with a lifetime and coherence time longer than those of any of the constituent qubits when no errors are detected. Our demonstration of error detection, with its resulting enhancement of the conditioned logical qubit coherence times in a 7-qubit surface code, is an important step indicating a promising route towards the realization of quantum error correction in the surface code. ",Repeated Quantum Error Detection in a Surface Code We introduce braid monodromy for the discriminant hypersurface in versal unfoldings of hypersurface singularities. Our objective is then to compute this invariant for singularities of Brieskorn-Pham type: first we consider the unfolding by linear monomials in some detail and relate it to the versal unfolding. Next we present the A_n case in our framework. The two main chapters then provide the result in the case of two variables and the inductive step to all higher dimensions. We finish with a presentation of the fundamental group of the discriminant complement. ,Braid Monodromy of Hypersurface Singularities " We discuss how hadronic total cross sections at high energy depend on the details of QCD, namely on the number of colours $N_c$ and the quark masses. We find that while a ""Froissart""-type behaviour $\sigma_{\rm tot}\sim B\log^2s$ is rather general, relying only on the presence of higher-spin stable particles in the spectrum, the value of $B$ depends quite strongly on the quark masses. Moreover, we argue that $B$ is of order ${\cal O}(N_c^0)$ at large $N_c$, and we discuss a bound for $B$ which does not become singular in the $N_f=2$ chiral limit, unlike the Froissart-\L ukaszuk-Martin bound. 
",Comments on high-energy total cross sections in QCD " We study sequences of conformal deformations of a smooth closed Riemannian manifold of dimension $n$, assuming uniform volume bounds and $L^{n/2}$ bounds on their scalar curvatures. Singularities may appear in the limit. Nevertheless, we show that under such bounds the underlying metric spaces are pre-compact in the Gromov-Hausdorff topology. Our study is based on the use of $A_\infty$-weights from harmonic analysis, and provides geometric controls on the limit spaces thus obtained. Our techniques also show that any conformal deformation of the Euclidean metric on $R^n$ with infinite volume and finite $L^{n/2}$ norm of the scalar curvature satisfies the Euclidean isoperimetric inequality. ",$A_\infty$ weights and compactness of conformal metrics under $L^{n/2}$ curvature bounds " We propose that statistical averages in relativistic turbulence exhibit universal properties. We consider analytically the structure functions of velocity and temperature differences in (1+1)-dimensional relativistic turbulence, in which shock waves provide the main contribution to the structure functions in the inertial range. We study shock scattering, demonstrate the stability of the shock waves, and calculate the anomalous exponents. We comment on the possibility of finite-time blowup singularities. ",Shocks and Universal Statistics in (1+1)-Dimensional Relativistic Turbulence " Many real-world problems can be defined as optimisation problems in which the aim is to maximise an objective function. The quality of the obtained solution is directly linked to the pertinence of the objective function used. However, designing such a function, which has to translate the user's needs, is usually tedious. In this paper, a method to help users design objective functions is proposed. Our approach, which is highly interactive, is based on man-machine dialogue and more particularly on the comparison of problem-instance solutions by the user. 
We report an experiment in the domain of cartographic generalisation that shows promising results. ",Objective Function Designing Led by User Preferences Acquisition " Most NLP datasets are manually labeled, and so suffer from inconsistent labeling or limited size. We propose methods for automatically improving datasets by viewing them as graphs with expected semantic properties. We construct a paraphrase graph from the provided sentence-pair labels, and create an augmented dataset by directly inferring labels from the original sentence pairs using a transitivity property. We use structural balance theory to identify likely mislabelings in the graph, and flip their labels. We evaluate our methods on paraphrase models trained using these datasets starting from a pretrained BERT model, and find that the automatically-enhanced training sets result in more accurate models. ",Finding Friends and Flipping Frenemies: Automatic Paraphrase Dataset Augmentation Using Graph Theory " We have solved the Einstein-Maxwell equations for a class of isotropic metrics with constant spatial curvature in the presence of magnetic fields. We consider a slight modification of the Tolman averaging relations so that the energy-momentum tensor of the electromagnetic field possesses an anisotropic pressure component. This inhomogeneous magnetic universe is isotropic and its time evolution is guided by the usual Friedmann equations. In the case of a flat universe, the space-time metric is free of singularities (except the well-known initial singularity at t = 0). It is shown that the anisotropic pressure of our model has a straightforward relation to the Weyl tensor. We also analyze the effect of this new ingredient on the motion of test particles and on the geodesic deviation of the cosmic fluid. 
",Magnetic fields and the Weyl tensor in the early universe " Surface effects of a doped thin film made of a strongly correlated material are investigated both in the absence and in the presence of a perpendicular electric field. We use an inhomogeneous Gutzwiller approximation for a single-band Hubbard model in order to describe correlation effects. For low doping, the bulk value of the quasiparticle weight is recovered exponentially deep into the slab, but with increasing doping, additional Friedel oscillations appear near the surface. We show that the inverse correlation length has a power-law dependence on the doping level. In the presence of an electric field, considerable changes in the quasiparticle weight can be realized throughout the system. We observe a large difference (as large as five orders of magnitude) in the quasiparticle weight near the opposite sides of the slab. This effect can be significant in switching devices that use the surface states for transport. ",Field effect on surface states in a doped Mott-Insulator thin film " We re-analyze the M31 microlensing event WeCAPP-GL1/Point-AGAPE-S3 taking into account that stars are not point-like but extended. We show that the finite size of stars can dramatically change the self-lensing event rate and (less dramatically) also the halo-lensing event rate, if events are as bright as WeCAPP-GL1. The brightness of the brightest events mostly depends on the source sizes and fluxes and on the distance distribution of sources and lenses, and therefore can be used as a sensitive discriminator between halo-lensing and self-lensing events, provided the stellar population mix of source stars is known well enough. Using a realistic model for the 3D light distribution, stellar population and extinction of M31, we show that an event like WeCAPP-GL1 is very unlikely to be caused by self-lensing. 
In the entire WeCAPP field ($17.2'\times 17.2'$ centered on the bulge) we expect only one self-lensing event every 49 years with the approximate parameters of WeCAPP-GL1 (time-scale 1-3d, $R$ flux-excess <19.0 mag). If we assume that only 20% of the dark halos of M31 and the Milky Way consist of 1-solar-mass MACHOs, an event like WeCAPP-GL1 would occur every 10 years. Furthermore, if one uses the position, FWHM time scale, flux excess and color of WeCAPP-GL1, self-lensing is even 13 times less likely than lensing by a MACHO, if MACHOs contribute 20% to the total halo mass and have masses in the range of 0.1 to 4 solar masses. We also demonstrate that (i) the brightness distribution of events in general is a good discriminator between self and halo lensing, and (ii) the time-scale distribution is a good discriminator if the MACHO mass is larger than 0.5 solar masses. Future surveys of M31 like PAndromeda (Pan-STARRS 1) should be able to provide many more such events within the next 4 years. ",The M31 microlensing event WeCAPP-GL1/Point-AGAPE-S3: evidence for a MACHO component in the dark halo of M31? " We calculate the back-reaction of long-wavelength cosmological perturbations on a general relativistic measure of the local expansion rate of the Universe. Specifically, we consider a cosmological model in which matter is described by two scalar matter fields, one being the inflaton and the other representing a matter field which is used as a clock. We analyze back-reaction in a phase of inflaton-driven slow-roll inflation, and find that the leading infrared back-reaction terms contributing to the evolution of the expansion rate do not vanish when measured at a fixed value of the clock field. We also analyze the back-reaction of entropy modes in a specific cosmological model with a negative mass squared for the entropy field and find that back-reaction can become significant. 
Our work provides evidence that, in general, the back-reaction of infrared fluctuations could be locally observable. ",Back Reaction Of Perturbations In Two Scalar Field Inflationary Models " It is well known that every multicritical circle map without periodic orbits admits a unique invariant Borel probability measure, which is purely singular with respect to Lebesgue measure. Can such a map leave invariant an infinite, $\sigma$-finite measure which is absolutely continuous with respect to Lebesgue measure? In this paper, using an old criterion due to Katznelson, we show that the answer to this question is no. ",There are no $\sigma$-finite absolutely continuous invariant measures for multicritical circle maps " Stereo data collected by the HiRes experiment over a six-year period are examined for large-scale anisotropy related to the inhomogeneous distribution of matter in the nearby Universe. We consider the generic case of small cosmic-ray deflections and a large number of sources tracing the matter distribution. In this matter tracer model the expected cosmic-ray flux depends essentially on a single free parameter, the typical deflection angle theta. We find that the HiRes data with threshold energies of 40 EeV and 57 EeV are incompatible with the matter tracer model at a 95% confidence level unless theta is larger than 10 degrees, and are compatible with an isotropic flux. The data set above 10 EeV is compatible with both the matter tracer model and an isotropic flux. ",Analysis of large-scale anisotropy of ultra-high energy cosmic rays in HiRes data " An ultrametric topology formalizes the notion of hierarchical structure. An ultrametric embedding, referred to here as ultrametricity, is implied by a hierarchical embedding. Such hierarchical structure can be global in the data set, or local. By quantifying the extent or degree of ultrametricity in a data set, we show that ultrametricity becomes pervasive as dimensionality and/or spatial sparsity increases. 
This leads us to assert that very high dimensional data are of simple structure. We exemplify this finding through a range of simulated data cases. We also discuss applications to very high frequency time series segmentation and modeling. ",The Remarkable Simplicity of Very High Dimensional Data: Application of Model-Based Clustering " A theory governing the metric and matter fields in spacetime is {\it locally causal} if the probability distribution for the fields in any region is determined solely by physical data in the region's past, i.e. it is independent of events at space-like separated points. General relativity is manifestly locally causal, since the fields in a region are completely determined by physical data in its past. It is natural to ask whether other possible theories in which the fundamental description of space-time is classical and geometric -- for instance, hypothetical theories which stochastically couple a classical spacetime geometry to a quantum field theory of matter -- might also be locally causal. A quantum theory of gravity, on the other hand, should allow the creation of spacetimes which violate local causality at the macroscopic level. This paper describes an experiment to test the local causality of spacetime, and hence to test whether or not gravity behaves as quantum theories of gravity suggest in this respect. The experiment will either produce direct evidence that the gravitational field is not locally causal, and thus weak confirmation of quantum gravity, or else identify a definite limit to the domain of validity of quantum theory. ",A Proposed Test of the Local Causality of Spacetime " Analytic and semi-analytic solutions are often used by researchers and practitioners to estimate aquifer parameters from unconfined aquifer pumping tests. The non-linearities associated with unconfined (i.e., water table) aquifer tests make their analysis more complex than that of confined tests. 
Although analytical solutions for unconfined flow began in the mid-1800s with Dupuit, Thiem was possibly the first to use them to estimate aquifer parameters from pumping tests in the early 1900s. In the 1950s, Boulton developed the first transient well test solution specialized to unconfined flow. By the 1970s Neuman had developed solutions considering both primary transient storage mechanisms (confined storage and delayed yield) without non-physical fitting parameters. In the last decade, research into developing unconfined aquifer test solutions has mostly focused on explicitly coupling the aquifer with the linearized vadose zone. Despite the many advanced solution methods available, there still exists a need for realism to accurately simulate real-world aquifer tests. ",Unconfined Aquifer Flow Theory - from Dupuit to present " Millions of particles are collided every second at the LHCb detector, placed inside the Large Hadron Collider at CERN. The particles produced as a result of these collisions pass through various detecting devices, which will produce a combined raw data rate of up to 40 Tbps by 2021. These data will be fed through a data acquisition system which reconstructs individual particles and filters the collision events in real time. This process will occur in a heterogeneous farm employing exclusively off-the-shelf CPU and GPU hardware, in a two-stage process known as the High Level Trigger. The reconstruction of charged-particle trajectories in physics detectors, also referred to as track reconstruction or tracking, determines the position, charge and momentum of particles as they pass through detectors. The Vertex Locator subdetector (VELO) is the closest such detector to the beamline, placed outside of the region where the LHCb magnet produces a sizable magnetic field. It is used to reconstruct straight particle trajectories, which serve as seeds for the reconstruction in other subdetectors and to locate collision vertices. 
The VELO subdetector will detect up to 1000 million particles every second, which need to be reconstructed in real time in the High Level Trigger. We present Search by triplet, an efficient track reconstruction algorithm. Our algorithm is designed to run efficiently across parallel architectures. We build on previous work and explain the algorithm's evolution since its inception. We show the scaling of our algorithm in various situations, analyze the amortized complexity of each of its constituent parts, and profile its performance. Our algorithm is the current state-of-the-art in VELO track reconstruction on SIMT architectures, and we qualify its improvements over previous results. ",Search by triplet: An efficient local track reconstruction algorithm for parallel architectures " The finite-temperature Drude weight (spin stiffness) D(T) is evaluated within the anisotropic spin-1/2 Heisenberg model on a chain using exact diagonalization for small systems. It is shown that odd-sized chains allow for more reliable scaling and results, in particular if one takes into account corrections due to low-frequency finite-size anomalies. At high T and zero magnetization, D is shown to scale to zero approaching the isotropic point {\Delta}=1. On the other hand, for {\Delta}>2 at all magnetizations, D is nearly exhausted by the overlap with the conserved energy current. Results for the T variation of D(T) are also presented. ",Finite-temperature Drude weight within the anisotropic Heisenberg chain " This thesis is concerned with heterotic E8 x E8 string models that can produce quasirealistic N = 1 supersymmetric extensions of the Standard Model in the low-energy limit. We start rather generally by deriving the four-dimensional spectrum and Lagrangian terms from the ten-dimensional theory, through a process of compactification over six-dimensional Calabi-Yau manifolds, upon which holomorphic poly-stable vector bundles are defined. 
We then specialise to a class of heterotic string models for which the vector bundle is split into a sum of line bundles and the Calabi-Yau manifold is defined as a complete intersection in projective ambient spaces. We develop a method for calculating holomorphic Yukawa couplings for such models, by relating bundle-valued forms on the Calabi-Yau manifold to their ambient space counterparts, so that the relevant integrals can be evaluated over projective spaces. The method is applicable to any of the 7890 CICY manifolds known in the literature, and we show that it can be related to earlier algebraic techniques to compute holomorphic Yukawa couplings. We provide explicit calculations of the holomorphic Yukawa couplings for models compactified on the tetra-quadric and on a co-dimension two CICY. A vanishing theorem is formulated, showing that in some cases, topology rather than symmetry is responsible for the absence of certain trilinear couplings. In addition, some Yukawa matrices are found to be dependent on the complex structure moduli and their rank is reduced in certain regions of the moduli space. In the final part, we focus on a method to evaluate the matter field Kahler potential without knowing the Ricci-flat Calabi-Yau metric. This is possible for large internal gauge fluxes, for which the normalisation integral localises around a point on the compactification manifold. ",Holomorphic Yukawa Couplings in Heterotic String Theory " This work is devoted to the analysis and resolution of a well-posed mathematical model for several processes involved in the artificial circulation of water in a large waterbody. This novel formulation couples the convective heat transfer equation with the modified Navier-Stokes system following a Smagorinsky turbulence model, completed with a suitable set of mixed, nonhomogeneous boundary conditions of diffusive, convective and radiative type. 
We prove several theoretical results related to the existence of a solution, and propose a full algorithm for its computation, illustrated with some realistic numerical examples. ",Mathematical analysis and numerical resolution of a heat transfer problem arising in water recirculation " If gravity is asymptotically safe, operators will exhibit anomalous scaling at the ultraviolet fixed point in a way that makes the theory effectively two-dimensional. A number of independent lines of evidence, based on different approaches to quantization, indicate a similar short-distance dimensional reduction. I will review the evidence for this behavior, emphasizing the physical question of what one means by `dimension' in a quantum spacetime, and will discuss possible mechanisms that could explain the universality of this phenomenon. ",Dimension and Dimensional Reduction in Quantum Gravity The influence of the entrance channel asymmetry upon the fragmentation process is addressed by studying heavy-ion induced reactions around the Fermi energy. The data have been recorded with the INDRA 4pi array. An event selection method called the Principal Component Analysis is presented and discussed. It is applied for the selection of central events and furthermore to multifragmentation of single source events. The selected subsets of data are compared to the Statistical Multifragmentation Model (SMM) to check the equilibrium hypothesis and get the source characteristics. Experimental comparisons show evidence of a decoupling between thermal and compressional (radial flow) degrees of freedom in such nuclear systems. ,Multifragmentation process for different mass asymmetry in the entrance channel around the Fermi energy " We prove, using the Brascamp-Lieb inequality, that the Gaussian measure is the only strong log-concave measure having a strong log-concavity parameter equal to its covariance matrix. 
We also give a similar characterization of the Poisson measure in the discrete case, using ""Chebyshev's other inequality"". We briefly discuss how these results relate to Stein and Stein-Chen methods for Gaussian and Poisson approximation, and to the Bakry-Emery calculus. ","An extremal property of the normal distribution, with a discrete analog" " Propagation of turbulent premixed flames influenced by the intrinsic hydrodynamic flame instability (the Darrieus-Landau instability) is considered in a two-dimensional case using the model nonlinear equation proposed recently. The nonlinear equation takes into account both influence of external turbulence and intrinsic properties of a flame front, such as small but finite flame thickness and realistically large density variations across the flame front. Dependence of the flame velocity on the turbulent length scale, on the turbulent intensity and on the density variations is investigated in the case of weak non-linearity and weak external turbulence. It is shown that the Darrieus-Landau instability influences the flamelet velocity considerably. The obtained results are in agreement with experimental data on turbulent burning of moderate values of the Reynolds number. ",Effect of the Darrieus-Landau instability on turbulent flame velocity " The Berry curvature involving time and momentum derivatives, which we term emergent electric field, induces a nondisspative current known as the adiabatic charge pumping or Thouless pumping in periodically driven systems. We study dissipative currents originated from the interplay between emergent electric fields and electric/magnetic fields in two and three dimensions on the basis of the Boltzmann transport theory. As an example of two-dimensional models, we study the Rashba Hamiltonian with time-dependent and anisotropic spin-orbit coupling. 
We show that the interplay between emergent electric fields and electric fields leads to a current transverse to electric fields, which is symmetric and contributes to the entropy production. As an example of three-dimensional models, we study the Weyl Hamiltonian under AC electric fields. We show that the interplay between emergent electric fields and magnetic fields leads to a Hall-type current at zero DC electric fields, which is now transverse to DC magnetic fields: $j_{x}=\sigma_{xy}B_y$ ($\sigma_{xy}=-\sigma_{yx}$). The Hall photocurrent is relevant in inversion-symmetry-breaking Weyl semimetals such as TaAs or SrSi$_2$. ",Anomalous transport phenomena from dissipative charge pumping " The spectral properties of the adjacency matrix, in particular its largest eigenvalue and the associated principal eigenvector, dominate many structural and dynamical properties of complex networks. Here we focus on the localization properties of the principal eigenvector in real networks. We show that in most cases it is either localized on the star defined by the node with largest degree (hub) and its nearest neighbors, or on the densely connected subgraph defined by the maximum $K$-core in a $K$-core decomposition. The localization of the principal eigenvector is often strongly correlated with the value of the largest eigenvalue, which is given by the local eigenvalue of the corresponding localization subgraph, but different scenarios sometimes occur. We additionally show that simple targeted immunization strategies for epidemic spreading are extremely sensitive to the actual localization set. ",Eigenvector localization in real networks and its implications for epidemic spreading " This note shows that, for a fixed Lipschitz constant $L > 0$, one-layer neural networks that are $L$-Lipschitz are dense in the set of all $L$-Lipschitz functions with respect to the uniform norm on bounded sets. 
",Lipschitz neural networks are dense in the set of all Lipschitz functions We show how the scattering-into-cones and flux-across-surfaces theorems in Quantum Mechanics have very intuitive pathwise probabilistic versions based on some results by Carlen about large time behaviour of paths of Nelson diffusions. The quantum mechanical results can be then recovered by taking expectations in our pathwise statements. ,Scattering into Cones and Flux across Surfaces in Quantum Mechanics: a Pathwise Probabilistic Approach " We consider error correction in quantum key distribution. To avoid that Alice and Bob unwittingly end up with different keys precautions must be taken. Before running the error correction protocol, Bob and Alice normally sacrifice some bits to estimate the error rate. To reduce the probability that they end up with different keys to an acceptable level, we show that a large number of bits must be sacrificed. Instead, if Alice and Bob can make a good guess about the error rate before the error correction, they can verify that their keys are similar after the error correction protocol. This verification can be done by utilizing properties of Low Density Parity Check codes used in the error correction. We compare the methods and show that by verification it is often possible to sacrifice less bits without compromising security. The improvement is heavily dependent on the error rate and the block length, but for a key produced by the IdQuantique system Clavis^2, the increase in the key rate is approximately 5 percent. We also show that for systems with large fluctuations in the error rate a combination of the two methods is optimal. ","Error Estimation, Error Correction and Verification In Quantum Key Distribution" " In this paper, we show that if $p$ is a prime and $G$ is a $p$-solvable group, then $| G:O_p (G) |_p \le (b(G)^p/p)^{1/(p-1)}$ where $b(G)$ is the largest character degree of $G$. 
If $p$ is an odd prime that is not a Mersenne prime or if the nilpotence class of a Sylow $p$-subgroup of $G$ is at most $p$, then $| G:O_p (G) |_p \le b(G)$. ",Bounding an index by the largest character degree of a solvable group " Purpose: Echocardiography is commonly used as a non-invasive imaging tool in clinical practice for the assessment of cardiac function. However, delineation of the left ventricle is challenging due to the inherent properties of ultrasound imaging, such as the presence of speckle noise and the low signal-to-noise ratio. Methods: We propose a semi-automated segmentation algorithm for the delineation of the left ventricle in temporal 3D echocardiography sequences. The method requires minimal user interaction and relies on a diffeomorphic registration approach. Advantages of the method include no dependence on prior geometrical information, training data, or registration from an atlas. Results: The method was evaluated using three-dimensional ultrasound scan sequences from 18 patients from the Mazankowski Alberta Heart Institute, Edmonton, Canada, and compared to manual delineations provided by an expert cardiologist and four other registration algorithms. The segmentation approach yielded the following results over the cardiac cycle: a mean absolute difference of 1.01 (0.21) mm, a Hausdorff distance of 4.41 (1.43) mm, and a Dice overlap score of 0.93 (0.02). Conclusions: The method performed well compared to the four other registration algorithms. ",A New Semi-Automated Algorithm for Volumetric Segmentation of the Left Ventricle in Temporal 3D Echocardiography Sequences " Hybrid beamforming (HBF) is a key enabler for wideband terahertz (THz) massive multiple-input multiple-output (mMIMO) communications systems. A core challenge with designing HBF systems stems from the fact that their application often involves a non-convex, highly complex optimization of large dimensions. 
In this paper, we propose HBF schemes that leverage data to enable efficient designs for both the fully-connected HBF (FC-HBF) and dynamic sub-connected HBF (SC-HBF) architectures. We develop a deep unfolding framework based on factorizing the optimal fully digital beamformer into analog and digital terms and formulating two corresponding equivalent least squares (LS) problems. Then, the digital beamformer is obtained via a closed-form LS solution, while the analog beamformer is obtained via ManNet, a lightweight sparsely-connected deep neural network based on unfolding projected gradient descent. Incorporating ManNet into the developed deep unfolding framework leads to the ManNet-based FC-HBF scheme. We show that the proposed ManNet can also be applied to SC-HBF designs after determining the connections between the radio frequency chain and antennas. We further develop a simplified version of ManNet, referred to as subManNet, that directly produces the sparse analog precoder for SC-HBF architectures. Both networks are trained with an unsupervised training procedure. Numerical results verify that the proposed ManNet/subManNet-based HBF approaches outperform the conventional model-based and deep unfolded counterparts with very low complexity and a fast run time. For example, in a simulation with 128 transmit antennas, it attains a slightly higher spectral efficiency than the Riemannian manifold scheme, but runs over 1000 times faster with a complexity reduction of more than a factor of six. ",Deep Unfolding Hybrid Beamforming Designs for THz Massive MIMO Systems " We present a model that investigates preference evolution with endogenous matching. In the short run, individuals' subjective preferences simultaneously determine who they are matched with and how they behave in the social interactions with their matched partners, which results in material payoffs for them. Material payoffs in turn affect how preferences evolve in the long run. 
To properly model the ""match-to-interact"" process, we combine stable matching and equilibrium concepts. Our findings emphasize the importance of parochialism, a preference for matching with one's own kind, in shaping our results. Under complete information, the parochial efficient preference type -- characterized by a weak form of parochialism and a preference for efficiency -- stands out in the evolutionary process, because it is able to force positive assortative matching and efficient play among individuals carrying this preference type. Under incomplete information, the exclusionary efficient preference type -- characterized by a stronger form of parochialism and a preference for efficiency -- prevails, as it provides individuals with an incentive to engage in self-sorting through rematching in any matching outcomes that involve incomplete information and inefficient play. ",Preference Evolution under Stable Matching " After a short introduction to the generalized uncertainty principle (GUP), we discuss heuristic derivations of the Casimir effect, first from the usual Heisenberg uncertainty principle (HUP), and then from GUP. Results are compared with those obtained from more standard calculations in Quantum Field Theory (QFT). ",Heuristic derivation of the Casimir effect from Generalized Uncertainty Principle " We develop a high frequency, wide bandwidth radiometer operating at room temperature, which augments the traditional technique of Johnson noise thermometry for nanoscale thermal transport studies. Employing low noise amplifiers and an analog multiplier operating at 2~GHz, auto- and cross-correlated Johnson noise measurements are performed in the temperature range of 3 to 300~K, achieving a sensitivity of 5.5~mK (110 ppm) in 1 second of integration time. This setup allows us to measure the thermal conductance of a boron nitride encapsulated monolayer graphene device over a wide temperature range. 
Our data shows a high power-law ($T^{\sim 4}$) deviation from the Wiedemann-Franz law above $T \sim 100$~K. ",Development of high frequency and wide bandwidth Johnson noise thermometry " We obtain explicit time dependent brane solutions in M-theory as well as in string theory by solving the reduced equations of motion (which follow from 11-d supergravity) for a class of brane solutions in curved backgrounds. The behaviour of our solutions in both asymptotic and near-horizon limits is studied. It is shown that our time dependent solutions serve as explicit examples of branes in singular, cosmological backgrounds. In some special cases the asymptotic and the boundary AdS solutions can be identified as Milne X R^n spacetime. ",Brane Solutions in Time Dependent Backgrounds in D = 11 Supergravity and in Type II String Theories " We derive the Lax operator for a very large family of classical minimal surface solutions in $AdS_3$ describing Wilson loops in $\mathcal{N}=4$ SYM theory. These solutions, constructed by Ishizeki, Kruczenski and Ziama, are associated with a hyperelliptic surface of odd genus. We verify that the algebraic curve derived from the Lax operator is indeed none other than this hyperelliptic surface. ",From algebraic curve to minimal surface and back " We performed a detailed analysis of extensive photometric observations of a sample of the most active dwarf novae, that is, SU UMa stars, which are characterised by supercycle lengths shorter than 120 days. We found observational evidence that supercycle lengths for these objects have been constantly increasing over the past decades, which indicates that their mean mass transfer rates have been decreasing during that time. This seems to be a common feature for this type of star. We present numerical results in each case and estimate time scales of future development of these systems. This study is important in the context of the evolution of dwarf nova stars and perhaps other cataclysmic variables. 
",On supercycle lengths of active SU UMa stars " Zero-shot learning (ZSL) is a challenging problem that aims to recognize the target categories without seen data, where semantic information is leveraged to transfer knowledge from some source classes. Although ZSL has made great progress in recent years, most existing approaches are easy to overfit the sources classes in generalized zero-shot learning (GZSL) task, which indicates that they learn little knowledge about target classes. To tackle such problem, we propose a novel Transferable Contrastive Network (TCN) that explicitly transfers knowledge from the source classes to the target classes. It automatically contrasts one image with different classes to judge whether they are consistent or not. By exploiting the class similarities to make knowledge transfer from source images to similar target classes, our approach is more robust to recognize the target images. Experiments on five benchmark datasets show the superiority of our approach for GZSL. ",Transferable Contrastive Network for Generalized Zero-Shot Learning " It is suggested that the resonance $\psi(3770)$ may contain a sizeable ($O(10%)$ in terms of the probability weight factor) four-quark component with the up- and down- quarks and antiquarks in addition to the $c {\bar c}$ pair, which component in itself has a substantial part with the isospin I=1. Furthermore such four-quark part of the wave function should also affect the properties of the $\psi'$ charmonium resonance through the $\psi(3770) - \psi'$ mixing previously considered in the literature. 
It is argued that an admixture of extra light quark pairs can explain a possible discrepancy between the theoretical expectations and the recent data on the non-$D {\bar D}$ decay width of the $\psi(3770)$ and the ratio of the yield of charged and neutral $D$ meson pairs in its decays, as well as on the extra rate of the $\psi'$ direct decay into light hadrons and the rate of the decay $\psi' \to \pi^0 J/\psi$. It is further argued that the suggested four-quark component of the wave function of the $\psi(3770)$ should give rise to a measurable rate of the decays $\psi(3770) \to \eta J/\psi$ and $\psi(3770) \to \pi^0 J/\psi$. ",The ${\bar c} c$ purity of $\psi(3770)$ and $\psi'$ challenged We establish global extendibility (to the domain of outer communications) of locally defined isometries of appropriately regular analytic black holes. This allows us to fill a gap in the Hawking-Ellis proof of black-hole rigidity. ,On rigidity of analytic black holes We extend the result of K. Karlander [Math. Scand. 80 (1997)] regarding finite dimensionality of spaces of absolutely convergent Fourier transforms. ,On the Finite Dimensionality of Spaces of Absolutely Convergent Fourier Transforms " We propose a verified computation method for partial eigenvalues of a Hermitian generalized eigenproblem. The block Sakurai-Sugiura Hankel method, a contour integral-type eigensolver, can reduce a given eigenproblem into a generalized eigenproblem of block Hankel matrices whose entries consist of complex moments. In this study, we evaluate all errors in computing the complex moments. We derive a truncation error bound of the quadrature. Then, we take numerical errors of the quadrature into account and rigorously enclose the entries of the block Hankel matrices. Each quadrature point gives rise to a linear system, and its structure enables us to develop an efficient technique to verify the approximate solution. 
Numerical experiments show that the proposed method outperforms a standard method, and we infer that the proposed method is potentially efficient in parallel. ",Verified partial eigenvalue computations using contour integrals for Hermitian generalized eigenproblems " The purpose of this work is to obtain the degree of the exceptional component of the space of holomorphic foliations of degree two and codimension one in P^3. We construct a parameter space as an explicit fiber bundle over the variety of complete flags. Using tools from equivariant intersection theory, especially Bott's formula, the degree is expressed as an integral over our parameter space. ",Degree of the exceptional component of foliations in P3 " The quantum Hall physics of bilayer graphene is extremely rich due to the interplay between a layer degree of freedom and delicate fractional states. Recent experiments show that when an electric field perpendicular to the bilayer causes Landau levels of opposing layers to cross in energy, an even-denominator Hall plateau can coexist with a finite density of inter-layer excitons. We present theoretical and numerical evidence that this observation is due to a new phase of matter -- a Fermi sea of topological excitons. ","Evidence for a topological ""exciton Fermi sea"" in bilayer graphene" In this short note we study the existence and number of solutions in the set of integers ($Z$) and in the set of natural numbers ($N$) of Diophantine Equations of second degree with two variables of the general form $ax^2-by^2=c$. ,Existence and Number of Solutions of Diophantine Quadratic Equations with Two Unknowns in $Z$ and $N$ " Polynomial quotient rings are fundamental objects in commutative algebra. They play a big role in many areas of study, ranging from the applied sciences (cryptography, coding theory) to higher-level mathematics (algebraic geometry, field theory). 
Given a polynomial ring $R$ and some ideal $I \subseteq R$, a very natural question is to ask how large the vector space dimension, or length, of the quotient ring $R/I$ is. Specifically, are there closed-form expressions for the length of $R/I$? How does the length grow with respect to $I$? Studying the lengths of quotient rings provides insight into how polynomials with certain structures generally behave in terms of factorization, solutions to zeros, and more. Algebraists have answered this question when the ideal $I$ is of a particular form: when $I$ is a monomial ideal. However, the question becomes much more difficult and complex when $I$ becomes more general. In this project, we investigate a polynomial quotient ring in 3 variables generated by an ideal of more general form than the monomial ideal: $R/I = \mathbb{Z}_2[x, y, z]/(x^{d_1}, y^{d_2}, z^{d_3}, x + y + z)$, where $d_1, d_2, d_3$ are arbitrary nonnegative integer parameters. ",On Lengths of Quotient Rings In calculations of isoscalar magnetic moments of odd-odd N=Z nuclei it was found that for medium to heavy mass nuclei, large scale shell model calculations yielded results which were very close to much simpler single j shell ones. To understand this we compare isoscalar and isovector configuration mixing in first order perturbation theory using a spin-dependent delta interaction. The isoscalar corrections are much smaller. ",Ratio of Isoscalar to Isovector Core Polarization for Magnetic Moments We define and study logics in the framework of probabilistic team semantics and over metafinite structures. Our work is paralleled by the recent development of novel axiomatizable and tractable logics in team semantics that are closed under the Boolean negation. Our logics employ new probabilistic atoms that resemble so-called extended atoms from the team semantics literature. 
We also define counterparts of our logics over metafinite structures and show that all of our logics can be translated into functional fixed point logic implying a polynomial time upper bound for data complexity with respect to BSS-computations. ,On elementary logics for quantitative dependencies We study the distributional properties of horizontal visibility graphs associated with random restrictive growth sequences and random set partitions of size $n.$ Our main results are formulas expressing the expected degree of graph nodes in terms of simple explicit functions of a finite collection of Stirling and Bernoulli numbers. ,Horizontal visibility graph of a random restricted growth sequence " Deep learning models have had a great success in disease classifications using large data pools of skin cancer images or lung X-rays. However, data scarcity has been the roadblock of applying deep learning models directly on prostate multiparametric MRI (mpMRI). Although model interpretation has been heavily studied for natural images for the past few years, there has been a lack of interpretation of deep learning models trained on medical images. This work designs a customized workflow for the small and imbalanced data set of prostate mpMRI where features were extracted from a deep learning model and then analyzed by a traditional machine learning classifier. In addition, this work contributes to revealing how deep learning models interpret mpMRI for prostate cancer patients stratification. ",A Deep Dive into Understanding Tumor Foci Classification using Multiparametric MRI Based on Convolutional Neural Network " Deep brain stimulation (DBS) is an established method for treating pathological conditions such as Parkinson's disease, dystonia, Tourette syndrome, and essential tremor. 
While the precise mechanisms which underlie the effectiveness of DBS are not fully understood, theoretical studies of populations of neural oscillators stimulated by periodic pulses suggest that this may be related to clustering, in which subpopulations of the neurons are synchronized, but the subpopulations are desynchronized with respect to each other. The details of the clustering behavior depend on the frequency and amplitude of the stimulation in a complicated way. In the present study, we investigate how the number of clusters, their stability properties, and their basins of attraction can be understood in terms of one-dimensional maps defined on the circle. Moreover, we generalize this analysis to stimuli that consist of pulses with alternating properties, which provide additional degrees of freedom in the design of DBS stimuli. Our results illustrate how the complicated properties of clustering behavior for periodically forced neural oscillator populations can be understood in terms of a much simpler dynamical system. ",Analysis of Neural Clusters due to Deep Brain Stimulation Pulses " A recent thermal Hall conductance experiment [Banerjee et al., Nature {\bf559}, 205 (2018)] for the $\nu = 5/2$ fractional quantum Hall system appears to rule out both the Pfaffian and anti-Pfaffian and be in favor of the PH-Pfaffian topological order, while the existing numerical results without disorder have shown otherwise. In this paper we offer a possible resolution by proposing a new state, termed the compressed PH-Pfaffian state, by ""compressing"" the PH-Pfaffian state with two flux quanta removed to create two abelian Laughlin type quasiparticles of the maximum avoidance from one another (or of the maximum number of zeros). The compressed PH-Pfaffian state is not particle-hole symmetric but possesses the PH-Pfaffian topological order. 
In spherical geometry, the compressed PH-Pfaffian state has the same magnetic flux number $N_{\phi}= 2N-3$ as the Pfaffian state, allowing a direct numerical comparison between the two states. Results of exact diagonalization of finite disorder-free systems in the second Landau level show that, by increasing the short range component of the Coulomb interaction, the ground state undergoes a phase transition from the Pfaffian state to the compressed PH-Pfaffian state before further entering into a gapless state. The low energy gapped excited states result from the breakup of the abelian Laughlin type quasiparticle into two non-abelian quasiparticles. ",A Compressed Particle-Hole Symmetric Pfaffian State for $\nu = 5/2$ Quantum Hall Effect " A general method for constructing a new class of topological Ramsey spaces is presented. Members of such spaces are infinite sequences of products of Fra\""iss\'e classes of finite relational structures satisfying the Ramsey property. The Product Ramsey Theorem of Soki\v{c} is extended to equivalence relations for finite products of structures from Fra\""iss\'e classes of finite relational structures satisfying the Ramsey property and the Order-Prescribed Free Amalgamation Property. This is essential to proving Ramsey-classification theorems for equivalence relations on fronts, generalizing the Pudl\'ak-R\""odl Theorem to this class of topological Ramsey spaces. To each topological Ramsey space in this framework corresponds an associated ultrafilter satisfying some weak partition property. By using the correct Fra\""iss\'e classes, we construct topological Ramsey spaces which are dense in the partial orders of Baumgartner and Taylor in \cite{Baumgartner/Taylor78} generating p-points which are $k$-arrow but not $k+1$-arrow, and in a partial order of Blass in \cite{Blass73} producing a diamond shape in the Rudin-Keisler structure of p-points. 
Any space in our framework in which blocks are products of $n$ many structures produces ultrafilters with initial Tukey structure exactly the Boolean algebra $\mathcal{P}(n)$. If the number of Fra\""iss\'e classes on each block grows without bound, then the Tukey types of the p-points below the space's associated ultrafilter have the structure exactly $[\omega]^{<\omega}$. In contrast, the set of isomorphism types of any product of finitely many Fra\""iss\'e classes of finite relational structures satisfying the Ramsey property and the OPFAP, partially ordered by embedding, is realized as the initial Rudin-Keisler structure of some p-point generated by a space constructed from our template. ","Topological Ramsey spaces from Fra\""iss\'e classes, Ramsey-classification theorems, and initial structures in the Tukey types of p-points" " Trivial events are ubiquitous in human to human conversations, e.g., cough, laugh and sniff. Compared to regular speech, these trivial events are usually short and unclear, thus generally regarded as not speaker discriminative and so are largely ignored by present speaker recognition research. However, these trivial events are highly valuable in some particular circumstances such as forensic examination, as they are less subjected to intentional change, so can be used to discover the genuine speaker from disguised speech. In this paper, we collect a trivial event speech database that involves 75 speakers and 6 types of events, and report preliminary speaker recognition results on this database, by both human listeners and machines. Particularly, the deep feature learning technique recently proposed by our group is utilized to analyze and recognize the trivial events, which leads to acceptable equal error rates (EERs) despite the extremely short durations (0.2-0.5 seconds) of these events. Comparing different types of events, 'hmm' seems more speaker discriminative. 
",Human and Machine Speaker Recognition Based on Short Trivial Events " The IceCube Neutrino Observatory first observed a diffuse flux of high energy astrophysical neutrinos in 2013. Since then, this observation has been confirmed in multiple detection channels such as high energy starting events, cascades, and through-going muon tracks. Combining these event selections into a high statistics global fit of 10 years of IceCube's neutrino data could strongly improve the understanding of the diffuse astrophysical neutrino flux: challenging or confirming the simple unbroken power-law flux model as well as the astrophysical neutrino flux composition. One key component of such a combined analysis is the consistent modelling of systematic uncertainties of different event selections. This can be achieved using the novel SnowStorm Monte Carlo method which allows constraints to be placed on multiple systematic parameters from a single simulation set. We will report on the status of a new combined analysis of through-going muon tracks and cascades. It is based on a consistent all flavor neutrino signal and background simulation using, for the first time, the SnowStorm method to analyze IceCube's high-energy neutrino data. Estimated sensitivities for the energy spectrum of the diffuse astrophysical neutrino flux will be shown. ",A Combined Fit of the Diffuse Neutrino Spectrum using IceCube Muon Tracks and Cascades " We present calculations of Majorana edge modes in cylindrical nanowires of a semiconductor material with proximity-induced superconductivity. We consider a Rashba field along the transverse direction and an applied magnetic field in arbitrary orientation. Our analysis is based on exact numerical diagonalizations for the finite cylinder and on the complex band structure for the semi-infinite one. Orbital effects are responsible for a strong anisotropy of the critical field for which the effective gap vanishes. 
Robust Majorana modes are induced by the parallel field component and we find regimes with more than one Majorana mode on the same edge. Experimentally, they would manifest as a specific sequence of zero-bias conductances as a function of magnetic field. In the finite cylinder, a degradation of the Majorana modes due to interference of the two edges leads to oscillating non-zero energies for large enough fields. ",Emergence of Majorana modes in cylindrical nanowires " There has been a lot of recent interest in adopting machine learning methods for scientific and engineering applications. This has in large part been inspired by recent successes and advances in the domains of Natural Language Processing (NLP) and Image Classification (IC). However, scientific and engineering problems have their own unique characteristics and requirements, raising new challenges for effective design and deployment of machine learning approaches. There is a strong need for further mathematical developments on the foundations of machine learning methods to increase the level of rigor of employed methods and to ensure more reliable and interpretable results. Also, as reported in the recent literature on state-of-the-art results and as indicated by the No Free Lunch Theorems of statistical learning theory, incorporating some form of inductive bias and domain knowledge is essential to success. Consequently, even for existing and widely used methods there is a strong need for further mathematical work to facilitate ways to incorporate prior scientific knowledge and related inductive biases into learning frameworks and algorithms. We briefly discuss these topics and some ideas proceeding in this direction. ",Importance of the Mathematical Foundations of Machine Learning Methods for Scientific and Engineering Applications " We introduce two processes where the BMS equation appears in a context quite different from the original context of non-global jet observables. 
We note the strong similarities of the BMS equation to the BK and FKPP equations and argue that these essentially identical equations can be viewed either in terms of the probability, or amplitude, of something not happening or in terms of the nonlinear terms setting unitarity limits. Mostly analytic solutions are given for (i) the probability that no $c\bar{c}$ pairs be produced in a jet decay and (ii) the probability that no $c\bar{c}$ pairs be produced in high energy dipole-nucleus scattering. Both these processes obey BMS equations, albeit with very different kernels. ",The BMS Equation and c\bar{c} Production; A Comparison of the BMS and BK Equations " We show how to take a regression function $\hat{f}$ that is appropriately ``multicalibrated'' and efficiently post-process it into an approximately error minimizing classifier satisfying a large variety of fairness constraints. The post-processing requires no labeled data, and only a modest amount of unlabeled data and computation. The computational and sample complexity requirements of computing $\hat f$ are comparable to the requirements for solving a single fair learning task optimally, but it can in fact be used to solve many different downstream fairness-constrained learning problems efficiently. Our post-processing method easily handles intersecting groups, generalizing prior work on post-processing regression functions to satisfy fairness constraints that only applied to disjoint groups. Our work extends recent work showing that multicalibrated regression functions are ``omnipredictors'' (i.e. can be post-processed to optimally solve unconstrained ERM problems) to constrained optimization. ",Multicalibrated Regression for Downstream Fairness " Adversarial attacks on deep neural networks (DNNs) have been known for several years. 
However, the existing adversarial attacks have high success rates only when the information of the victim DNN is well known or can be estimated via structural similarity or massive queries. In this paper, we propose Attack on Attention (AoA), which exploits attention, a semantic property commonly shared by DNNs. AoA enjoys a significant increase in transferability when the traditional cross entropy loss is replaced with the attention loss. Since AoA alters the loss function only, it can be easily combined with other transferability-enhancement techniques and then achieve SOTA performance. We apply AoA to generate 50000 adversarial samples from the ImageNet validation set to defeat many neural networks, and thus name the dataset DAmageNet. 13 well-trained DNNs are tested on DAmageNet, and all of them have an error rate over 85%. Even with defenses or adversarial training, most models still maintain an error rate over 70% on DAmageNet. DAmageNet is the first universal adversarial dataset. It can be downloaded freely and serve as a benchmark for robustness testing and adversarial training. ",Universal Adversarial Attack on Attention and the Resulting Dataset DAmageNet " We propose a method for preparing mixed quantum states of arbitrary dimension $D$ ($D\geq2$) which are codified in the discretized transverse momentum and position of single photons, once they are sent through an aperture with $D$ slits. Following our previous technique we use a programmable single phase-only spatial light modulator (SLM) to define the aperture and set the complex transmission amplitude of each slit, allowing the independent control of the complex coefficients that define the quantum state. Since these SLMs allow us to dynamically vary the complex coefficients of the state during the measurement time, we can generate not only pure states but also quantum states compatible with a mixture of pure quantum states. 
Therefore, by using these time-varying apertures according to a probability distribution, we have experimentally obtained $D$-dimensional quantum states with purities that depend on the parameters of the distribution through a clear analytical expression. This fact allows us to easily customize the states to be generated. Moreover, the method offers the possibility of working without changing the optical setup between pure and mixed states, or when the dimensionality of the states is increased. The obtained results show quite good performance of our method at least up to dimension $D=11$, with the fidelity of the prepared states being $F > 0.98$ in every case. ",Controlled generation of mixed spatial qudits with arbitrary degree of purity " We present results from 1.4 and 5 GHz observations at matched resolution with the Karl G. Jansky Very Large Array (VLA) of 11 powerful 3C FR II quasars. We examine the 11 quasars along with a sample of 13 narrow-line FR II radio galaxies and find that radio-loud unification largely holds but environmental effects cannot be ignored. The radio core prominence, largest linear size, and axial ratio parameter values indicate that quasars are at relatively smaller angles compared to the radio galaxies and thus probe orientation. Lack of correlation between statistical orientation indicators such as misalignment angle and radio core prominence, and larger lobe distortions in quasars compared to radio galaxies, suggest that intrinsic/environment effects are also at play. Some of the 150 MHz observations with the TGSS-GMRT reveal peculiar lobe morphologies in these FR II sources, suggesting complex past lives and possibly restarted AGN activity. Using the total 150~MHz flux density we estimate the time-averaged jet kinetic power in these sources; it ranges from (1 - 38)x10^45 erg/s, with 3C 470 having the highest jet kinetic power. 
",A VLA-GMRT Look at 11 Powerful FR II Quasars " We perform adiabatic regularization of the power spectrum in nonminimally coupled general single-field inflation with varying speed of sound. The subtraction is performed within the framework of an earlier study by Urakawa and Starobinsky dealing with canonical inflation. Inspired by Fakir and Unruh's model on nonminimally coupled chaotic inflation, we find, upon imposing a near scale-invariant condition, that the subtraction term exponentially decays with the number of $ e $-folds. As in the result for canonical inflation, the regularized power spectrum tends to the ""bare"" power spectrum as the Universe expands during (and even after) inflation. This work justifies the use of the ""bare"" power spectrum in standard calculations in the most general context of slow-roll single-field inflation involving non-minimal coupling and varying speed of sound. ",Adiabatic regularization of power spectrum in nonminimally coupled general single-field inflation " Aims. We aim to systematically study the properties of the different transitions of the dense molecular gas tracer HC3N in galaxies. Methods. We have conducted single-dish observations of HC3N emission lines towards a sample of nearby gas-rich galaxies. HC3N(J=2-1) was observed in 20 galaxies with the Effelsberg 100-m telescope. HC3N(J=24-23) was observed in nine galaxies with the 10-m Submillimeter Telescope (SMT). Results. HC3N 2-1 is detected in three galaxies: IC 342, M 66 and NGC 660 (> 3 {\sigma}). HC3N 24-23 is detected in three galaxies: IC 342, NGC 1068 and IC 694. These are the first measurements of HC3N 2-1 in a relatively large sample of external galaxies, although the detection rate is low. For the HC3N 2-1 non-detections, upper limits (2 {\sigma}) are derived for each galaxy, and stacking the non-detections is attempted to recover the weak signal of HC3N. But the stacked spectrum does not show any significant signs of HC3N 2-1 emission. 
The results are also compared with other transitions of HC3N observed in galaxies. Conclusions. The low detection rate of both transitions suggests a low abundance of HC3N in galaxies, which is consistent with other observational studies. The comparison between HC3N and HCN or HCO+ shows a large diversity in the ratios between HC3N and HCN or HCO+. More observations are needed to interpret the behavior of HC3N in different types of galaxies. ",HC3N Observations of Nearby Galaxies " This paper introduces a very challenging dataset of historic German documents and evaluates Fully Convolutional Neural Network (FCNN) based methods to locate handwritten annotations of any kind in these documents. The handwritten annotations can appear in the form of underlines and text written with various writing instruments, e.g., the use of pencils makes the data more challenging. We train and evaluate various end-to-end semantic segmentation approaches and report the results. The task is to classify the pixels of documents into two classes: background and handwritten annotation. The best model achieves a mean Intersection over Union (IoU) score of 95.6% on the test documents of the presented dataset. We also present a comparison of different strategies used for data augmentation and training on our presented dataset. For evaluation, we use the Layout Analysis Evaluator for the ICDAR 2017 Competition on Layout Analysis for Challenging Medieval Manuscripts. ",Recognizing Challenging Handwritten Annotations with Fully Convolutional Networks " The paper deals with the cosmic no hair conjecture (CNHC) in scalar tensor theory of gravity. Here we have considered both the Jordan frame and the Einstein frame to examine the conjecture. In the Jordan frame, one should restrict both the coupling function of the scalar field and the coupling parameter, in addition to the usual energy conditions for the matter field, for the validity of the CNHC, while in the Einstein frame the restrictions are purely on the energy conditions. 
",Scalar tensor theory: validity of Cosmic no hair conjecture We formulate and prove an analogue of Beurling's theorem for the Fourier transform on the Heisenberg group. As a consequence we deduce Hardy and Cowling-Price theorems. ,Beurling's theorem on the Heisenberg group " We investigate the deformations and rigidity of boundary Heisenberg-like algebras. In particular, we focus on the Heisenberg and $\text{Heisenberg}\oplus\mathfrak{witt}$ algebras which arise as symmetry algebras in three-dimensional gravity theories. As a result of the deformation procedure we find a large class of algebras. While some of these algebras are new, some of them have already been obtained as asymptotic and boundary symmetry algebras, supporting the idea that symmetry algebras associated to diverse boundary conditions and spacetime loci are algebraically interconnected through deformation of algebras. The deformation/contraction relationships between the new algebras are investigated. In addition, it is also shown that the deformation procedure reaches new algebras inaccessible to the Sugawara construction. As a byproduct of our analysis, we obtain that $\text{Heisenberg}\oplus\mathfrak{witt}$ and the asymptotic symmetry algebra Weyl-$\mathfrak{bms}_3$ are not connected via a single deformation but in a more subtle way. ",Boundary Heisenberg Algebras and Their Deformations " Neural image compression (NIC) has outperformed traditional image codecs in rate-distortion (R-D) performance. However, it usually requires a dedicated encoder-decoder pair for each point on the R-D curve, which greatly hinders its practical deployment. While some recent works have enabled bitrate control via conditional coding, they impose a strong prior during training and provide limited flexibility. In this paper we propose Code Editing, a highly flexible coding method for NIC based on semi-amortized inference and adaptive quantization. Our work is a new paradigm for variable bitrate NIC. 
Furthermore, experimental results show that our method surpasses existing variable-rate methods, and achieves ROI coding and multi-distortion trade-off with a single decoder. ",Flexible Neural Image Compression via Code Editing " In this paper we show how the rescattering of CMB photons after cosmic reionization can give a significant linear contribution to the temperature-matter cross-correlation measurements. These anisotropies, which arise via a late time Doppler effect, are on scales much larger than the typical scale of non-linear effects at reionization; they can contribute to degree scale cross-correlations and could affect the interpretation of similar correlations resulting from the integrated Sachs-Wolfe effect. While expected to be small at low redshifts, these correlations can be large given a probe of the density at high redshift, and so could be a useful probe of the cosmic reionization history. ",The effect of reionization on the CMB-density correlation " Planetary embryos form protoplanets via mutual collisions, which can lead to the development of magma oceans. During their solidification, large amounts of the mantles' volatile contents may be outgassed. The resulting H$_2$O/CO$_2$ dominated steam atmospheres may be lost efficiently via hydrodynamic escape due to the low gravity and the high stellar EUV luminosities. Protoplanets forming later from such degassed building blocks could therefore be drier than previously expected. We model the outgassing and subsequent hydrodynamic escape of steam atmospheres from such embryos. The efficient outflow of H drags along heavier species (O, CO$_2$, noble gases). The full range of possible EUV evolution tracks of a solar-mass star is taken into account to investigate the escape from Mars-sized embryos at different orbital distances. The envelopes are typically lost within a few to a few tens of Myr. Furthermore, we study the influence on protoplanetary evolution, exemplified by Venus. 
We investigate different early evolution scenarios and constrain realistic cases by comparing modeled noble gas isotope ratios with observations. Starting from solar values, consistent isotope ratios (Ne, Ar) can be found for different solar EUV histories, as well as assumptions about the initial atmosphere (either pure steam or a mixture with accreted H). Our results generally favor an early accretion scenario with a small amount of accreted H and a low-activity Sun, because in other cases too much CO$_2$ is lost during evolution, which is inconsistent with Venus' present atmosphere. Important issues are likely the time at which the initial steam atmosphere is outgassed and/or the amount of CO$_2$ which may still be delivered at later evolutionary stages. A late accretion scenario can only reproduce present isotope ratios for a highly active young Sun, but then very massive steam atmospheres would be required. ",Escape and fractionation of volatiles and noble gases from Mars-sized planetary embryos and growing protoplanets " We prove the Jonson-Mahan theorem for the thermopower of the Falicov-Kimball model by solving explicitly for the correlation functions in the large dimensional limit. We prove a similar result for the thermal conductivity. We separate the results for thermal transport into the pieces of the heat current that arise from the kinetic energy and those that arise from the potential energy. Our method of proof is specific to the Falicov-Kimball model, but illustrates the near cancellations between the kinetic-energy and potential-energy pieces of the heat current implied by the Jonson-Mahan theorem. ",Thermal transport in the Falicov-Kimball model " We report on a program of geodetic measurements between 1994 and 2007 which used the Very Long Baseline Array and up to 10 globally distributed antennas. 
One of the goals of this program was to monitor positions of the array at a 1 millimeter level of accuracy and to tie the VLBA into the International Terrestrial Reference Frame. We describe the analysis of these data and report several interesting geophysical results including measured station displacements due to crustal motion, earthquakes, and antenna tilt. In terms of both formal errors and observed scatter, these sessions are among the very best geodetic VLBI experiments. ",Precise geodesy with the Very Long Baseline Array " Let $E$ be a Bedford-McMullen carpet associated with a set of affine mappings $\{f_{ij}\}_{(i,j)\in G}$ and let $\mu$ be the self-affine measure associated with $\{f_{ij}\}_{(i,j)\in G}$ and a probability vector $(p_{ij})_{(i,j)\in G}$. We study the asymptotics of the geometric mean error in the quantization for $\mu$. Let $s_0$ be the Hausdorff dimension for $\mu$. Assuming a separation condition for $\{f_{ij}\}_{(i,j)\in G}$, we prove that the $n$th geometric error for $\mu$ is of the same order as $n^{-1/s_0}$. ",Asymptotic order of the geometric mean error for self-affine measures on Bedford-McMullen carpets " We generalize Berg's notion of quasi-disjointness to actions of countable groups and prove that every measurably distal system is quasi-disjoint from every measure preserving system. As a corollary we obtain easy to check necessary and sufficient conditions for two systems to be disjoint, provided one of them is measurably distal. We also obtain a Wiener--Wintner type theorem for countable amenable groups with distal weights and applications to weighted multiple ergodic averages and multiple recurrence. ",Disjointness for measurably distal group actions and applications " We investigate the Neel temperature of Sr2CuO3 as a function of the site dilution at the Cu (S=1/2) sites with Pd (S=0), utilizing the muon spin relaxation (muSR) technique. 
The Neel temperature, which is Tn=5.4K for the undoped system, is significantly reduced by less than one percent of Pd doping, supporting the previous proposal of good one-dimensionality. The Pd concentration dependence of the Neel temperature is compared with a recent theoretical study (S. Eggert, I. Affleck and M.D.P. Horton, Phys. Rev. Lett. 89, 47202 (2002)) of weakly coupled one-dimensional antiferromagnetic chains of S=1/2 spins, and a quantitative agreement is found. The inhomogeneity of the ordered moment sizes is characterized by the muSR time spectra. We propose a model in which the ordered moment size recovers away from the dopant S=0 sites with a recovery length of \xi = 150-200 sites. The origin of the finite recovery length \xi for the gapless S=1/2 antiferromagnetic chain is compared to the estimate based on the effective staggered magnetic field from the neighboring chains. ",Site-Dilution in quasi one-dimensional antiferromagnet Sr2(Cu1-xPdx)O3: reduction of Neel Temperature and spatial distribution of ordered moment sizes " We study the connection between the highly non-convex loss function of a simple model of the fully-connected feed-forward neural network and the Hamiltonian of the spherical spin-glass model under the assumptions of: i) variable independence, ii) redundancy in network parametrization, and iii) uniformity. These assumptions enable us to explain the complexity of the fully decoupled neural network through the prism of the results from random matrix theory. We show that for large-size decoupled networks the lowest critical values of the random loss function form a layered structure and they are located in a well-defined band lower-bounded by the global minimum. The number of local minima outside that band diminishes exponentially with the size of the network. 
We empirically verify that the mathematical model exhibits behavior similar to that of the computer simulations, despite the presence of high dependencies in real networks. We conjecture that both simulated annealing and SGD converge to the band of low critical points, and that all critical points found there are local minima of high quality measured by the test error. This emphasizes a major difference between large- and small-size networks, where for the latter poor-quality local minima have a non-zero probability of being recovered. Finally, we prove that recovering the global minimum becomes harder as the network size increases and that it is in practice irrelevant, as the global minimum often leads to overfitting. ",The Loss Surfaces of Multilayer Networks " We investigate a simple arrangement of coupled harmonic oscillators which brings out some interesting effects concerning the creation of entanglement. It is well known that if each member in a linear chain of coupled harmonic oscillators is prepared in a ``classical state'', such as a pure coherent state or a mixed thermal state, no entanglement is created in the rotating wave approximation. On the other hand, if one of the oscillators is prepared in a nonclassical state (a pure squeezed state, for instance), entanglement may be created between members of the chain. In the setup considered here, we found that a great family of nonclassical (squeezed) states can localize entanglement in such a way that distant oscillators never become entangled. We present a detailed study of this particular localization phenomenon. Our results may find application in future solid state implementations of quantum computers, and we suggest an electromechanical system consisting of an array of coupled micromechanical oscillators as a possible implementation. 
",Creation and localization of entanglement in a simple configuration of coupled harmonic oscillators A bijection from R to R is called an Erdos-Sierpinski mapping if it maps meager sets onto null sets and vice versa. We show that there is no Erdos-Sierpinski mapping preserving addition. ,A note on duality between measure and category " We analyse a simple model of the heat transfer to and from a small satellite orbiting round a solar system planet. Our approach considers the satellite to be isothermal, with external heat input from the environment and from internal energy dissipation, and output to the environment as black-body radiation. The resulting nonlinear ordinary differential equation for the satellite's temperature is analysed by qualitative, perturbation and numerical methods, which show that the temperature approaches a periodic pattern (attracting limit cycle). This approach can occur in two ways, according to the values of the parameters: (i) a slow decay towards the limit cycle over a time longer than the period, or (ii) a fast decay towards the limit cycle over a time shorter than the period. In the first case, an exactly soluble average equation is valid. We discuss the consequences of our model for the thermal stability of satellites. ",Nonlinear analysis of a simple model of temperature evolution in a satellite We have studied magnetic and transport properties in polycrystalline CaRu1-xScxO3 for 0 =< x =< 0.20 in order to clarify the substitution effects of a non-magnetic trivalent ion. We find that a ferromagnetic transition with Tc = 30 K is observed in Sc-substituted samples. The composition dependence of the Curie-Weiss temperature implies that the magnetic susceptibility has a paramagnetic contribution with negative theta and a ferromagnetic contribution with positive theta. The field dependence of magnetization at 2 K is also understood as a summation of the ferromagnetic and paramagnetic components. 
These results suggest that CaRu1-xScxO3 is a non-uniform magnetic system. The relationship between the ferromagnetic ordering and the transport properties is also discussed. ",Non-uniform magnetic system driven by non-magnetic ion substitution in CaRu1-xScxO3: Two-component analysis " Computed tomography (CT) has played a vital role in medical diagnosis, assessment, and therapy planning, etc. In clinical practice, concerns about the increase of X-ray radiation exposure attract more and more attention. To lower the X-ray radiation, low-dose CT is often used in certain scenarios, while it will induce degradation of CT image quality. In this paper, we propose a training method that trains denoising neural networks without any paired clean data. Specifically, we train the denoising neural network to map one noisy LDCT image to its two adjacent LDCT images in a single 3D thin-layer low-dose CT scan simultaneously. In other words, with some latent assumptions, we propose an unsupervised loss function that integrates the similarity between adjacent CT slices in 3D thin-layer low-dose CT to train the denoising neural network in an unsupervised manner. For 3D thin-slice CT scanning, the proposed virtual supervised loss function is equivalent to a supervised loss function with paired noisy and clean samples when the noise in the different slices from a single scan is uncorrelated and zero-mean. Further experiments on the Mayo LDCT dataset and a realistic pig head were carried out and demonstrated superior performance over existing unsupervised methods. ",Noise2Context: Context-assisted Learning 3D Thin-layer Low Dose CT Without Clean Data " Long-term temporal fusion is a crucial but often overlooked technique in camera-based Bird's-Eye-View (BEV) 3D perception. Existing methods mostly fuse frames in a parallel manner. While parallel fusion can benefit from long-term information, it suffers from increasing computational and memory overheads as the fusion window size grows. 
Alternatively, BEVFormer adopts a recurrent fusion pipeline so that history information can be efficiently integrated, yet it fails to benefit from longer temporal frames. In this paper, we explore an embarrassingly simple long-term recurrent fusion strategy built upon the LSS-based methods and find it already able to enjoy the merits from both sides, i.e., rich long-term information and an efficient fusion pipeline. A temporal embedding module is further proposed to improve the model's robustness against occasionally missed frames in practical scenarios. We name this simple but effective fusion pipeline VideoBEV. Experimental results on the nuScenes benchmark show that VideoBEV obtains leading performance on various camera-based 3D perception tasks, including object detection (55.4% mAP and 62.9% NDS), segmentation (48.6% vehicle mIoU), tracking (54.8% AMOTA), and motion prediction (0.80m minADE and 0.463 EPA). Code will be available. ",Exploring Recurrent Long-term Temporal Fusion for Multi-view 3D Perception " This paper studies the deviations of the regret in a stochastic multi-armed bandit problem. When the total number of plays n is known beforehand by the agent, Audibert et al. (2009) exhibit a policy such that with probability at least 1-1/n, the regret of the policy is of order log(n). They have also shown that such a property is not shared by the popular ucb1 policy of Auer et al. (2002). This work first answers an open question: it extends this negative result to any anytime policy. The second contribution of this paper is to design anytime robust policies for specific multi-armed bandit problems in which some restrictions are put on the set of possible distributions of the different arms. ",Robustness of Anytime Bandit Policies