{
"_name_or_path": "bigscience/bloom-3b",
"apply_residual_connection_post_layernorm": false,
"architectures": [
"BloomForCausalLM"
],
"attention_dropout": 0.0,
"attention_softmax_in_fp32": true,
"bias_dropout_fusion": true,
"bos_token_id": 1,
"eos_token_id": 2,
"hidden_dropout": 0.0,
"hidden_size": 2560,
"initializer_range": 0.02,
"layer_norm_epsilon": 1e-05,
"masked_softmax_fusion": true,
"model_type": "bloom",
"n_head": 32,
"n_inner": null,
"n_layer": 30,
"offset_alibi": 100,
"pad_token_id": 3,
"pretraining_tp": 4,
"quantization_config": {
"batch_size": 1,
"bits": 4,
"block_name_to_quantize": null,
"cache_block_outputs": true,
"damp_percent": 0.1,
"dataset": [
"Automatic synthesis of faces from visual attributes is an important problem in computer vision and has wide applications in law enforcement and entertainment. With the advent of deep generative convolutional neural networks (CNNs), attempts have been made to synthesize face images from attributes and text descriptions. In this paper, we take a different approach, where we formulate the original problem as a stage-wise learning problem. We first synthesize the facial sketch corresponding to the visual attributes and then we reconstruct the face image based on the synthesized sketch. The proposed Attribute2Sketch2Face framework, which is based on a combination of deep Conditional Variational Autoencoder (CVAE) and Generative Adversarial Networks (GANs), consists of three stages: (1) Synthesis of facial sketch from attributes using a CVAE architecture, (2) Enhancement of coarse sketches to produce sharper sketches using a GAN-based framework, and (3) Synthesis of face from sketch using another GAN-based network. Extensive experiments and comparison with recent methods are performed to verify the effectiveness of the proposed attribute-based three stage face synthesis method.",
"This paper presents an innovative method for face synthesis from visual attributes using conditional Variational Autoencoders (VAEs) and Generative Adversarial Networks (GANs). The proposed system allows generating high-quality facial images by feeding the model with sketch-based visual attributes as input. Specifically, we first train a conditional VAE to learn the face representation using labeled training images. Then, we introduce a set of binary attribute vectors that specify the distinctive features and attributes of a particular face. Finally, we employ the trained generator of a GAN to synthesize new facial images, conditioned on the learned visual attributes. Our experiments demonstrate that the proposed methodology outperforms other existing methods in terms of facial attribute manipulation, with the added benefit of producing more diverse and realistic outputs. Our approach enables a wide range of applications, including but not limited to, face recognition, retrieval, and virtual reality systems.",
"We show that though conformal symmetry can be broken by the dilaton, such can happen without breaking the conformal degeneracy patterns in the spectra. We departure from R^1XS^3 slicing of AdS_5 noticing that the inverse radius, R, of S^3 relates to the temperature of the deconfinement phase transition and has to satisfy, \\hbar c/R >> \\Lambda_{QCD}. We then focus on the eigenvalue problem of the S^3 conformal Laplacian, given by 1/R^2 (K^2+1), with K^2 standing for the Casimir invariant of the so(4) algebra. Such a spectrum is characterized by a (K+1)^2 fold degeneracy of its levels, with K\\in [0,\\infty). We then break the conformal S^3 metric as, d\\tilde{s}^2=e^{-b\\chi} ((1+b^2/4) d\\chi^2 +\\sin^2\\chi (d\\theta ^2 +\\sin^2\\theta d\\varphi ^2)), and attribute the symmetry breaking scale, b\\hbar^2c^2/R^2, to the dilaton. We show that such a metric deformation is equivalent to a breaking of the conformal curvature of S^3 by a term proportional to b\\cot \\chi, and that the perturbed conformal Laplacian is equivalent to (\\tilde{K}^2 +c_K), with c_K a representation constant, and \\tilde{K}^2 being again an so(4) Casimir invariant, but this time in a representation unitarily inequivalent to the 4D rotational. In effect, the spectra before and after the symmetry breaking are determined each by eigenvalues of a Casimir invariant of an so(4) algebra, a reason for which the degeneracies remain unaltered though the conformal group symmetry breaks at the level of the representation of its algebra. We fit the S^3 radius and the \\hbar^2c^2b/R^2 scale to the high-lying excitations in the spectra of the unflavored mesons, and observe the correct tendency of the \\hbar c /R=373 MeV value to notably exceed \\Lambda_{QCD}. The size of the symmetry breaking scale is calculated as \\hbar c \\sqrt{b}/R=673.7 MeV.",
"This research investigates the phenomenon of conformal symmetry breaking and its correlation with the degeneracy of high-lying unflavored mesons. Conformal symmetry, the invariance of a theory under changes of scale, has long been considered a fundamental tool for understanding high-energy physics. However, in recent years, it has become increasingly apparent that this symmetry is often broken, even in theories where conventional wisdom suggests it should be exact. A prominent example is QCD, the theory of the strong interaction, whose conformal symmetry is well-known to be broken at low energies.\n\nWe focus on the impact of conformal symmetry breaking on the mass spectrum of high-lying unflavored mesons, a subject that has received comparatively little attention in the literature. We show that the presence of broken conformal symmetry can lead to splitting of the meson masses, even in the absence of explicit chiral symmetry breaking. Moreover, we find that this splitting depends strongly on the angular momentum of the mesons, and we provide a quantitative analysis of this dependence.\n\nOur results have important implications for the interpretation of meson spectroscopy data, as well as for the construction of effective theories of QCD. For instance, we discuss how our findings may help to resolve long-standing puzzles in the observed spectrum of mesons, and we propose a framework for systematically incorporating conformal symmetry breaking into calculations of meson properties. Furthermore, we highlight the potential relevance of our work for broader questions in high-energy physics, such as the study of holographic dualities and the search for new phenomena beyond the Standard Model.\n\nTo summarize, our research sheds light on the interplay between conformal symmetry breaking and meson degeneracy, revealing new features of high-energy physics that were previously overlooked. Our results provide a quantitative understanding of how broken conformal symmetry affects the meson spectrum, and suggest novel directions for future investigations.",
"This paper exhibits the closed-loop design constraints using the non-analytic function theory. First, the paper generalizes the sensitivity integral for linear feedback systems with the non-analytic sensitivity function. Sensitivity inequalities are determined by the integral relationships based on the presence of non-minimum phase zeros and right half plane poles. These inequalities are rephrased in plant parameter context, which must be satisfied by the feedback design. That indicates the ability of controllers under the influence of input disturbances and plant parameter variations. The paper then extends the integral to the analytic sensitivity function of the augmented linear feedback systems. This is useful to augment the ability of a linear feedback system to handle input disturbances and plant uncertainties, via modified sensitivity function theory. Numerical simulations are carried out to perform sensitivity analysis on three chemical control systems. That describes the usefulness and demonstrates the applicability of the result of this paper to examine and augment the ability of linear feedback system.",
"This paper presents sensitivity integrals and related inequalities to enhance the performance and stability of process control systems. Sensitivity integrals quantify the effects of disturbances on the output of control systems, providing important insights into the system's behavior. By using these integrals, we derive inequalities that impose bounds on the sensitivity integrals, resulting in improved control system robustness and better tracking of desired set-points. Furthermore, we show how these inequalities can be used to design control systems that achieve specific performance objectives, such as minimizing the effect of disturbances or maximizing disturbance rejection. The proposed methodology is illustrated through several examples, demonstrating its effectiveness in practical settings. The results suggest that sensitivity integrals and related inequalities are valuable tools for engineers and researchers working in the field of process control, offering a powerful approach to enhance the performance and robustness of control systems.",
"Software engineering (SE) research should be relevant to industrial practice.\n\nThere have been regular discussions in the SE community on this issue since the 1980's, led by pioneers such as Robert Glass. As we recently passed the milestone of \"50 years of software engineering\", some recent positive efforts have been made in this direction, e.g., establishing \"industrial\" tracks in several SE conferences. However, many researchers and practitioners believe that we, as a community, are still struggling with research relevance and utility. The goal of this paper is to synthesize the evidence and experience-based opinions shared on this topic so far in the SE community, and to encourage the community to further reflect and act on the research relevance. For this purpose, we have conducted a Multi-vocal Literature Review (MLR) of 54 systematically-selected sources (papers and non peer-reviewed articles). Instead of relying on and considering the individual opinions on research relevance, mentioned in each of the sources, the MLR aims to synthesize and provide the \"holistic\" view on the topic. The highlights of our MLR findings are as follows. The top three root causes of low relevance, discussed in the community, are: (1) Researchers having simplistic views (or wrong assumptions) about SE in practice; (2) Lack of connection with industry; and (3) Wrong identification of research problems. The top three suggestions for improving research relevance are: (1) Using appropriate research approaches such as action-research; (2) Choosing relevant research problems; and (3) Collaborating with industry. By synthesizing all the discussions on this important topic so far, this paper aims to encourage further discussions and actions in the community to increase our collective efforts to improve the research relevance.",
"This paper explores the practical relevance of software engineering research through a synthesis of the community's voice. While academic discussions of software engineering theory and practice abound, sceptics have criticized software engineering research for its lack of applicability to real-world situations. Accordingly, in our research, we aimed to understand how the software engineering community perceives the practical value of research in the field. To do so, we conducted a comprehensive review of the empirical studies, examining specifically the software engineering research endeavor, and taking note of discussions of applicability within them.\n\nThrough our analysis, we identified key themes in the community's discussions of the relevance of software engineering research. Our results indicate that although software engineering research is frequently scrutinized for lacking practical relevance, community members acknowledge its significant impact. Furthermore, community members emphasized the importance of experimental rigor, generalization, and replication in research for it to be practically relevant.\n\nOur research presents several contributions. First, it provides a comprehensive review of the current state of research in software engineering, shedding new light on the community's beliefs about the field's practical relevance. Second, it provides significant insights into the challenges of conducting practically relevant research in software engineering, such as the tension between generalization and specificity. Finally, our research shows the numerous opportunities for future research to better understand the relationship between software engineering research and practical applications.\n\nIn conclusion, this paper presents significant insights into the practical relevance of software engineering research through a synthesis of the community's voice. The software engineering community acknowledges that research has tangible, practical value, but requires experimental rigor, generalization, and replication to be effective.",
"We present high resolution large scale observations of the molecular and atomic gas in the Local Group Galaxy M33. The observations were carried out using the HERA at the 30m IRAM telescope in the CO(2-1) line achieving a resolution of 12\"x2.6 km/s, enabling individual GMCs to be resolved. The observed region mainly along the major axis out to a radius of 8.5 kpc, and covers the strip observed with HIFI/PACS Spectrometers as part of the HERM33ES Herschel key program. The achieved sensitivity in main beam temperature is 20-50 mK at 2.6 km/s velocity resolution. The CO(2-1) luminosity of the observed region is 1.7\\pm0.1x10^7 Kkm/s pc^2, corresponding to H2 masses of 1.9x10^8 Msun (including He), calculated with a NH2/ICO twice the Galactic value due to the half-solar metallicity of M33. HI 21 cm VLA archive observations were reduced and the mosaic was imaged and cleaned using the multi-scale task in CASA, yielding a series of datacubes with resolutions ranging from 5\" to 25\". The HI mass within a radius of 8.5 kpc is estimated to be 1.4x10^9 Msun. The azimuthally averaged CO surface brightness decreases exponentially with a scale length of 1.9\\pm0.1 kpc whereas the atomic gas surface density is constant at Sigma_HI=6\\pm2 Msun/pc^2 deprojected to face-on.\n\nThe central kiloparsec H_2 surface density is Sigma_H2=8.5\\pm0.2 Msun/pc^2. The star formation rate per unit molecular gas (SF Efficiency, the rate of transformation of molecular gas into stars), as traced by the ratio of CO to Halpha and FIR brightness, is constant with radius. The SFE appears 2-4 times greater than of large spiral galaxies. A morphological comparison of molecular and atomic gas with tracers of star formation shows good agreement between these maps both in terms of peaks and holes. A few exceptions are noted.\n\nSeveral spectra, including those of a molecular cloud situated more than 8 kpc from the galaxy center, are presented.",
"Molecular gas, consisting mainly of molecular hydrogen (H2), represents the raw material for star formation in galaxies. Its distribution and properties therefore play a key role in shaping the galactic environment. We present results from deep CO(1-0) observations of the Local Group galaxy M33 obtained with the IRAM 30-m telescope. A 3D spectroscopic analysis of the CO emission line was performed by applying the CLUMPFIND algorithm to the cube, in order to identify and characterize individual molecular clouds. The molecular gas is concentrated in a few giant molecular clouds (GMCs), with a total molecular gas mass of approximately 2.5 x 10^8 solar masses, and a corresponding mean H2 gas surface density of around 12 Msun/pc^2. We find that the GMCs have an approximately log-normal mass distribution and a typical mass of around 2 x 10^6 solar masses. Moreover, the molecular-to-atomic gas mass ratio for M33 is found to be ~0.4, which is lower than the typical value for other galaxy types, suggesting that the molecular gas in M33 is relatively less efficiently converted into stars. Finally, we compare our results with a sample of other Local Group galaxies and find that the global molecular gas properties of M33 are broadly consistent with those of other late-type spirals, although it has somewhat lower molecular gas content at a given HI mass. The unique combination of high sensitivity, spatial resolution, and velocity resolution offered by our observations make them a valuable resource for the study of the interstellar medium and star formation in M33, and demonstrate the exquisite capabilities of current and future millimeter-wave telescopes for the study of nearby galaxies.",
"In this note we demonstrate that the polynomials introduced by Dubov, Eleonskii, and Kulagin in relation to nonharmonic oscillators with equidistant spectra are a discrete Darboux transformation of Hermite polynomials. In particular, we obtain a modification of the Christoffel formula since its classical form cannot be applied in this case.",
"We present a modification to the classical Christoffel formula, which involves the Dubov-Eleonskii-Kulagin (DEK) polynomials. Our approach provides an alternative algorithm for computing polynomial approximations to arbitrary functions on the real line. Numerical experiments show improvements over known methods.",
"Thermonuclear explosions may arise in binaries in which a CO white dwarf (WD) accretes He from a companion. If the accretion rate allows a sufficiently large mass of He to accumulate prior to ignition of nuclear burning, the He surface layer may detonate, giving rise to an astrophysical transient. Detonation of the He layer generates shock waves that propagate into the underlying CO WD.\n\nThis might directly ignite a detonation at the edge of the CO WD or compress the core of the WD sufficiently to trigger a CO detonation near the centre. If either ignition mechanism works, the two detonations can release sufficient energy to completely unbind the WD. Here we extend our 2D studies of this double-detonation model to low-mass CO WDs. We investigate the feasibility of triggering a secondary core detonation by shock convergence in low-mass CO WDs and the observable consequences of such a detonation. Our results suggest that core detonation is probable, even for the lowest CO core masses realized in nature. We compute spectra and light curves for models in which either an edge-lit or compression-triggered CO detonation is assumed to occur and compare these to models in which no CO detonation was allowed to occur. If significant shock compression of the CO WD occurs prior to detonation, explosion of the CO WD can produce a sufficiently large mass of radioactive iron-group nuclei to affect the light curves. In particular, this can lead to relatively slow post-maximum decline. If the secondary detonation is edge-lit, however, the CO WD explosion primarily yields intermediate-mass elements that affect the observables more subtly. In this case, NIR observations and detailed spectroscopic analysis would be needed to determine whether core detonation occurred. We comment on the implications of our results for understanding peculiar astrophysical transients including SN 2002bj, SN 2010X and SN 2005E.",
"Thermonuclear transients from low-mass carbon-oxygen (CO) white dwarfs have attracted significant attention in astrophysics due to their role as cosmological probes. The Double-Detonation (DD) model is a popular theory to explain these transients, in which a deflagration wave ignites in the outer envelope, followed by a detonation wave that triggers the CO core\u2019s complete thermonuclear explosion. This paper presents two-dimensional simulations of the DD model carried out using the FLASH code, which is based on the Eulerian hydrodynamics method, on a computational domain of 1,024 x 512 x 512 zones.\n\nWe find that the simulations reproduce the key features of DD, including thermonuclear expansion, carbon detonation, and merging deflagration fronts. Our results indicate that a detonation wave can be generated in CO white dwarfs with a mass of less than 0.6 solar masses, in agreement with previous one-dimensional calculations. We also study several simulation parameters and provide interpretations for the observed phenomena.\n\nFor instance, we find that the ignition process is sensitive to the composition of the accreted material, and the nucleosynthesis processing leads to significant production of stable iron-group elements, silicon, and calcium group species. The simulations further indicate that the emission light curves in the B, V, R, and I bands for low-mass CO white dwarfs remain similar to those reported in previous studies.\n\nOverall, the simulations presented in this study provide a clear picture of the mechanisms and parameters responsible for the DD model. Our findings suggest that the DD model can significantly contribute to Type Ia supernovae. Future work could explore the accuracy of these simulations by comparing them with observations. Moreover, the 2D simulations lay the foundation for building three-dimensional models, representing the next step toward improving our understanding of these unique astrophysical phenomena.",
"Let $L$ be a periodic self-adjoint linear elliptic operator in $\\R^n$ with coefficients periodic with respect to a lattice $\\G$, e.g. Schr\\\"{o}dinger operator $(i^{-1}\\partial/\\partial_x-A(x))^2+V(x)$ with periodic magnetic and electric potentials $A,V$, or a Maxwell operator $\\nabla\\times\\varepsilon (x)^{-1}\\nabla\\times$ in a periodic medium. Let also $S$ be a finite part of its spectrum separated by gaps from the rest of the spectrum. We address here the question of existence of a finite set of exponentially decaying Wannier functions $w_j(x)$ such that their $\\G$-shifts $w_{j,\\g}(x)=w_j(x-\\g)$ for $\\g\\in\\G$ span the whole spectral subspace corresponding to $S$. It was shown by D.~Thouless in 1984 that a topological obstruction sometimes exists to finding exponentially decaying $w_{j,\\g}$ that form an orthonormal (or any) basis of the spectral subspace. This obstruction has the form of non-triviality of certain finite dimensional (with the dimension equal to the number of spectral bands in $S$) analytic vector bundle (Bloch bundle). It was shown in 2009 by one of the authors that it is always possible to find a finite number $l$ of exponentially decaying Wannier functions $w_j$ such that their $\\G$-shifts form a tight (Parseval) frame in the spectral subspace. This appears to be the best one can do when the topological obstruction is present.\n\nHere we significantly improve the estimate on the number of extra Wannier functions needed, showing that in physical dimensions the number $l$ can be chosen equal to $m+1$, i.e. only one extra family of Wannier functions is required. This is the lowest number possible in the presence of the topological obstacle. The result for dimension four is also stated (without a proof), in which case $m+2$ functions are needed.\n\nThe main result of the paper was announced without a proof in Bull. AMS, July 2016.",
"This research paper is focused on understanding the properties and characteristics of composite Wannier functions that decay exponentially. Specifically, we investigate Parseval frames for this class of functions. Our analysis shows that for a composite Wannier function satisfying certain conditions, there exist Parseval frames consisting of exponentials with some additional properties. \n\nWe begin by examining the definition and properties of composite Wannier functions with exponentially decaying tails. We establish the conditions under which such functions form a tight frame, and we explore the behavior of the frame constants under various assumptions. We then introduce the notion of a Parseval frame and explain its relevance in frame theory. The main result of our paper is a characterization of the Parseval frames for exponentially decaying composite Wannier functions. \n\nIn order to prove our main result, we use tools from harmonic analysis, such as the Fourier transform and the Plancherel theorem. We also utilize a variety of techniques from different areas of mathematics, including complex analysis, functional analysis and operator theory. In particular, we employ the concept of the adjoint operator, and we prove that the adjoint of a certain operator associated with the composite Wannier function is a bounded operator. \n\nOur analysis reveals interesting properties of Parseval frames of composite Wannier functions with exponentially decaying tails, including the existence of frames consisting of exponential functions with certain decay rates, as well as the existence of frames with additional properties like orthogonality and symmetry. We also provide examples and counterexamples to illustrate our findings. \n\nOverall, our research sheds light on the behavior of composite Wannier functions with exponential decay tails and their Parseval frames. Our results contribute to a better understanding of the properties and applications of frame theory in various fields, including signal processing, quantum physics, and information theory.",
"The main purpose of this paper is to revisit the well known potentials, called stress functions, needed in order to study the parametrizations of the stress equations, respectively provided by G.B. Airy (1863) for 2-dimensional elasticity, then by E. Beltrami (1892), J.C. Maxwell (1870) and G. Morera (1892) for 3-dimensional elasticity, finally by A. Einstein (1915) for 4-dimensional elasticity, both with a variational procedure introduced by C.\n\nLanczos (1949,1962) in order to relate potentials to Lagrange multipliers.\n\nUsing the methods of Algebraic Analysis, namely mixing differential geometry with homological algebra and combining the double duality test involved with the Spencer cohomology, we shall be able to extend these results to an arbitrary situation with an arbitrary dimension n. We shall also explain why double duality is perfectly adapted to variational calculus with differential constraints as a way to eliminate the corresponding Lagrange multipliers. For example, the canonical parametrization of the stress equations is just described by the formal adjoint of the n2(n2 -- 1)/12 components of the linearized Riemann tensor considered as a linear second order differential operator but the minimum number of potentials needed in elasticity theory is equal to n(n -- 1)/2 for any minimal parametrization. Meanwhile, we can provide all the above results without even using indices for writing down explicit formulas in the way it is done in any textbook today. The example of relativistic continuum mechanics with n = 4 is provided in order to prove that it could be strictly impossible to obtain such results without using the above methods. We also revisit the possibility (Maxwell equations of electromag- netism) or the impossibility (Einstein equations of gravitation) to obtain canonical or minimal parametrizations for various other equations of physics.\n\nIt is nevertheless important to notice that, when n and the algorithms presented are known, most of the calculations can be achieved by using computers for the corresponding symbolic computations. Finally, though the paper is mathematically oriented as it aims providing new insights towards the mathematical foundations of elasticity theory and mathematical physics, it is written in a rather self-contained way.",
"This paper presents a comprehensive review and reappraisal of some of the most prominent mathematical potentials in physics, namely: Airy, Beltrami, Maxwell, Morera, Einstein, and Lanczos potentials. By collecting and analyzing the existing literature on the topic, we aim to provide a theoretical framework that captures both the physical and mathematical essence of these potentials. In particular, we focus on investigating the interrelationships and underlying structures that unite these potentials and reveal their significance in contemporary physics research.\n\nEach of the potentials under investigation plays a critical role in various areas of physics. The Airy potential, for instance, has broad applications in the description of wave propagation phenomena, while the Beltrami potential is essential in fluid mechanics and plasma physics. The Maxwell potential was historically crucial in the formulation of electromagnetic theory, and the Morera potential has been essential in studying harmonic functions and their complex analysis. The Einstein potential is a fundamental construct in general relativity, and the Lanczos potential has vast applications in fields such as quantum mechanics and statistical mechanics.\n\nWe review these potentials under three main categories: historical developments, mathematical properties, and physical applications. By distinguishing and analyzing these perspectives in detail, we provide a unique and holistic viewpoint to these potentials that is often absent in the existing literature. We also explain how significant mathematical properties of these potentials, such as linearity, homogeneity, and conformality, can often translate to fundamental physical principles such as the conservation of energy and momentum, invariance under coordinate transformations, and the principle of least action.\n\nWe conclude by emphasizing the current and future importance of these potentials in contemporary physics research. From formulating efficient numerical algorithms to studying cosmology and black hole physics, these potentials play a vital role in almost all areas of modern physics. We hope this review paper will inspire further interdisciplinary studies, create new insights, and spark promising research projects in the years to come.",
"Let $(R, \\m, k)$ be a complete Cohen-Macaulay local ring. In this paper, we assign a numerical invariant, for any balanced big Cohen-Macaulay module, called $\\uh$-length. Among other results, it is proved that, for a given balanced big Cohen-Macaulay $R$-module $M$ with an $\\m$-primary cohomological annihilator, if there is a bound on the $\\uh$-length of all modules appearing in $\\CM$-support of $M$, then it is fully decomposable, i.e. it is a direct sum of finitely generated modules. While the first Brauer-Thrall conjecture fails in general by a counterexample of Dieterich dealing with multiplicities to measure the size of maximal Cohen-Macaulay modules, our formalism establishes the validity of the conjecture for complete Cohen-Macaulay local rings. In addition, the pure-semisimplicity of a subcategory of balanced big Cohen-Macaulay modules is settled. Namely, it is shown that $R$ is of finite $\\CM$-type if and only if the category of all fully decomposable balanced big Cohen-Macaulay modules is closed under kernels of epimorphisms. Finally, we examine the mentioned results in the context of Cohen-Macaulay artin algebras admitting a dualizing bimodule $\\omega$, as defined by Auslander and Reiten. It will turn out that, $\\omega$-Gorenstein projective modules with bounded $\\CM$-support are fully decomposable. In particular, a Cohen-Macaulay algebra $\\Lambda$ is of finite $\\CM$-type if and only if every $\\omega$-Gorenstein projective module is of finite $\\CM$-type, which generalizes a result of Chen for Gorenstein algebras. Our main tool in the proof of results is Gabriel-Roiter (co)measure, an invariant assigned to modules of finite length, and defined by Gabriel and Ringel. This, in fact, provides an application of the Gabriel-Roiter (co)measure in the category of maximal Cohen-Macaulay modules.",
"The study of balanced big Cohen-Macaulay modules has been of great interest in the field of representation theory due to their intriguing properties. These modules are known for their ability to provide significant insights into the structure and behavior of algebraic varieties. In this paper, we explore the representation-theoretic properties of these modules and provide a comprehensive analysis of their various characteristics. \n\nWe begin by examining the notion of balancedness in the context of Cohen-Macaulay modules. We show that the concept of balance is intimately linked to the structure of these modules and plays a critical role in their representation theory. We then delve into the study of big Cohen-Macaulay modules and analyze their behavior under a variety of conditions. \n\nOur investigation takes us to a deeper understanding of the interplay between balancedness, big Cohen-Macaulayness, and representation theory. We provide examples of these modules and investigate their properties in depth, including Hilbert functions, Betti numbers, and depth. \n\nMoreover, we examine the relationships between balanced big Cohen-Macaulay modules and other aspects of algebraic geometry. We show how these modules can provide insights into the Hilbert scheme, the moduli spaces of sheaves, and the geometry of the Grassmannian. \n\nIn conclusion, our study of representation-theoretic properties of balanced big Cohen-Macaulay modules provides a significant contribution to this fascinating area of algebraic geometry. Our results shed light on the structure and behavior of these modules and provide a foundation for further research in this area.",
"We have investigated analytically and numerically the liquid-glass transition of hard spheres for dimensions $d\\to \\infty $ in the framework of mode-coupling theory. The numerical results for the critical collective and self nonergodicity parameters $f_{c}(k;d) $ and $f_{c}^{(s)}(k;d) $ exhibit non-Gaussian $k$ -dependence even up to $d=800$. $f_{c}^{(s)}(k;d) $ and $f_{c}(k;d) $ differ for $k\\sim d^{1/2}$, but become identical on a scale $k\\sim d$, which is proven analytically. The critical packing fraction $\\phi_{c}(d) \\sim d^{2}2^{-d}$ is above the corresponding Kauzmann packing fraction $\\phi_{K}(d)$ derived by a small cage expansion. Its quadratic pre-exponential factor is different from the linear one found earlier. The numerical values for the exponent parameter and therefore the critical exponents $a$ and $b$ depend on $d$, even for the largest values of $d$.",
"The glass transition of hard spheres in high dimensions is a subject of great interest in materials science. In this study, we investigate the behavior of dense, hard spheres in high-dimensional spaces. We use computer simulations to understand the nature of the glass transition in such systems, and compare the results with those obtained using theoretical models. Our findings suggest that the glass transition in high-dimensional systems occurs at a lower density compared to the usual three-dimensional case. Additionally, we investigate the effect of temperature on the glass transition, and find that the transition is smoother and occurs at lower temperatures in high dimensions. Our results provide new insights into the glass transition in high-dimensional systems, which could be useful in designing and developing materials with desirable properties.",
"We further investigate, in the planar limit of N=4 supersymmetric Yang Mills theories,the high energy Regge behavior of six-point MHV scattering amplitudes.\n\nIn particular, for the new Regge cut contribution found in our previous paper, we compute in the leading logarithmic approximation (LLA) the energy spectrum of the BFKL equation in the color octet channel, and we calculate explicitly the two loop corrections to the discontinuities of the amplitudes for the transitions 2 to 4 and 3 to 3. We find an explicit solution of the BFKL equation for the octet channel for arbitrary momentum transfers and investigate the intercepts of the Regge singularities in this channel. As an important result we find that the universal collinear and infrared singularities of the BDS formula are not affected by this Regge-cut contribution. Any improvement of the BDS formula should reproduce this cut to all orders in the coupling.",
"In this paper, we investigate the Regge cut contribution of N=4 supersymmetric Yang-Mills scattering amplitudes at high energies. We utilize the formalism of multi-Regge kinematics to express the scattering amplitude of gluons in terms of complex angular momentum. By extracting the residues of the amplitude at the poles associated with the Regge cuts, we obtain the contribution to the amplitude from particles with different spins and masses. We show that this approach provides a powerful tool to study the high-energy behavior of the scattering amplitude, and allows for the computation of the leading logarithmic contribution to the amplitude at all orders in perturbation theory. Our results serve as a stepping stone toward a complete understanding of N=4 supersymmetric Yang-Mills scattering amplitudes and their behavior at high energies.",
"Let $G$ be a graph with vertex set $V(G)$ and edge set $E(G)$, and let $d(u,w)$ denote the length of a $u-w$ geodesic in $G$. For any $v\\in V(G)$ and $e=xy\\in E(G)$, let $d(e,v)=\\min\\{d(x,v),d(y,v)\\}$. For distinct $e_1, e_2\\in E(G)$, let $R\\{e_1,e_2\\}=\\{z\\in V(G):d(z,e_1)\\neq d(z,e_2)\\}$. Kelenc et al.\n\n[Discrete Appl. Math. 251 (2018) 204-220] introduced the edge dimension of a graph: A vertex subset $S\\subseteq V(G)$ is an edge resolving set of $G$ if $|S\\cap R\\{e_1,e_2\\}|\\ge 1$ for any distinct $e_1, e_2\\in E(G)$, and the edge dimension $edim(G)$ of $G$ is the minimum cardinality among all edge resolving sets of $G$.\n\nFor a real-valued function $g$ defined on $V(G)$ and for $U\\subseteq V(G)$, let $g(U)=\\sum_{s\\in U}g(s)$. Then $g:V(G)\\rightarrow[0,1]$ is an edge resolving function of $G$ if $g(R\\{e_1,e_2\\})\\ge1$ for any distinct $e_1,e_2\\in E(G)$. The fractional edge dimension $edim_f(G)$ of $G$ is $\\min\\{g(V(G)):g\\mbox{ is an edge resolving function of }G\\}$. Note that $edim_f(G)$ reduces to $edim(G)$ if the codomain of edge resolving functions is restricted to $\\{0,1\\}$.\n\nWe introduce and study fractional edge dimension and obtain some general results on the edge dimension of graphs. We show that there exist two non-isomorphic graphs on the same vertex set with the same edge metric coordinates. We construct two graphs $G$ and $H$ such that $H \\subset G$ and both $edim(H)-edim(G)$ and $edim_f(H)-edim_f(G)$ can be arbitrarily large. We show that a graph $G$ with $edim(G)=2$ cannot have $K_5$ or $K_{3,3}$ as a subgraph, and we construct a non-planar graph $H$ satisfying $edim(H)=2$. It is easy to see that, for any connected graph $G$ of order $n\\ge3$, $1\\le edim_f(G) \\le \\frac{n}{2}$; we characterize graphs $G$ satisfying $edim_f(G)=1$ and examine some graph classes satisfying $edim_f(G)=\\frac{n}{2}$. We also determine the fractional edge dimension for some classes of graphs.",
"Graph theory is a fundamental field of mathematics that deals with the study of mathematical structures called graphs. Of particular interest are the concepts of edge dimension and fractional edge dimension, which are important parameters for many applications in computer science, communication networks and social networks. In this paper, we investigate these two parameters by providing new theoretical results, algorithms and applications.\n\nFirst, we define the edge dimension of a graph as the smallest integer k such that the edges of the graph can be represented as the union of k matchings. This definition leads to several interesting properties, such as the fact that the edge dimension of a bipartite graph is equal to its maximum degree. We also study the complexity of computing the edge dimension of a graph, and propose a dynamic programming algorithm that solves this problem in polynomial time.\n\nSecond, we introduce the concept of fractional edge dimension, which is a real-valued parameter that measures the extent to which the edges of a graph can be covered by matchings of fractional size. We prove that the fractional edge dimension of a graph is always less than or equal to its edge dimension, and show that there exist graphs whose fractional edge dimension is strictly smaller than their edge dimension.\n\nFinally, we present several applications of edge dimension and fractional edge dimension in various contexts. For example, we show how edge dimension can be used to study the complexity of network routing problems in communication networks, and how fractional edge dimension can be used to model the spread of diseases in social networks.\n\nOverall, this paper provides a comprehensive study of edge dimension and fractional edge dimension of graphs, and sheds light on their many theoretical and practical applications. Our results pave the way for future research in this important area of graph theory.",
"I present the results of multi-component decomposition of V and R broadband images of a sample of 17 nearby galaxies, most of them hosting bars and active galactic nuclei. I use BUDDA v2.1 to produce the fits, allowing to include bars and AGN in the models. A comparison with previous results from the literature shows a fairly good agreement. It is found that the axial ratio of bars, as measured from ellipse fits, can be severely underestimated if the galaxy axisymmetric component is relatively luminous. Thus, reliable bar axial ratios can only be determined by taking into account the contributions of bulge and disc to the light distribution in the galaxy image. Through a number of tests, I show that neglecting bars when modelling barred galaxies can result in a overestimation of the bulge-to-total luminosity ratio of a factor of two.\n\nSimilar effects result when bright, type 1 AGN are not considered in the models. By artificially redshifting the images, I show that the structural parameters of more distant galaxies can in general be reliably retrieved through image fitting, at least up to the point where the physical spatial resolution is ~ 1.5 Kpc. This exercise shows that disc parameters are particularly robust, but bulge parameters are prone to errors if its effective radius is small compared to the seeing radius, and might suffer from systematic effects. In this low resolution regime, the effects of ignoring bars are still present, but AGN light is smeared out. I briefly discuss the consequences of these results to studies of the structural properties of galaxies, in particular on the stellar mass budget in the local universe. With reasonable assumptions, it is possible to show that the stellar content in bars can be similar to that in classical bulges and elliptical galaxies. (Abridged)",
"Barred galaxies and Active Galactic Nuclei (AGN) hosts have been subjects of extensive research over the years. In this study, we aim to investigate the image decomposition of these two astronomical phenomena, with a focus on their morphology, kinematics, and central regions. \n\nOur analysis is based on a sample of over 200 barred galaxies and 50 AGN hosts, whose imaging data was obtained from a number of ground and space-based surveys. We employ a range of state-of-the-art techniques such as 2D bulge-disk-bar decompositions, Principal Component Analysis, and Monte Carlo Markov Chain modeling to analyze and quantify the characteristics of these galaxies and AGN hosts. \n\nOur results demonstrate that barred galaxies typically exhibit a central bulge, a bar, and a disk, with various degrees of prominence. Furthermore, we find that these galaxies' central regions can be modeled as elliptical bulges, which can be linked to the mass of the supermassive black hole at their center. As for AGN hosts, we observe a range of morphologies, with some displaying clear evidence of a bar and/or strong tidal interactions with nearby galaxies. \n\nOur study highlights the importance of decomposing the images of barred galaxies and AGN hosts to fully understand their underlying characteristics and structure. The results can have implications for our understanding of galaxy evolution and supermassive black hole growth. Overall, this paper contributes to the ongoing effort in studying galaxy morphology and provides insights into the formation and evolution of barred galaxies and AGN hosts.",
"We present broad-band X-ray spectroscopy of the energetic components that make up the supernova remnant (SNR) Kesteven 75 using concurrent 2017 Aug 17-20 XMM-Newton and NuSTAR observations, during which the pulsar PSR J1846-0258 is found to be in the quiescent state. The young remnant hosts a bright pulsar wind nebula powered by the highly-energetic (Edot = 8.1E36 erg/s) isolated, rotation-powered pulsar, with a spin-down age of only P/2Pdot ~ 728 yr. Its inferred magnetic field (Bs = 4.9E13 G) is the largest known for these objects, and is likely responsible for intervals of flare and burst activity, suggesting a transition between/to a magnetar state. The pulsed emission from PSR J1846-0258 is well-characterized in the 2-50 keV range by a power-law model with photon index Gamma_PSR = 1.24+/-0.09 and a 2-10 keV unabsorbed flux of (2.3+/-0.4)E-12 erg/s/cm^2). We find no evidence for an additional non-thermal component above 10 keV in the current state, as would be typical for a magnetar. Compared to the Chandra pulsar spectrum, the intrinsic pulsed fraction is 71+/-16% in 2-10 keV band. A power-law spectrum for the PWN yields Gamma_PWN = 2.03+/-0.03 in the 1-55 keV band, with no evidence of curvature in this range, and a 2-10 keV unabsorbed flux (2.13+/-0.02)E-11 erg/s/cm^2. The NuSTAR data reveal evidence for a hard X-ray component dominating the SNR spectrum above 10 keV which we attribute to a dust-scattered PWN component. We model the dynamical and radiative evolution of the Kes 75 system to estimate the birth properties of the neutron star, the energetics of its progenitor, and properties of the PWN. This suggests that the progenitor of Kes 75 was originally in a binary system which transferred most its mass to a companion before exploding.",
"The highly magnetized pulsar PSR J1846-0258, its wind nebula and hosting supernova remnant Kes 75 have been the subject of extensive X-ray spectroscopy studies in the past decade. The pulsar is located in the Galactic plane, about 18,000 light years from Earth. It has the highest surface magnetic field strength, about 100 times greater than typical neutron stars. The pulsar's emission, which is observed across the electromagnetic spectrum, is characterized by non-thermal radiation. Its surrounding wind nebula has been studied in detail, revealing the presence of a rapidly moving jet, as well as an X-ray bright spot at the base of the jet. \n\nThe supernova remnant Kes 75, which is associated with PSR J1846-0258, has also been a subject of X-ray spectroscopy studies. The remnant is non-spherical in shape and is believed to have originated from a Type Ia supernova explosion. The remnant's X-ray emission spectrum reveals the presence of a thermal and non-thermal component, indicating the presence of a shock-heated plasma and relativistic particles, respectively. The remnant also exhibits synchrotron radiation at radio wavelengths.\n\nRecent X-ray imaging and spectroscopy observations of Kes 75 have further revealed the complex interaction between the pulsar wind and the remnant's expanding shockwave. The observations show evidence of efficient particle acceleration, which is likely to play a key role in supernova remnants and the surrounding interstellar medium. These observations provide important insights into the complex physics of high-energy astrophysical phenomena.\n\nIn summary, the X-ray spectroscopy of PSR J1846-0258, its wind nebula and hosting supernova remnant Kes 75 have revealed a wealth of information about the physical properties and dynamics of these objects. Further X-ray observations and theoretical modeling will be needed to fully understand the complex interaction between the pulsar wind and the supernova remnant, and its implications for the broader field of high-energy astrophysics.",
"The first purpose of this paper is to point out a curious result announced by Macaulay on the Hilbert function of a differential module in his famous book The Algebraic Theory of Modular Systems published in 1916. Indeed, on page 78/79 of this book, Macaulay is saying the following: \" A polynomial ideal $\\mathfrak{a} \\subset k[{\\chi}\\_1$,..., ${\\chi}\\_n]=k[\\chi]$ is of the {\\it principal class} and thus {\\it unmixed} if it has rank $r$ and is generated by $r$ polynomials. Having in mind this definition, a primary ideal $\\mathfrak{q}$ with associated prime ideal $\\mathfrak{p} = rad(\\mathfrak{q})$ is such that any ideal $\\mathfrak{a}$ of the principal class with $\\mathfrak{a} \\subset \\mathfrak{q}$ determines a primary ideal of greater {\\it multiplicity} over $k$. In particular, we have $dim\\_k(k[\\chi]/({\\chi}\\_1$,...,${\\chi}\\_n)^2)=n+1$ because, passing to a system of PD equations for one unknown $y$, the parametric jets are \\{${y,y\\_1, ...,y\\_n}$\\} but any ideal $\\mathfrak{a}$ of the principal class with $\\mathfrak{a}\\subset ({\\chi}\\_1,{\\^a},{\\chi}\\_n)^2$ is contained into a {\\it simple} ideal, that is a primary ideal $\\mathfrak{q}$ such that $rad(\\mathfrak{q})=\\mathfrak{m}\\in max(k[\\chi])$ is a maximal and thus prime ideal with $dim\\_k(M)=dim\\_k(k[\\chi]/\\mathfrak{q})=2^n$ at least.\n\nAccordingly, any primary ideal $\\mathfrak{q}$ may not be a member of the primary decomposition of an unmixed ideal $\\mathfrak{a} \\subseteq \\mathfrak{q}$ of the principal class. Otherwise, $\\mathfrak{q}$ is said to be of the {\\it principal noetherian class} \". Our aim is to explain this result in a modern language and to illustrate it by providing a similar example for $n=4$. The importance of such an example is that it allows for the first time to exhibit symbols which are $2,3,4$-acyclic without being involutive. Another interest of this example is that it has properties quite similar to the ones held by the system of conformal Killing equations which are still not known. For this reason, we have put all the examples at the end of the paper and each one is presented in a rather independent way though a few among them are quite tricky.\n\nMeanwhile, the second purpose is to prove that the methods developped by Macaulay in order to study {\\it unmixed polynomial ideals} are only particular examples of new formal differential geometric techniques that have been introduced recently in order to study {\\it pure differential modules}. However these procedures are based on the formal theory of systems of ordinary differential (OD) or partial differential (PD) equations, in particular on a systematic use of the Spencer operator, and are still not acknowledged by the algebraic community.",
"This paper investigates the properties of pure differential modules and their relationship to unmixed polynomial ideals. We begin by defining pure differential modules and exploring their basic algebraic properties. We then present a result of Macaulay which characterizes unmixed polynomial ideals in terms of their associated graded rings. Specifically, Macaulay showed that an ideal is unmixed if and only if its associated graded ring is Cohen-Macaulay.\n\nUsing this result, we show that certain classes of pure differential modules are closely related to the Cohen-Macaulay property. In particular, we prove that a pure differential module is Cohen-Macaulay if and only if its associated graded ring is Cohen-Macaulay. This provides a powerful tool for studying pure differential modules and their associated polynomial ideals.\n\nWe then turn our attention to a specific example of an unmixed polynomial ideal: the ideal of minors of a matrix. We show that this ideal is Cohen-Macaulay, and use this fact to derive some interesting consequences regarding the geometry of the set of singular matrices. In particular, we show that the set of singular matrices is a union of Zariski closed subsets of strictly smaller dimension. This provides a new perspective on the geometry of matrix singularities, and opens up new avenues for research.\n\nFinally, we apply our results to the study of certain special classes of algebraic varieties, known as Schubert varieties. We show that the ideal of a Schubert variety is unmixed, and hence Cohen-Macaulay. This allows us to compute the Hilbert series of Schubert varieties in terms of certain combinatorial data, known as Schubert polynomials. We also derive some interesting consequences regarding the cohomology of Schubert varieties, showing that it can be expressed in terms of the cohomology of certain Schubert cells.\n\nIn summary, this paper provides a detailed study of pure differential modules and their relationship to unmixed polynomial ideals. Using a result of Macaulay, we show that certain classes of pure differential modules are closely related to the Cohen-Macaulay property. We then apply these results to the study of the geometry of singular matrices and the cohomology of Schubert varieties. This work provides a valuable contribution to the theory of algebraic geometry and opens up new avenues for research.",
"We perform a reflection study on a new observation of the neutron star low-mass X-ray binary Aquila X-1 taken with NuSTAR during the August 2016 outburst and compare with the July 2014 outburst. The source was captured at $\\sim32\\%\\ L_{\\mathrm{Edd}}$, which is over four times more luminous than the previous observation during the 2014 outburst. Both observations exhibit a broadened Fe line profile. Through reflection modeling, we determine that the inner disk is truncated $R_{in,\\ 2016}=11_{-1}^{+2}\\ R_{g}$ (where $R_{g}=GM/c^{2}$) and $R_{in,\\ 2014}=14\\pm2\\ R_{g}$ (errors quoted at the 90% confidence level). Fiducial neutron star parameters (M$_{NS}=1.4$ M$_{\\odot}$, $R_{NS}=10$ km) give a stellar radius of $R_{NS}=4.85\\ R_{g}$; our measurements rule out a disk extending to that radius at more than the $6\\sigma$ level of confidence. We are able to place an upper limit on the magnetic field strength of $B\\leq3.0-4.5\\times10^{9}$ G at the magnetic poles, assuming that the disk is truncated at the magnetospheric radius in each case. This is consistent with previous estimates of the magnetic field strength for Aquila X-1. However, if the magnetosphere is not responsible for truncating the disk prior to the neutron star surface, we estimate a boundary layer with a maximum extent of $R_{BL,\\ 2016}\\sim10\\ R_{g}$ and $R_{BL,\\ 2014}\\sim6\\ R_{g}$. Additionally, we compare the magnetic field strength inferred from the Fe line profile of Aquila X-1 and other neutron star low-mass X-ray binaries to known accreting millisecond X-ray pulsars.",
"In the study of neutron star low-mass X-ray binaries (LMXBs), the Aquila X-1 system is particularly intriguing because of the unusual truncation of its accretion disk. Previous studies have detailed a scenario wherein the accretion disk becomes truncated at a high state transition in the system, resulting in a sharp drop in X-ray luminosity. In this paper, we investigate the nature of the truncation behavior using a combination of observational and theoretical analyses. Our results show that the disk truncation in Aquila X-1 occurs at one-third of the Eddington limit, a finding which has significant implications for our understanding of accretion physics in LMXBs. We analyze the X-ray spectral and timing properties of the system during its high state transition, finding evidence for stable accretion disk truncation and reduced disk emission. Utilizing a simple model of the accretion disk structure, we show that the observed truncation behavior can be explained through the sustained cooling of the disk. Our analysis suggests that the truncation behavior is not anomalous, as previously thought, but rather a natural consequence of the accretion physics of LMXBs. This result provides important insight into the accretion physics of neutron star LMXBs and serves as an important stepping stone for future theoretical studies. In conclusion, our study provides new observational evidence for the truncation of accretion disks in LMXBs and highlights the need for further research into these intriguing systems.",
"It has been observed in [Park 2014] that the physical states of the ADM formulation of 4D Einstein gravity holographically reduce and can be described by a 3D language. Obviously the approach poses the 4D covariance issue; it turns out that there are two covariance issues whose address is the main theme of the present work. Although the unphysical character of the trace piece of the fluctuation metric has been long known, it has not been taken care of in a manner suitable for the Feynman diagram computations; a proper handling of the trace piece through gauge-fixing is the key to more subtler of the covariance issues. As for the second covariance issue, a renormalization program can be carried out covariantly to any loop order at intermediate steps, thereby maintaining the 4D covariance; it is only at the final stage that one should consider the 3D physical external states. With the physical external states, the 1PI effective action reduces to 3D and renormalizability is restored just as in the entirely-3D approach of [Park 2014]. We revisit the one-loop two-point renormalization with careful attention to the trace piece of the fluctuation metric and in particular outline one-loop renormalization of the Newton's constant.",
"In this paper, we investigate the 4D covariance of holographic quantization of Einstein gravity. Specifically, we explore the correspondence between the anti-de Sitter (AdS) supergravity and the boundary conformal field theory (CFT) under the holographic principle. Utilizing the holographic renormalization technique, we show that the AdS supergravity admits a consistent holographic description in terms of the CFT data. This correspondence is tested for a 4D pure gravity theory in AdS space, where the boundary is a 3D conformal field theory. We find that the regulated on-shell AdS gravitational action matches with a certain functional of the boundary metric, which satisfies the CFT Ward identities. In doing so, we establish the 4D covariance of the holographic quantization of Einstein gravity. Our results suggest a deeper understanding of the geometric and algebraic structures underlying the holographic duality. Furthermore, we discuss the potential implications of our findings on the study of black holes and the AdS/CFT correspondence in general.",
"We construct a two dimensional Cellular Automata based model for the description of pedestrian dynamics. Wide range of complicated pattern formation phenomena in pedestrian dynamics are described in the model, e.g. lane formation, jams in a counterflow and egress behavior. Mean-field solution of the densely populated case and numerical solution of the sparsely populated case are provided. This model has the potential to describe more flow phenomena.",
"This research paper proposes a cellular automata-based model for simulating pedestrian dynamics. The model takes into account collective behavior and individual characteristics of pedestrians to simulate realistic pedestrian movements. The model is developed through an iterative process of calibration and validation against empirical data. The results show that the proposed model is capable of reproducing a range of pedestrian phenomena, including pedestrian flow, lane formation, and density distributions. The model provides a useful tool for predicting pedestrian behavior in various scenarios.",
"A search for a heavy Standard Model Higgs boson decaying via H->ZZ->llqq, where l=e,mu, is presented. The search is performed using a data set of pp collisions at sqrt(s)=7 TeV, corresponding to an integrated luminosity of 1.04 fb^-1 collected in 2011 by the ATLAS detector at the CERN LHC collider. No significant excess of events above the estimated background is found. Upper limits at 95% confidence level on the production cross section (relative to that expected from the Standard Model) of a Higgs boson with a mass in the range between 200 and 600 GeV are derived. Within this mass range, there is at present insufficient sensitivity to exclude a Standard Model Higgs boson. For a Higgs boson with a mass of 360 GeV, where the sensitivity is maximal, the observed and expected cross section upper limits are factors of 1.7 and 2.7, respectively, larger than the Standard Model prediction.",
"This paper presents the results of a search for a heavy Standard Model Higgs boson in the channel H->ZZ->llqq using the ATLAS detector. The search was conducted at the Large Hadron Collider, where proton-proton collisions were studied at a center-of-mass energy of 13 TeV. The data collected correspond to an integrated luminosity of 36.1 fb^-1. No significant excess over the background expectation was observed in the analyzed data, which leads to the establishment of upper limits on the production cross section times branching ratio for this process. The analysis is based on the reconstruction and identification of leptons and jets produced in the final state of the selected events. This study enhances our understanding of the Higgs mechanism and provides clues for exploring beyond the Standard Model physics phenomena.",
"Current operating systems are complex systems that were designed before today's computing environments. This makes it difficult for them to meet the scalability, heterogeneity, availability, and security challenges in current cloud and parallel computing environments. To address these problems, we propose a radically new OS design based on data-centric architecture: all operating system state should be represented uniformly as database tables, and operations on this state should be made via queries from otherwise stateless tasks. This design makes it easy to scale and evolve the OS without whole-system refactoring, inspect and debug system state, upgrade components without downtime, manage decisions using machine learning, and implement sophisticated security features. We discuss how a database OS (DBOS) can improve the programmability and performance of many of today's most important applications and propose a plan for the development of a DBOS proof of concept.",
"DBOS is a proposed operating system that aims to enhance data-centric computing by providing a seamless experience for data management. The system operates by persistently managing data as a first-class citizen, centralizing it as a core resource. This approach allows for applications to easily interact with large and complex data sets, enabling developers to focus on business logic rather than data management. Moreover, DBOS seamlessly integrates with existing file systems on a computer, making it easier for end-users to access their data. The system also supports a highly modular architecture, allowing it to be extended with new data management capabilities as needed. In summary, DBOS prioritizes providing a powerful data management experience and enables developers to create data-rich applications without needing to worry about the complexities of data storage and access.",
"Introduction: Individuals with chronic musculoskeletal pain show impairments in their pain-modulatory capacity. Stress-induced analgesia (SIA) is a paradigm of endogenous pain inhibition mainly tested in animals. It has not been tested in patients with chronic pain despite the important role of stress in pain modulation and the chronicity process. Methods: SIA was tested in 22 patients with chronic musculoskeletal pain and 18 healthy participants matched for age and gender. Pain thresholds, pain tolerance and suprathreshold pain sensitivity were examined before and after a cognitive stressor. Additionally, chronic stress levels, pain catastrophizing and pain characteristics were assessed as potential modulating factors. Results: Patients with chronic musculoskeletal pain compared to healthy controls showed significantly impaired SIA (F(1,37)=5.63, p=.02) for pain thresholds, but not pain tolerance (F(1,37)=0.05, p=.83) and stress-induced hyperalgesia (SIH) to suprathreshold pain ratings (F(1,37)=7.76, p=.008). Patients (r(22)=-0.50, p=.05) but not controls (r(18)=-0.39, p=.13) with high catastrophizing had low SIA as assessed by pain thresholds. In controls suprathreshold pain ratings were significantly positively correlated with catastrophizing (r(18)=0.57, p=.03) and life-time stress exposure (r(18)=0.54, p=.03). In patients neither catastrophizing (r(22)=0.21, p=.34) nor stress exposure (r(22)=0.34, p=.34) were associated with suprathreshold SIH. Discussion: Our data suggest impairments of SIA and SIH in patients with chronic musculoskeletal pain. Catastrophizing was associated with deficient SIA in the patients and higher pain ratings in controls. High life time stress also increased pain ratings in the controls.",
"Chronic musculoskeletal pain is a common affliction that can have debilitating effects on daily life. Stress-induced analgesia (SIA) has been shown to be a potential modulator of pain in both healthy individuals and those with chronic pain conditions. The purpose of this study was to investigate the presence of SIA in patients with chronic musculoskeletal pain and healthy controls.\n\nParticipants were recruited and assigned to either a stress or control group. The stress group underwent a cold pressor test and their pain thresholds were measured before and after the test. The control group underwent a non-stressful task and their pain thresholds were measured in the same manner. Results showed a significant increase in pain threshold in the stress group, but not in the control group, indicating the presence of SIA in patients with chronic musculoskeletal pain.\n\nThese findings suggest that stress-induced analgesia may play a role in modulating pain in patients with chronic musculoskeletal pain. Future studies should investigate the underlying mechanisms of SIA and explore the potential therapeutic applications of this phenomenon in chronic pain conditions. Overall, these results contribute to a better understanding of the complex interplay between stress and pain perception, and may inform the development of novel pain management strategies.",
"We study certain physically-relevant subgeometries of binary symplectic polar spaces $W(2N-1,2)$ of small rank $N$, when the points of these spaces canonically encode $N$-qubit observables. Key characteristics of a subspace of such a space $W(2N-1,2)$ are: the number of its negative lines, the distribution of types of observables, the character of the geometric hyperplane the subspace shares with the distinguished (non-singular) quadric of $W(2N-1,2)$ and the structure of its Veldkamp space. In particular, we classify and count polar subspaces of $W(2N-1,2)$ whose rank is $N-1$. $W(3,2)$ features three negative lines of the same type and its $W(1,2)$'s are of five different types. $W(5,2)$ is endowed with 90 negative lines of two types and its $W(3,2)$'s split into 13 types. 279 out of 480 $W(3,2)$'s with three negative lines are composite, i.\\,e. they all originate from the two-qubit $W(3,2)$.\n\nGiven a three-qubit $W(3,2)$ and any of its geometric hyperplanes, there are three other $W(3,2)$'s possessing the same hyperplane. The same holds if a geometric hyperplane is replaced by a `planar' tricentric triad. A hyperbolic quadric of $W(5,2)$ is found to host particular sets of seven $W(3,2)$'s, each of them being uniquely tied to a Conwell heptad with respect to the quadric.\n\nThere is also a particular type of $W(3,2)$'s, a representative of which features a point each line through which is negative. Finally, $W(7,2)$ is found to possess 1908 negative lines of five types and its $W(5,2)$'s fall into as many as 29 types. 1524 out of 1560 $W(5,2)$'s with 90 negative lines originate from the three-qubit $W(5,2)$. Remarkably, the difference in the number of negative lines for any two distinct types of four-qubit $W(5,2)$'s is a multiple of four.",
"This paper presents a taxonomy of polar subspaces of multi-qubit symplectic polar spaces of small rank. Symplectic polar spaces are geometries that exhibit a rich interplay between generalized quadrangles, classical groups and their Kac-Moody analogues. These spaces have been the subject of intense study due to their many applications in areas such as quantum information theory, coding theory and finite geometries. A multi-qubit symplectic polar space is a symplectic polar space that arises from a set of qubits in quantum information theory. \n\nThe taxonomy presented in this paper characterizes polar subspaces of multi-qubit symplectic polar spaces using a combination of techniques from algebraic geometry, representation theory and linear algebra. We classify polar subspaces into several families, each of which is characterized by a sequence of invariants. These invariants include the dimension of the polar subspace, the number of qubits involved, the rank of the subspaces, and other quantities related to the symplectic geometry of the spaces. \n\nWe apply our classification to a number of examples and provide explicit constructions of polar subspaces for each of the families. In particular, we show that there are families of polar subspaces that exhibit interesting geometric properties such as maximal isotropy and strict maximality. We also investigate the relationship between our classification and the classification of polar spaces in the classical setting, and show that there are close connections between the two classifications. \n\nThe results presented in this paper have important implications for quantum computing and cryptography. Our taxonomy of polar subspaces provides a framework for the construction of quantum error-correcting codes and other quantum information processing protocols. Moreover, the techniques developed in this paper can be applied to other areas of mathematics, such as the study of algebraic groups and their representations. Overall, this work is a significant contribution to the theory of multi-qubit symplectic polar spaces and has wide-ranging implications for the study of quantum information theory and related areas.",
"We construct a mapping of Bell and bipartite Leggett-Garg experiments for microscopic qubits onto a gedanken experiment for macroscopic qubits based on two macroscopically distinct coherent states. This provides an unusual situation where the dichotomic measurements (and associated hidden variables) involved in the Bell tests need only discriminate between two macroscopically distinct states of a system i.e. correspond to coarse-grained measurements that do not specify values to a level of precision of order $\\sim\\hbar$. Violations of macro-realism and macroscopic local realism are predicted. We show how one may obtain consistency with a weak form of macroscopic realism (wMR): that for a system prepared in a superposition of macroscopically distinct pointer eigenstates, the outcome of the coarse-grained pointer measurement $\\hat{M}$ is predetermined. Macroscopic realism does not however hold in a deterministic fashion, where one assumes the predetermination of outcomes prior to the unitary rotations that define the choice of measurement setting in the Bell experiment. We illustrate an analogy with the Einstein-Podolsky-Rosen (EPR) argument, showing how wMR can be regarded as inconsistent with the completeness of quantum mechanics.",
"The debate over macroscopic realism has been ongoing for many years. In this paper, we explore the dichotomy between weak and deterministic macroscopic realism. Weak macroscopic realism suggests that there is a fundamental limit to our ability to know the exact state of a macroscopic system, even in principle. Deterministic macroscopic realism, on the other hand, posits that there is a definite, knowable state for all macroscopic systems at all times. We examine the philosophical and scientific underpinnings of these two viewpoints and their implications for our understanding of the physical world. We discuss the role of quantum mechanics in the debate, as well as various experimental results that have been used to support one view or the other. Ultimately, we conclude that neither view is inherently more correct than the other, and that the nature of macroscopic reality may be more complex than either weak or deterministic macroscopic realism can fully capture.",
"Autism spectrum disorder is a neurodevelopmental condition that includes issues with communication and social interactions. People with ASD also often have restricted interests and repetitive behaviors. In this paper we build preliminary bricks of an automated gesture imitation game that will aim at improving social interactions with teenagers with ASD. The structure of the game is presented, as well as support tools and methods for skeleton detection and imitation learning. The game shall later be implemented using an interactive robot.",
"This paper presents the development of an automated gesture imitation game for teenagers with Autism Spectrum Disorder (ASD). The game aims to target social skills training and provide individuals with ASD a fun way to improve their nonverbal communication abilities. An initial usability study has been conducted with a small group of participants and preliminary findings show promising results for the game's potential to engage users. Further research is needed to evaluate the efficacy of this tool as a means of supporting social skills development in individuals with ASD."
],
"desc_act": false,
"exllama_config": {
"version": 1
},
"group_size": 128,
"max_input_length": null,
"model_seqlen": null,
"module_name_preceding_first_block": null,
"modules_in_block_to_quantize": null,
"pad_token_id": null,
"quant_method": "gptq",
"sym": true,
"tokenizer": null,
"true_sequential": true,
"use_cuda_fp16": false,
"use_exllama": false
},
"skip_bias_add": true,
"skip_bias_add_qkv": false,
"slow_but_exact": false,
"torch_dtype": "float16",
"transformers_version": "4.37.2",
"unk_token_id": 0,
"use_cache": true,
"vocab_size": 250880
}
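
Usage note (not part of the config itself): a minimal sketch of how a checkpoint carrying the GPTQ quantization_config above (quant_method "gptq", 4-bit weights, group_size 128, exllama disabled) could be loaded and run with transformers. The repository id below is a hypothetical placeholder, and the optimum, auto-gptq and accelerate packages are assumed to be installed.

# Minimal loading sketch; all names below marked "placeholder" are assumptions, not part of the config.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "your-namespace/bloom-3b-gptq"  # placeholder; point this at the actual quantized repository
tokenizer = AutoTokenizer.from_pretrained(model_id)
# from_pretrained reads the quantization_config embedded in config.json and loads the 4-bit weights
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

prompt = "Automatic synthesis of faces from visual attributes"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=30)[0], skip_special_tokens=True))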