\begin{document} \title{Tilting modules for Lie superalgebras} \author{\sc Jonathan Brundan} \address {Department of Mathematics\\ University of Oregon\\ Eugene\\ OR~97403, USA} \email{[email protected]} \thanks{Work partially supported by the NSF (grant no. DMS-0139019).} \maketitle \section{Introduction} The notion of a tilting module first emerged in Lie theory in the 1980s, see for instance \cite{CI} where Collingwood and Irving classified the {self-dual modules with a Verma flag} in category $\mathcal O$ for a semisimple Lie algebra, generalizing earlier work of Enright and Shelton \cite{ES}. Similar looking objects were also considered by Donkin \cite{Do1} in the representation theory of reductive algebraic groups in positive characteristic. The terminology ``tilting module'' comes instead from the representation theory of finite dimensional algebras, via an article of Ringel \cite{Ri} which gives an elegant construction of tilting modules in the setting of quasi-hereditary algebras \cite{CPS, DR}. Ringel's argument was subsequently adapted to algebraic groups by Donkin \cite{Do2} and to Lie algebras by Soergel \cite{so2}. The goal of the present article is to extend Soergel's framework to Lie superalgebras. Our interest in doing this arose from the papers \cite{B1,B2} in which we conjectured that the coefficients of certain canonical bases should compute multiplicities in ${\Delta}$-flags of indecomposable tilting modules over the Lie superalgebras $\mathfrak{gl}(m|n)$ and $\mathfrak{q}(n)$ respectively. Thus the present article should be viewed as a companion to \cite{B1,B2}, since we provide the general theory needed to construct the tilting modules in the first place. We stress that the development here is very similar to Soergel's work: most of the proofs carry over unchanged to the Lie superalgebra setting. 
As in \cite{so2}, we have also included in the first few sections some other well-known generalities, most of which have their origins in the classic work of Bernstein, Gelfand and Gelfand \cite{BGG}. The main result of the article is best understood from Corollary~\ref{maineq2}, which roughly speaking gives a duality between indecomposable projective and indecomposable tilting modules. The proof of this involves the construction of the ``semi-regular bimodule'', see Lemma~\ref{icky}. At the end of the article, we have given several examples involving the Lie superalgebras $\mathfrak{gl}(m|n)$ and $\mathfrak{q}(n)$ to illustrate the usefulness of the theory. The results may also prove useful in studying the representation theory of the other classical Lie superalgebras and affine Lie superalgebras. \noindent {\em Notation.} Throughout the article, we will work over the ground field ${\mathbb C}$. Suppose $V = \bigoplus_{d \in {\mathbb Z}} V_d = \bigoplus_{d \in {\mathbb Z}} V_{d,{\bar 0}} \oplus V_{d,{\bar 1}}$ is a {\em graded vector superspace}, i.e. a ${\mathbb Z} \times {\mathbb Z}_2$-graded vector space. To avoid confusion between the two different gradings, we use the word {\em degree} to refer to the ${\mathbb Z}$-grading, and {\em parity} to refer to the ${\mathbb Z}_2$-grading. Write $\deg(v) \in {\mathbb Z}$ (resp. $\bar v \in {\mathbb Z}_2$) for the degree (resp. the parity) of a homogeneous vector. Given two graded vector superspaces $V, W$, ${\operatorname{\bf{Hom}}}_{{\mathbb C}}(V, W)$ denotes the graded vector superspace with $$ {\operatorname{\bf{Hom}}}_{{\mathbb C}}(V, W)_{d, p} = \{f:V \rightarrow W\:|\:f(V_{d',p'}) \subseteq W_{d+d',p+p'}\hbox{ for all }(d',p')\in{\mathbb Z}\times{\mathbb Z}_2\} $$ for each $(d,p)\in{\mathbb Z}\times{\mathbb Z}_2$. \section{Graded category $\mathcal O$} For basic notions regarding Lie superalgebras, see \cite{Kac}. 
Let us recall in particular that for a Lie superalgebra $\mathfrak g = \mathfrak g_{{\bar 0}}\oplus\mathfrak g_{{\bar 1}}$ and $\mathfrak g$-supermodules $M, N$, a {homomorphism} $f:M \rightarrow N$ means a (not necessarily even) linear map such that $f(X m) = (-1)^{\bar f \bar X} X f(m)$ for all $X \in \mathfrak g, m\in M$. This formula needs to be interpreted additively in the case that $f, X$ are not homogeneous! We will use the notation $M \simeq N$ as opposed to the usual $M \cong N$ to indicate that there is an {\em even} isomorphism between $M$ and $N$. The category of all $\mathfrak g$-supermodules is not an abelian category, but the {\em underlying even category} consisting of the same objects and only even morphisms is abelian. This, and the existence of the parity change functor $\Pi$, allows us to appeal to all the usual notions of homological algebra. Similar remarks apply to the various other categories of $\mathfrak g$-supermodules that we shall meet. We will be concerned here instead with a {graded} Lie superalgebra, i.e. a Lie superalgebra $\mathfrak g$ with an additional ${\mathbb Z}$-grading $\mathfrak g = \bigoplus_{d \in {\mathbb Z}} \mathfrak g_d = \bigoplus_{d \in {\mathbb Z}} \mathfrak g_{d,{\bar 0}}\oplus \mathfrak g_{d,{\bar 1}}$ such that $[\mathfrak g_d, \mathfrak g_e] \subseteq \mathfrak g_{d+e}$ for all $d,e \in {\mathbb Z}$. A {\em graded} $\mathfrak g$-supermodule means a $\mathfrak g$-supermodule $M$ with an additional ${\mathbb Z}$-grading $M = \bigoplus_{d \in {\mathbb Z}} M_d = \bigoplus_{d \in {\mathbb Z}} M_{d,{\bar 0}}\oplus M_{d,{\bar 1}}$ such that $\mathfrak g_d M_e \subseteq M_{d+e}$ for all $d,e \in {\mathbb Z}$. Homomorphisms $f:M \rightarrow N$ between graded $\mathfrak g$-supermodules are always assumed to satisfy $f(M_d) \subseteq N_d$ for each $d \in {\mathbb Z}$. Assume from now on that we are given a graded Lie superalgebra $\mathfrak g$. 
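The sign convention in the definition of a homomorphism is exactly what is needed for composites to behave well. As a routine check, recorded here for the reader's convenience: if $f:M \rightarrow N$ and $g:N \rightarrow P$ are homogeneous homomorphisms, then $g \circ f$ is again a homomorphism, of parity $\bar f + \bar g$, since

```latex
\begin{align*}
(g \circ f)(Xm) &= (-1)^{\bar f \bar X}\, g(X f(m))
 = (-1)^{\bar f \bar X + \bar g \bar X}\, X\, g(f(m))\\
&= (-1)^{(\bar f + \bar g)\bar X}\, X\, (g \circ f)(m)
\end{align*}
```

for all homogeneous $X \in \mathfrak g$ and $m \in M$.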
Let $\mathfrak h = \mathfrak g_0, \mathfrak b = \mathfrak g_{\geq 0} = \bigoplus_{d \geq 0} \mathfrak g_d$, and $\mathfrak n = \mathfrak g_{< 0} = \bigoplus_{d < 0} \mathfrak g_d$. We write $U(\mathfrak g), U(\mathfrak b)$ and $U(\mathfrak n)$ for the corresponding universal enveloping superalgebras, all of which inherit a ${\mathbb Z}$-grading from $\mathfrak g$. We assume: \begin{itemize} \item[(A1)] $\dim \mathfrak g_d < \infty$ for each $d \in {\mathbb Z}$; \item[(A2)] $\mathfrak h_{{\bar 0}}$ is a reductive Lie algebra. \end{itemize} Fix in addition a maximal toral subalgebra $\mathfrak t$ of $\mathfrak h_{{\bar 0}}$ and an abelian subgroup $X$ of $\mathfrak t^*$. By an {\em admissible} representation of $\mathfrak h_{{\bar 0}}$, we mean a locally finite dimensional $\mathfrak h_{{\bar 0}}$-supermodule such that $M = \bigoplus_{{\lambda} \in X} M_{\lambda}$, where $$ M_{\lambda} = \{m \in M\:|\:tm={\lambda}(t)m\hbox{ for all }t \in \mathfrak t\}. $$ More generally, for any graded subalgebra $\mathfrak m$ of $\mathfrak g$ containing $\mathfrak h_{{\bar 0}}$, we will say that an $\mathfrak m$-supermodule is admissible if it is admissible on restriction to $\mathfrak h_{{\bar 0}}$. We must also assume: \begin{itemize} \item[(A3)] the adjoint representation $\mathfrak g$ is admissible. \end{itemize} For any graded subalgebra $\mathfrak m$ of $\mathfrak g$ containing $\mathfrak h_{{\bar 0}}$, let $\mathcal C_{\mathfrak m}$ denote the category of all admissible graded $\mathfrak m$-supermodules. Finally let $\mathcal O$ be the category of all admissible graded $\mathfrak g$-supermodules that are locally finite dimensional over $\mathfrak b$. This is a graded analogue of the category $\mathcal O$ of \cite{BGG}. \begin{Lemma}\label{enuf} Category $\mathcal O$ and all the categories $\mathcal C_{\mathfrak m}$ have enough injectives. \end{Lemma} \begin{proof} We explain the argument for $\mathcal O$; the same argument works for each $\mathcal C_{\mathfrak m}$. 
Let $\operatorname{Fin}$ be the functor from the category of all graded $\mathfrak g$-supermodules to $\mathcal O$ sending an object to its largest graded submodule belonging to $\mathcal O$. This is right adjoint to an exact functor, so sends injectives to injectives. Moreover, the category of all graded $\mathfrak g$-supermodules has enough injectives since it is isomorphic to the category of graded supermodules over the universal enveloping superalgebra $U(\mathfrak g)$. Now given any $M \in \mathcal O$, we embed $M$ into an injective graded $\mathfrak g$-supermodule, then apply the functor $\operatorname{Fin}$. \end{proof} In view of Lemma~\ref{enuf}, we can compute ${\operatorname{Ext}}^i(M,N)$ in category $\mathcal O$ or any of the categories $\mathcal C_{\mathfrak m}$ using an injective resolution of $N$. In the sequel, we are often going to make use of the functors $U(\mathfrak g) \otimes_{U(\mathfrak b)} ?$ and ${\operatorname{\bf{Hom}}}_{\mathfrak g_{\leq 0}}(U(\mathfrak g), ?)$. In the latter case, for a graded $\mathfrak g_{\leq 0}$-supermodule $M$, ${\operatorname{\bf{Hom}}}_{\mathfrak g_{\leq 0}}(U(\mathfrak g), M)$ is viewed as a graded $\mathfrak g$-supermodule with action $(u f)(u') = (-1)^{\bar u \bar f + \bar u \bar u'} f(u'u)$, for $u, u' \in U(\mathfrak g), f:U(\mathfrak g) \rightarrow M$. The next lemma is a consequence of the PBW theorem. \begin{Lemma} \label{for} For graded $\mathfrak b$-, $\mathfrak g_{\leq 0}$- and $\mathfrak h$-supermodules $L, M$ and $N$, \begin{align*} U(\mathfrak g) \otimes_{U(\mathfrak b)} L&\simeq U(\mathfrak g_{\leq 0}) \otimes_{U(\mathfrak h)}L,\\ U(\mathfrak g_{\leq 0}) \otimes_{U(\mathfrak h)} N&\simeq S(\mathfrak n)\otimes N,\\\intertext{as graded $\mathfrak g_{\leq 0}$- resp. 
$\mathfrak h$-supermodules, and} {\operatorname{\bf{Hom}}}_{\mathfrak g_{\leq 0}}(U(\mathfrak g), M)&\simeq {\operatorname{\bf{Hom}}}_{\mathfrak h}(U(\mathfrak b), M)\\ {\operatorname{\bf{Hom}}}_{\mathfrak h}(U(\mathfrak b), N)&\simeq {\operatorname{\bf{Hom}}}_{{\mathbb C}}(S(\mathfrak g_{> 0}), N) \end{align*} as graded $\mathfrak b$- resp. $\mathfrak h$-supermodules. (Here $S(\mathfrak n), S(\mathfrak g_{> 0})$ denote the symmetric superalgebras viewed as modules via ${\operatorname{ad}}$). \end{Lemma} Applying the lemma and (A3), $U(\mathfrak g) \otimes_{U(\mathfrak b)} ?$ (resp. $U(\mathfrak g_{\leq 0}) \otimes_{U(\mathfrak h)} ?$) is an exact functor from $\mathcal C_{\mathfrak b}$ to $\mathcal C_{\mathfrak g}$ (resp. from $\mathcal C_{\mathfrak h}$ to $\mathcal C_{\mathfrak g_{\leq 0}}$), which is obviously left adjoint to the natural restriction functor. Similarly, ${\operatorname{\bf{Hom}}}_{\mathfrak g_{\leq 0}}(U(\mathfrak g), ?)$ is an exact functor from $\mathcal C_{\mathfrak g_{\leq 0}}$ to $\mathcal C_{\mathfrak g}$ that is right adjoint to restriction. \begin{Lemma}\label{gfr} For $i \geq 0$, $L \in \mathcal C_{\mathfrak g}$, $M \in \mathcal C_{\mathfrak h}$ and $N \in \mathcal C_{\mathfrak g_{\leq 0}}$, we have that \begin{align*} {\operatorname{Ext}}^i_{\mathcal C_{\mathfrak g}}(L, {\operatorname{\bf{Hom}}}_{\mathfrak g_{\leq 0}}(U(\mathfrak g), N)) &\simeq {\operatorname{Ext}}^i_{\mathcal C_{\mathfrak g_{\leq 0}}}(L, N),\\ {\operatorname{Ext}}^i_{\mathcal C_{\mathfrak g_{\leq 0}}}(U(\mathfrak g_{\leq 0}) \otimes_{U(\mathfrak h)} M, N) &\simeq {\operatorname{Ext}}^i_{\mathcal C_{\mathfrak h}}(M, N). \end{align*} \end{Lemma} \begin{proof} Argue by induction on $i$ using the long exact sequence. \end{proof} \section{Standard and costandard modules} Let $\Lambda$ be a complete set of pairwise non-isomorphic irreducible admissible graded $\mathfrak h$-supermodules. 
Each $E \in \Lambda$ is necessarily concentrated in a single degree, denoted $|E| \in {\mathbb Z}$. Moreover, by the superalgebra analogue of Schur's lemma, the number \begin{equation} d_E := \dim {\operatorname{End}}_{\mathcal C_{\mathfrak h}}(E) \end{equation} is either $1$ or $2$. \begin{Lemma}\label{tobegin} Every $E \in \Lambda$ has a finite dimensional projective cover $\widehat E$ in $\mathcal C_{\mathfrak h}$, with ${\operatorname{cosoc}}_{\mathfrak h} \widehat E \simeq E$. Moreover, given objects $M, P \in \mathcal C_{\mathfrak h}$ with $P$ projective, $M \otimes P$ is also projective. \end{Lemma} \begin{proof} For a graded $\mathfrak h_{{\bar 0}}$-supermodule $M$, we observe that \begin{equation*} U(\mathfrak h) \otimes_{U(\mathfrak h_{{\bar 0}})} M \simeq S(\mathfrak h_{{\bar 1}}) \otimes M \end{equation*} as graded $\mathfrak h_{{\bar 0}}$-supermodules. Combining this with (A3) shows that the functor $U(\mathfrak h) \otimes_{U(\mathfrak h_{{\bar 0}})} ?$ maps $\mathcal C_{\mathfrak h_{{\bar 0}}}$ to $\mathcal C_{\mathfrak h}$. Since it is left adjoint to an exact functor, it maps projectives to projectives. By (A2) and Weyl's theorem on complete reducibility, every object in $\mathcal C_{\mathfrak h_{{\bar 0}}}$ is projective. Now take $E \in \Lambda$. Let $\widehat E$ be any indecomposable summand of $U(\mathfrak h) \otimes_{U(\mathfrak h_{{\bar 0}})} E$ that maps surjectively onto $E$ under the natural multiplication map. By the preceding paragraph, $\widehat E$ is a finite dimensional indecomposable projective object in $\mathcal C_{\mathfrak h}$ mapping surjectively onto $E$. Now the usual arguments via Fitting's lemma show that $\widehat E$ is actually a projective cover of $E$ in the category $\mathcal C_{\mathfrak h}$ and that ${\operatorname{cosoc}}_{\mathfrak h} \widehat{E} \simeq E$. Finally let $P \in \mathcal C_{\mathfrak h}$ be an arbitrary projective object. 
Then, we can find $Q \in \mathcal C_{\mathfrak h}$ and $R \in \mathcal C_{\mathfrak h_{\bar 0}}$ such that $P \oplus Q \cong U(\mathfrak h) \otimes_{U(\mathfrak h_{{\bar 0}})} R$. By the tensor identity, $(P \oplus Q) \otimes M \cong U(\mathfrak h) \otimes_{U(\mathfrak h_{{\bar 0}})} (R \otimes M)$. The latter is projective and $P \otimes M$ is isomorphic to a summand of it, so $P \otimes M$ is projective too. \end{proof} Define the {\em standard} and {\em costandard $\mathfrak g$-supermodules} corresponding to $E\in \Lambda$: \begin{equation} \Delta(E) := U(\mathfrak g) \otimes_{U(\mathfrak b)} \widehat{E}, \qquad \nabla(E) := {\operatorname{\bf{Hom}}}_{\mathfrak g_{\leq 0}}(U(\mathfrak g), E). \end{equation} By Lemma~\ref{for}, both $\Delta(E)$ and $\nabla(E)$ are admissible, and clearly they are locally finite dimensional over $\mathfrak b$, hence they belong to $\mathcal O$. Indeed, letting $\mathcal O_{\leq d}$ denote the full subcategory of $\mathcal O$ consisting of all objects that are zero in degrees $> d$, both $\Delta(E)$ and $\nabla(E)$ belong to $\mathcal O_{\leq |E|}$, with $\Delta(E)_{|E|} \simeq \widehat{E}$, $\nabla(E)_{|E|} \simeq E$. We define \begin{equation} L(E) := {\operatorname{cosoc}}_{\mathfrak g} \Delta(E) \end{equation} for each $E \in \Lambda$. The following well-known lemma shows in particular that these are irreducible. \begin{Lemma}\label{class} The $\{L(E)\}_{E \in \Lambda}$ form a complete set of pairwise non-isomorphic irreducibles in $\mathcal O$. Moreover, $L(E) \simeq \operatorname{soc}_{\mathfrak g} \nabla(E)$. \end{Lemma} \begin{proof} Over $\mathfrak g_{\leq 0}$, $\Delta(E) \simeq U(\mathfrak g_{\leq 0}) \otimes_{U(\mathfrak h)} \widehat{E}$, hence ${\operatorname{cosoc}}_{\mathfrak g_{\leq 0}} \Delta(E) \simeq {\operatorname{cosoc}}_{\mathfrak h} \widehat{E} \simeq E$. This immediately implies that $L(E)$ is irreducible in $\mathcal O$ and ${\operatorname{cosoc}}_{\mathfrak g_{\leq 0}} L(E) \simeq E$. 
Hence the $\{L(E)\}_{E \in \Lambda}$ are pairwise non-isomorphic irreducibles. Now take any irreducible $M \in \mathcal O$. There exists a non-zero $\mathfrak b$-homomorphism $E \rightarrow M$ for some $E \in \Lambda$. This induces by Frobenius reciprocity a non-zero $\mathfrak g$-homomorphism $\Delta(E) \rightarrow M$, hence $M \cong L(E)$. The same argument shows that ${\operatorname{soc}}_{\mathfrak b} L(E) \simeq E$. Finally, over $\mathfrak b$, $\nabla(E) \simeq {\operatorname{\bf{Hom}}}_{\mathfrak h}(U(\mathfrak b), E)$, so ${\operatorname{soc}}_{\mathfrak b} \nabla(E) \simeq E$. Hence ${\operatorname{soc}}_{\mathfrak g} \nabla(E) \simeq L(E)$ too. \end{proof} \begin{Lemma}\label{sl1} Let $E, F \in \Lambda$. \begin{itemize} \item[(i)] $\Delta(E)$ is the projective cover of $L(E)$ in $\mathcal O_{\leq |E|}$. \item[(ii)] $\dim {\operatorname{Hom}}_{\mathcal O}(\Delta(E), \nabla(F)) = 0$ if $E \neq F$, $d_E$ if $E = F$. \item[(iii)] ${\operatorname{Ext}}^1_{\mathcal O}(\Delta(E), \nabla(F)) = 0$. \end{itemize} \end{Lemma} \begin{proof} For (i), take $M \in \mathcal O_{\leq |E|}$. We have the following sequence of isomorphisms natural in $M$: $$ {\operatorname{Hom}}_{\mathcal O_{\leq |E|}}(\Delta(E), M) \simeq {\operatorname{Hom}}_{\mathcal C_{\mathfrak g}}(\Delta(E), M) \simeq {\operatorname{Hom}}_{\mathcal C_{\mathfrak b}}(\widehat{E}, M) \simeq {\operatorname{Hom}}_{\mathcal C_{\mathfrak h}}(\widehat{E}, M). $$ Since $\widehat{E}$ is projective in $\mathcal C_{\mathfrak h}$, this shows that $\Delta(E)$ is projective in $\mathcal O_{\leq |E|}$. The same argument with $M = \Delta(E)$ shows that ${\operatorname{End}}_{\mathcal O_{\leq|E|}}(\Delta(E))$ is finite dimensional, so we get that $\Delta(E)$ is actually the projective cover of $L(E)$ in $\mathcal O_{\leq |E|}$ from Fitting's lemma. 
For (ii), (iii), Lemma~\ref{gfr} implies for every $i \geq 0$ that $$ {\operatorname{Ext}}^i_{\mathcal C_{\mathfrak g}}(\Delta(E), \nabla(F)) \simeq {\operatorname{Ext}}^i_{\mathcal C_{\mathfrak h}}(\widehat{E}, F). $$ Since $\widehat{E}$ is projective with ${\operatorname{cosoc}}_{\mathfrak h} \widehat{E} \simeq E$, the right hand side is zero if $i > 0$ or if $E \neq F$, and is of dimension $d_E$ otherwise. Now we are done since $\mathcal O$ is a full subcategory of $\mathcal C_{\mathfrak g}$. \end{proof} \section{Projective modules and blocks} Let $M \in \mathcal O$. A {\em $\Delta$-flag} of $M$ means a filtration $$ 0 = M_0 \subseteq M_1 \subseteq M_2 \dots $$ such that $M = \bigcup_{i\geq 0} M_i$ and each factor $M_i / M_{i-1}$ is either zero or $\cong \Delta(E_i)$ for $E_i \in \Lambda$. If the filtration stabilizes after finitely many terms we will call it a {\em finite $\Delta$-flag}. Arguing as in \cite[Lemma 5.10]{so2}, one shows: \begin{Lemma}\label{indstep} Suppose we have that ${\operatorname{Ext}}^1_{\mathcal O}(\Delta(F), N) = 0$ for all $F \in \Lambda$. Then, ${\operatorname{Ext}}^1_{\mathcal O}(M, N) = 0$ for every $M \in \mathcal O$ admitting a $\Delta$-flag. \end{Lemma} Applying the lemma to $N = \nabla(E)$, one easily deduces that the multiplicity of $\Delta(E)$ as a subquotient of a $\Delta$-flag of $M$ is equal to $\dim {\operatorname{Hom}}_{\mathcal O}(M, \nabla(E)) / d_E$, for every $M \in \mathcal O$ admitting a $\Delta$-flag. In particular, this multiplicity does not depend on the choice of the $\Delta$-flag. We will denote it by $(M:\Delta(E))$. \begin{Lemma}\label{dep} A graded $\mathfrak g$-supermodule $M$ admits a finite $\Delta$-flag if and only if $M$ is a graded free $U(\mathfrak n)$-supermodule of finite rank and its restriction to $\mathfrak h$ is a projective object in $\mathcal C_{\mathfrak h}$. \end{Lemma} \begin{proof} ($\Rightarrow$) It suffices to prove this for $M = \Delta(E)$. 
Obviously this is a graded free $U(\mathfrak n)$-supermodule of rank $\dim \widehat{E}$. Moreover, over $\mathfrak h$, we have by Lemma~\ref{for} that $M \simeq S(\mathfrak n) \otimes \widehat{E}$. This is projective in $\mathcal C_{\mathfrak h}$ by Lemma~\ref{tobegin}. ($\Leftarrow$) We may assume that $M = \bigoplus_{i=1}^n U(\mathfrak n) \otimes V_i$ is a decomposition of $M$ as a graded free $U(\mathfrak n)$-supermodule, where $V_i$ is a finite dimensional vector superspace concentrated in degree $d_i$ with trivial action of $\mathfrak n$, and $d_1 > \dots > d_n$. Note then that $1 \otimes V_1$ must be invariant under the action of $\mathfrak b$, and $\mathfrak g_{> 0}$ acts trivially. Hence by the projectivity assumption it decomposes as a direct sum of finitely many $\widehat{E}$'s as a $\mathfrak b$-supermodule. Each $U(\mathfrak n) \otimes \widehat{E}$ in this decomposition is isomorphic as a graded $\mathfrak g$-supermodule to $\Delta(E)$, and the quotient of $M$ by $U(\mathfrak n) \otimes V_1$ is graded free of strictly smaller rank and is still projective over $\mathfrak h$, so we are done by induction. \end{proof} \begin{Corollary}\label{fp} If $M$ admits a finite $\Delta$-flag, so does any summand of $M$. \end{Corollary} \begin{proof} Any summand of a graded free $U(\mathfrak n)$-supermodule of finite rank is again graded free of finite rank, see \cite[Remark 2.4(2)]{so2}. \end{proof} We now come to the basic result on projective objects in category $\mathcal O$. \begin{Theorem}\label{pct} Every simple object $L(E) \in \mathcal O_{\leq n}$ admits a projective cover $P_{\leq n}(E)$ in $\mathcal O_{\leq n}$ with ${\operatorname{cosoc}}_{\mathfrak g} P_{\leq n}(E) \simeq L(E)$. 
Moreover, \begin{itemize} \item[(i)] $P_{\leq n}(E)$ admits a finite $\Delta$-flag with ${\Delta}(E)$ at the top; \item[(ii)] for $m > n$, the kernel of any surjection $P_{\leq m}(E) \twoheadrightarrow P_{\leq n}(E)$ admits a finite $\Delta$-flag with subquotients of the form $\Delta(F)$ for $m \geq |F| > n$; \item[(iii)] $L(E)$ admits a projective cover $P(E)$ in $\mathcal O$ if and only if there exists $n \gg 0$ with $P_{\leq n}(E) = P_{\leq n+1}(E) = \dots$, in which case $P(E) = P_{\leq n}(E)$. \end{itemize} \end{Theorem} \begin{proof} The proof is essentially the same as \cite[Theorem 3.2]{so2}, so we just sketch the construction of $P_{\leq n}(E)$ and refer the reader to {\em loc. cit.} for everything else. For a graded $\mathfrak b$-supermodule $M$, let $\tau_{\leq n} M$ denote the quotient of $M$ by the submodule $\bigoplus_{d > n} M_d$ of all homogeneous parts of degree $> n$. For $E \in \Lambda$, $$ Q := U(\mathfrak g) \otimes_{U(\mathfrak b)} \tau_{\leq n}(U(\mathfrak b) \otimes_{U(\mathfrak h)} \widehat{E}) $$ is projective in $\mathcal O_{\leq n}$ as in the proof of \cite[Theorem 3.2(1)]{so2}, it is graded free over $U(\mathfrak n)$ of finite rank, and it is projective viewed as an object of $\mathcal C_{\mathfrak h}$ by Lemma~\ref{tobegin}. So Lemma~\ref{dep} shows that $Q$ has a finite $\Delta$-flag. Now $Q$ clearly maps surjectively onto $L(E)$. Let $P_{\leq n}(E)$ be an indecomposable summand of $Q$ that also maps surjectively onto $L(E)$. This has a finite $\Delta$-flag too by Corollary~\ref{fp}, and it is a projective cover of $L(E)$ in $\mathcal O_{\leq n}$ by a Fitting's lemma argument, see \cite[Lemma 3.3]{so2}. \end{proof} For $M \in \mathcal O$, we write $[M:L(E)]$ for the composition multiplicity of $L(E)$ in $M$, i.e. the supremum of $\#\{i\:|\:M_i / M_{i-1} \cong L(E)\}$ over all finite filtrations $M = (M_i)_i$ of $M$. This multiplicity is additive on short exact sequences. 
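The two multiplicities introduced above are related by the following identity, a routine consequence of additivity which we record for orientation: if $M \in \mathcal O$ admits a finite $\Delta$-flag, then

```latex
\begin{equation*}
[M : L(E)] \;=\; \sum_{F \in \Lambda} (M : \Delta(F))\, [\Delta(F) : L(E)]
\qquad\text{for each } E \in \Lambda,
\end{equation*}
```

since both sides are additive on the subquotients of the flag, and the sum has only finitely many non-zero terms.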
Now we get ``BGG reciprocity'': \begin{Corollary}\label{bggrec} $(P_{\leq n}(E) : \Delta(F)) = [\nabla(F):L(E)]$ for all $E, F \in \Lambda$ and $n \geq |E|, |F|$. \end{Corollary} \begin{proof} In $\mathcal O_{\leq n}$, we have that $[\nabla(F):L(E)] = \dim {\operatorname{Hom}}_{\mathcal O}(P_{\leq n}(E), \nabla(F)) / d_E$. This equals $(P_{\leq n}(E):\Delta(F))$ by the definition of the latter multiplicity. \end{proof} \iffalse \begin{Corollary} The simple object $L(E)$ admits a projective cover $P(E)$ in $\mathcal O$ if and only if there exists $n \in {\mathbb Z}$ such that $[\nabla(F):L(E)] = 0$ for all $F \in \Lambda$ with $|F| > n$. In that case, we have that $P(E) = P_{\leq n}(E)$. \end{Corollary} \begin{proof} The second condition is equivalent by BGG reciprocity to the statement that $P_{\leq n}(E) = P_{\leq m}(E)$ for all $m > n$. Now argue like in the proof of \cite[Theorem 3.2(3)]{so2}. \end{proof} \fi Suppose finally in this section that $\sim$ is an equivalence relation on $\Lambda$ with the property that \begin{equation*} [\Delta(F):L(E)] \neq 0 \hbox{ or } [\nabla(F):L(E)] \neq 0 \Rightarrow F \sim E \end{equation*} for each $E, F \in \Lambda$. For an equivalence class $\theta \in \Lambda / \sim$, let $\mathcal O_\theta$ be the full subcategory of $\mathcal O$ consisting of the objects $M \in \mathcal O$ all of whose irreducible subquotients are of the form $L(E)$ for $E \in \theta$. We refer to $\mathcal O_\theta$ as a {\em block} of $\mathcal O$, in view of the following theorem which is proved exactly as in \cite[Theorem 4.2]{so2}. \begin{Theorem}\label{blockdec} The functor $$ \prod_{\theta \in \Lambda / \sim} \mathcal O_\theta \rightarrow \mathcal O,\quad (M_\theta)_{\theta} \mapsto \bigoplus_{\theta \in \Lambda / \sim} M_\theta $$ is an equivalence of categories. \end{Theorem} \section{Tilting modules and Arkhipov-Soergel duality} Next, we discuss the classification of {tilting modules} in $\mathcal O$. 
The first main result is the analogue of \cite[Theorem 5.2]{so2}. \begin{Theorem}\label{tiltthm} For any $E \in \Lambda$, there exists a unique up to isomorphism indecomposable object $T(E) \in \mathcal O$ such that \begin{itemize} \item[(i)] ${\operatorname{Ext}}^1_{\mathcal O}(\Delta(F), T(E)) = 0$ for all $F \in \Lambda$; \item[(ii)] $T(E)$ admits a $\Delta$-flag starting with $\Delta(E)$ at the bottom. \end{itemize} \end{Theorem} We call $T(E)$ the {\em indecomposable tilting module} corresponding to $E \in \Lambda$. The proof given by Soergel is a variation on an argument of Ringel \cite{Ri}, and carries over to the present setting virtually unchanged. The main step is to show that for any $E \in \Lambda$ with $|E| \geq n$, there exists a unique up to isomorphism indecomposable object $T_{\geq n}(E)$ in $\mathcal O$ such that \begin{itemize} \item[(i)$'$] ${\operatorname{Ext}}^1_{\mathcal O}(\Delta(F), T_{\geq n}(E)) = 0$ for all $F \in \Lambda$ with $|F| \geq n$; \item[(ii)$'$] $T_{\geq n}(E)$ admits a finite $\Delta$-flag starting with $\Delta(E)$ at the bottom and with all other subquotients of the form $\Delta(F)$ for $F$'s with $|E| > |F| \geq n$. \end{itemize} Moreover, given $|E| \geq m \geq n$, there exists an inclusion $T_{\geq m}(E) \hookrightarrow T_{\geq n}(E)$, and the cokernel of any such inclusion admits a finite $\Delta$-flag with subquotients $\Delta(F)$ for $m > |F| \geq n$. Given these results, a candidate for the desired module $T(E)$ can then be constructed as a direct limit of the $T_{\geq n}(E)$'s as $n \rightarrow -\infty$. Uniqueness then needs to be established separately. To proceed, we need to make two additional assumptions (see \cite[Remark 1.2]{so2} for remarks on the first one): \begin{itemize} \item[(A4)] $\mathfrak g$ is generated as a Lie superalgebra by $\mathfrak g_0, \mathfrak g_1$ and $\mathfrak g_{-1}$; \item[(A5)] for $E \in \Lambda$, $(\widehat E)^* \cong \widehat{E^\#}$ for some $E^\# \in \Lambda$. 
\end{itemize} Under the assumption (A4), an {\em admissible semi-infinite character} $\gamma$ for $\mathfrak g$ is defined to be a Lie superalgebra homomorphism $\gamma:\mathfrak h \rightarrow {\mathbb C}$ such that $\gamma|_{\mathfrak t}\in X$ and \begin{equation} \gamma([X, Y]) = {\operatorname{str}}_{\mathfrak h} ({\operatorname{ad}} X \circ {\operatorname{ad}} Y) \end{equation} for all $X \in \mathfrak g_1, Y \in \mathfrak g_{-1}$. (We recall the {\em supertrace} of an endomorphism $f=f_{{\bar 0}}+f_{{\bar 1}}:V \rightarrow V$ of a vector superspace is defined by ${\operatorname{str}}_V f := {\operatorname{tr}}_{V_{{\bar 0}}} f_{{\bar 0}} - {\operatorname{tr}}_{V_{{\bar 1}}} f_{{\bar 0}}$.) In the next lemma, we write $U(\mathfrak n)^{\circledast}$ for the graded dual ${\operatorname{\bf{Hom}}}_{{\mathbb C}}(U(\mathfrak n), {\mathbb C})$ (where ${\mathbb C} = {\mathbb C}_{0,{\bar 0}}$) viewed as a $U(\mathfrak n),U(\mathfrak n)$-bimodule with left and right actions defined by $(nf)(n') = (-1)^{\bar n \bar f + \bar n \bar n'} f(n'n)$ and $(fn)(n') = f(nn')$ respectively, for $n, n' \in U(\mathfrak n), f \in U(\mathfrak n)^\circledast$. \begin{Lemma}\label{icky} Let $\gamma:\mathfrak h \rightarrow {\mathbb C}$ be an admissible semi-infinite character for $\mathfrak g$. 
Then there exists a graded $U(\mathfrak g), U(\mathfrak g)$-bimodule $S_\gamma$ and an even monomorphism $\iota:U(\mathfrak n)^\circledast \hookrightarrow S_\gamma$ of graded $U(\mathfrak n),U(\mathfrak n)$-bimodules such that \begin{itemize} \item[(i)] the map $U(\mathfrak g) \otimes_{U(\mathfrak n)} U(\mathfrak n)^\circledast \rightarrow S_\gamma, u \otimes f \mapsto u \iota(f)$ is a bijection; \item[(ii)] the map $U(\mathfrak n)^\circledast \otimes_{U(\mathfrak n)} U(\mathfrak g) \rightarrow S_\gamma, f \otimes u \mapsto \iota(f) u$ is a bijection; \item[(iii)] $[H, \iota(f)] = \iota(f) \gamma(H) - (-1)^{\bar H \bar f} \iota(f \circ {\operatorname{ad}} H)$ for all $H \in \mathfrak h$ and $f \in U(\mathfrak n)^\circledast$. \end{itemize} \end{Lemma} \begin{proof} This is proved in almost exactly the same way as \cite[Theorem 1.3]{so2}. However, the signs are rather delicate in the super case. So we describe explicitly the construction of $S_\gamma$, referring to the proof of \cite[Theorem 1.3]{so2} for a fuller account of the remaining steps. As a graded vector superspace, we have that $$ S_\gamma = U(\mathfrak n)^\circledast \otimes_{{\mathbb C}} U(\mathfrak b), $$ and the map $\iota:U(\mathfrak n)^\circledast \rightarrow S_\gamma$ is defined by $\iota(f) = f\otimes 1$. Note $S_\gamma$ is a $U(\mathfrak n), U(\mathfrak b)$-bimodule in the usual way. We now extend this structure to make $S_\gamma$ into a $U(\mathfrak g), U(\mathfrak g)$-bimodule. First, there is a natural isomorphism of $U(\mathfrak n), U(\mathfrak b)$-bimodules $$ S_\gamma = U(\mathfrak n)^\circledast \otimes_{{\mathbb C}} U(\mathfrak b) \stackrel{\sim}{\longrightarrow} U(\mathfrak n)^\circledast \otimes_{U(\mathfrak n)} U(\mathfrak g) $$ mapping $u \otimes v$ to $u \otimes v$; we get the right action of $U(\mathfrak g)$ on $S_\gamma$ via this isomorphism. 
To obtain the left action, we use the natural isomorphisms $$ S_\gamma = U(\mathfrak n)^\circledast \otimes_{{\mathbb C}} U(\mathfrak b) \stackrel{\sim}{\longrightarrow} {\operatorname{\bf{Hom}}}_{{\mathbb C}}(U(\mathfrak n), U(\mathfrak b)) \stackrel{\sim}{\longleftarrow} {\operatorname{\bf{Hom}}}_{U(\mathfrak b)}(U(\mathfrak g), {\mathbb C}_\gamma \otimes_{\mathbb C} U(\mathfrak b)). $$ For the right hand space, $U(\mathfrak b)$ acts on $U(\mathfrak g)$ by the natural left action, and on ${\mathbb C}_\gamma \otimes_{\mathbb C} U(\mathfrak b)$ by the tensor product of the action on ${\mathbb C}_\gamma = {\mathbb C}_{0,{\bar 0}}$ affording the character $\gamma$ and the natural left action on $U(\mathfrak b)$. The first isomorphism maps $f \otimes b$ to the function $\widehat{f \otimes b}:n \mapsto (-1)^{\bar b \bar n}f(n) b$. The second isomorphism is given by restriction of functions from $U(\mathfrak g)$ to $U(\mathfrak n)$, identifying ${\mathbb C}_\gamma \otimes_{{\mathbb C}} U(\mathfrak b)$ with $U(\mathfrak b)$ via $1 \otimes u \mapsto u$. Now, $U(\mathfrak g)$ acts naturally on the left on the right hand space, by $(uf)(u') = (-1)^{\bar u \bar f + \bar u \bar u'} f(u'u)$, for $u,u' \in U(\mathfrak g)$ and $f:U(\mathfrak g) \rightarrow {\mathbb C}_\gamma \otimes_{{\mathbb C}} U(\mathfrak b)$. Transferring this to $S_\gamma$ via the isomorphisms gives the left $U(\mathfrak g)$-module structure on $S_\gamma$. Now we have to check that the left and right actions of $U(\mathfrak g)$ on $S_\gamma$ just defined commute with one another, so that $S_\gamma$ is a $U(\mathfrak g), U(\mathfrak g)$-bimodule. This is done by a brutal calculation relying on the assumption that $\gamma$ is a semi-infinite character; see the proof of \cite[Theorem 1.3]{so2} for the detailed argument, which generalizes routinely to our setting. Once that is done, (i)--(iii) are relatively easy to check to complete the proof. 
\iffalse Obviously, the left action of $U(\mathfrak n)$ commutes with the right action of $U(\mathfrak g)$ and the right action of $U(\mathfrak b)$ commutes with the left action of $U(\mathfrak g)$. So we just have to show that \begin{align*} H((f \otimes b) Y) &= (H(f \otimes b))Y,\\ X((f \otimes b) Y) &= (X(f\otimes b))Y \end{align*} for all $H \in \mathfrak h, X \in \mathfrak g_1, Y \in \mathfrak g_{-1}, f \in U(\mathfrak n)^\circledast$ and $b \in U(\mathfrak b)$. Multiplying these equations on the right by $X_1 \in \mathfrak g_1$ or by $H_1 \in \mathfrak h$ one easily deduces the analogous equations with $b$ replaced by $bX_1$ or by $b H_1$. Hence it suffices to prove the above equations in the special case that $b = 1$. The hardest step is to show that \begin{equation}\label{tododo} X((f \otimes 1) Y) = (X(f \otimes 1)) Y \end{equation} for all homogeneous $X \in \mathfrak g_1, Y \in \mathfrak g_{-1}, f \in U(\mathfrak n)^\circledast$. Pick a homogeneous basis $H_1,\dots,H_r$ for $\mathfrak h$. Write $[Y, X] = \sum_{i=1}^r c_{Y, X}^i H_i$ for $c_{Y, X}^i \in {\mathbb C}$, so $c_{Y,X}^i = 0$ if $\bar Y +\bar X \neq \bar{H_i}$. Define functions $H_X^i, F_X:U(\mathfrak n) \rightarrow U(\mathfrak n)$ by \begin{equation}\label{e1} nX = (-1)^{\bar n \bar X} Xn + \sum_{i=1}^r H_i H_X^i(n) + F_X(n). \end{equation} Also let $L_Y:U(\mathfrak n) \rightarrow U(\mathfrak n)$ be the operator given by left multiplication by $Y$, and $s:U(\mathfrak n) \rightarrow U(\mathfrak n)$ be the involution $n \mapsto (-1)^{\bar n} n$. 
One calculates using (\ref{e1}) that \begin{align}\label{e3} H_X^i \circ L_Y &= (-1)^{\bar Y \bar H_i} L_Y \circ H_X^i + c_{Y, X}^i s^{\bar X},\\ F_X \circ L_Y &= L_Y \circ F_X + \sum_{i=1}^r L_{[Y, H_i]} \circ H_X^i.\label{e4} \end{align} By the definitions of the actions and (\ref{e1}), one checks \begin{align*}\label{e2} X(f \otimes 1) &= (-1)^{\bar f \bar X} f \otimes X + \sum_{i} (-1)^{\bar X+\bar H_i} f \circ H_X^i \otimes (\gamma(H_i)+H_i)\\ &\qquad+ (-1)^{\bar X} f \circ F_X \otimes 1,\\ (f\otimes 1) Y &= f \circ L_Y \otimes 1. \end{align*} Hence, simplifying with (\ref{e3})--(\ref{e4}), one shows \begin{align*} (X(f \otimes 1))Y &=(-1)^{\bar f \bar X + \bar X \bar Y} f \circ L_Y \otimes X - \sum_i (-1)^{\bar f \bar X + \bar X \bar Y}c_{Y, X}^i f \otimes H_i\\ &\qquad+ \sum_i (-1)^{\bar X+\bar H_i} f \circ L_Y \circ H_X^i \otimes (\gamma(H_i)+H_i)\\ &\qquad+ \sum_i (-1)^{\bar X+\bar H_i + \bar H_i \bar Y} c_{Y, X}^i f \circ s^{\bar X} \otimes (\gamma(H_i)+H_i)\\ &\qquad+\sum_i (-1)^{\bar X+\bar Y \bar H_i} f \circ L_{[H_i, Y]} \circ H_X^i \otimes 1\\&\qquad + \sum_i (-1)^{\bar X+\bar H_i} c_{[H_i, Y], X}^i f \circ s^{\bar X} \otimes 1\\ &\qquad+ \sum_i (-1)^{\bar X} f \circ L_{[Y, H_i]} \circ H_X^i \otimes 1\\ &\qquad+ (-1)^{\bar X} f \circ L_Y \circ F_X \otimes 1\\\intertext{and} X((f \otimes 1) Y) &= \sum_i (-1)^{\bar X+\bar H_i} f \circ L_Y \circ H_X^i \otimes (\gamma(H_i)+H_i)\\ &\qquad +(-1)^{\bar f \bar X + \bar X \bar Y} f \circ L_Y \otimes X + (-1)^{\bar X} f \circ L_Y \circ F_X \otimes 1. \end{align*} Now we observe that $$ (-1)^{\bar f \bar X + \bar X \bar Y} c_{Y, X}^i f = (-1)^{\bar X+\bar H_i + \bar H_i \bar Y} c_{Y, X}^i f \circ s^{\bar X} $$ for each $i$, while the assumption that $\gamma$ is a semi-infinite character gives that $$ \sum_i \left(c_{Y, X}^i \gamma(H_i) + (-1)^{\bar H_i} c_{[H_i, Y], X}^i\right) = 0. $$ Putting everything together proves (\ref{tododo}). 
\fi \end{proof} For the remainder of the section, we fix an admissible semi-infinite character $\gamma$ for $\mathfrak g$ and let $S_\gamma$ be the {\em semi-regular bimodule} constructed in Lemma~\ref{icky}. Let $\mathcal M$ resp. $\mathcal K$ be the category of all admissible graded $\mathfrak g$-supermodules that are free resp. cofree of finite rank as graded $U(\mathfrak n)$-supermodules, i.e. isomorphic to direct sums of possibly degree-shifted copies of $U(\mathfrak n)$ resp. $U(\mathfrak n)^{\circledast}$. The following theorem is the super analogue of \cite[Theorem 2.1]{so2}, which Soergel attributes originally to Arkhipov \cite{Ar}. \begin{Theorem}\label{at} The functors $\mathcal M \rightarrow \mathcal K, M \mapsto S_\gamma \otimes_{U(\mathfrak g)} M$ and $\mathcal K \rightarrow \mathcal M, M \mapsto {\operatorname{\bf{Hom}}}_{U(\mathfrak g)}(S_\gamma, M)$ are mutually inverse equivalences between the categories $\mathcal M$ and $\mathcal K$, such that short exact sequences correspond to short exact sequences. \end{Theorem} \begin{proof} Take $M \in \mathcal M$. Recalling Lemma~\ref{icky}, the map $f \otimes m \mapsto \iota(f) \otimes m$ is a $U(\mathfrak n)$-isomorphism $U(\mathfrak n)^\circledast \otimes_{U(\mathfrak n)} M \rightarrow S_\gamma \otimes_{U(\mathfrak g)} M$. Hence $S_\gamma \otimes_{U(\mathfrak g)} M$ is graded cofree of finite rank, so in particular it is finite dimensional in each degree. Moreover, for $f \in U(\mathfrak n)^\circledast, m \in M$ and $H \in \mathfrak h$, we have by Lemma~\ref{icky}(iii) that \begin{equation}\label{act} H (\iota(f) \otimes m) = (-1)^{\bar H \bar f} \iota(f) \otimes (H + \gamma(H)) m - (-1)^{\bar H \bar f} \iota(f \circ {\operatorname{ad}} H) \otimes m. \end{equation} It follows from this and (A3) that $S_\gamma \otimes_{U(\mathfrak g)} M$ is admissible. Hence $S_\gamma \otimes_{U(\mathfrak g)} ?$ is a well-defined functor from $\mathcal M$ to $\mathcal K$. 
For the other direction, we note that ${\operatorname{\bf{Hom}}}_{U(\mathfrak n)}(U(\mathfrak n)^\circledast, U(\mathfrak n)^{\circledast}) \simeq U(\mathfrak n)$ as a $U(\mathfrak n), U(\mathfrak n)$-bimodule; an isomorphism maps $u \in U(\mathfrak n)$ to $\widehat u \in {\operatorname{\bf{Hom}}}_{U(\mathfrak n)}(U(\mathfrak n)^\circledast, U(\mathfrak n)^{\circledast})$ where $(\widehat u f)(n) = (-1)^{\bar u \bar f} f(un)$ for each $f \in U(\mathfrak n)^{\circledast}, n \in U(\mathfrak n)$. So for $N \in \mathcal K$, we deduce that ${\operatorname{\bf{Hom}}}_{U(\mathfrak g)}(S_\gamma, N) \simeq {\operatorname{\bf{Hom}}}_{U(\mathfrak n)}(U(\mathfrak n)^{\circledast}, N)$ is graded free of finite rank over $U(\mathfrak n)$. Moreover, given $\theta \in {\operatorname{\bf{Hom}}}_{U(\mathfrak g)}(S_\gamma, N)$, \begin{equation}\label{tip} (H \theta)(\iota(f)) = (H-\gamma(H))\theta(\iota(f)) + (-1)^{\bar H \bar \theta+\bar H \bar f} \theta(\iota(f \circ {\operatorname{ad}} H)) \end{equation} for each $H \in \mathfrak h$ and $f \in U(\mathfrak n)^{\circledast}$. Using this and (A3) one can check that ${\operatorname{\bf{Hom}}}_{U(\mathfrak g)}(S_\gamma, N)$ is admissible. Hence, ${\operatorname{\bf{Hom}}}_{U(\mathfrak g)}(S_\gamma, ?)$ is a well-defined functor from $\mathcal K$ to $\mathcal M$. The remainder of the proof is exactly as in the proof of \cite[Theorem 2.1]{so2}. \end{proof} Finally, let $\mathcal O^\Delta$ be the full subcategory of $\mathcal O$ consisting of all objects admitting a finite $\Delta$-flag. We recall from Corollary~\ref{fp} that $\mathcal O^\Delta$ is closed under taking direct summands. For a graded $\mathfrak g$-supermodule $M$, we let $M^\star$ denote its graded dual, namely, the space ${\operatorname{\bf{Hom}}}_{{\mathbb C}}(M, {\mathbb C})$, where ${\mathbb C} = {\mathbb C}_{0,{\bar 0}}$, with action defined by $(Xf)(m) = -(-1)^{\bar X \bar f}f(X m)$ for each $X \in \mathfrak g, m \in M$ and $f:M \rightarrow {\mathbb C}$. 
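It is a routine check (not spelled out in the text) that the sign in this formula makes $M^\star$ a $\mathfrak g$-supermodule: for homogeneous $X, Y \in \mathfrak g$, $f \in M^\star$ and $m \in M$, one computes
\begin{align*}
\big(X(Yf)\big)(m) &= (-1)^{\bar X \bar Y + (\bar X + \bar Y)\bar f}\, f(YXm),\\
(-1)^{\bar X \bar Y}\big(Y(Xf)\big)(m) &= (-1)^{(\bar X + \bar Y)\bar f}\, f(XYm),
\end{align*}
whence
$$
\big(X(Yf) - (-1)^{\bar X \bar Y} Y(Xf)\big)(m) = -(-1)^{(\bar X + \bar Y)\bar f} f([X,Y]m) = \big([X,Y]f\big)(m),
$$
as required.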
Recalling the assumption (A5), the theorem has the following corollary: \begin{Corollary}\label{maineq2} The functor $M \mapsto (S_\gamma \otimes_{U(\mathfrak g)} M)^\star$ defines a contravariant equivalence of categories $\mathcal O^\Delta \rightarrow \mathcal O^\Delta$ under which short exact sequences correspond to short exact sequences, $\Delta({\mathbb C}_{-\gamma} \otimes E^\#)$ maps to $\Delta(E)$ and $P_{\leq -n}({\mathbb C}_{-\gamma} \otimes E^\#)$ maps to $T_{\geq n}(E)$, for every $E \in \Lambda$ and $n \leq |E|$. \end{Corollary} \begin{proof} It is easy to see using (\ref{act}) and (A5) that the degree $-|E|$ piece of $(S_\gamma \otimes_{U(\mathfrak g)} \Delta(E))^\star \simeq (U(\mathfrak n)^\circledast \otimes \widehat{E})^\star$ is isomorphic to ${\mathbb C}_{-\gamma} \otimes \widehat{E^\#}$ as an $\mathfrak h$-supermodule. Moreover, this generates $(S_\gamma \otimes_{U(\mathfrak g)} \Delta(E))^\star$ freely as a $U(\mathfrak n)$-supermodule, hence $(S_\gamma \otimes_{U(\mathfrak g)} \Delta(E))^\star \cong \Delta({\mathbb C}_{-\gamma} \otimes E^\#)$. It follows from this and Theorem~\ref{at} that the functor $(S_\gamma \otimes_{U(\mathfrak g)} ?)^\star$ maps $\mathcal O^\Delta$ to $\mathcal O^\Delta$ and sends short exact sequences to short exact sequences. Similarly, one shows using (\ref{tip}) that ${\operatorname{\bf{Hom}}}_{U(\mathfrak g)}(S_\gamma, \Delta(E)^\star) \cong \Delta({\mathbb C}_{-\gamma} \otimes E^\#)$. Hence the functor ${\operatorname{\bf{Hom}}}_{U(\mathfrak g)}(S_\gamma, ?^\star)$ maps $\mathcal O^\Delta$ to $\mathcal O^\Delta$. Now it is immediate from Theorem~\ref{at} that our two functors are mutually inverse equivalences. It just remains to show that $(S_\gamma \otimes_{U(\mathfrak g)} P_{\leq -n}({\mathbb C}_{-\gamma} \otimes E^\#))^\star \cong T_{\geq n}(E)$, for $n \leq |E|$, for which one uses the characterization of $T_{\geq n}(E)$ given in (i)$'$, (ii)$'$ above. 
\end{proof} \begin{Corollary}\label{dde} For $E, F \in \Lambda$, we have that $$ (T(E):\Delta(F)) = [\nabla({\mathbb C}_{-\gamma} \otimes F^\#) : L({\mathbb C}_{-\gamma} \otimes E^\#)]. $$ \end{Corollary} \begin{proof} We have for $n \leq |E|, |F|$ that \begin{align*} (T(E):\Delta(F)) &= (T_{\geq n}(E): \Delta(F)) = (P_{\leq -n}({\mathbb C}_{-\gamma} \otimes E^\#):\Delta({\mathbb C}_{-\gamma}\otimes F^\#))\\ &= [\nabla({\mathbb C}_{-\gamma} \otimes F^\#):L({\mathbb C}_{-\gamma} \otimes E^\#)], \end{align*} using Corollary~\ref{maineq2} and Lemma~\ref{bggrec}. \end{proof} \section{Some variations}\label{var} We now mention some variations to the general framework considered so far. First of all, we recall from \cite[$\S$6]{so2} how to deduce results about ungraded $\mathfrak g$-supermodules from the graded theory above. To do this, one needs to require in addition that \begin{itemize} \item[(A6)] there is an element $D \in \mathfrak h_{{\bar 0}}$ such that $[D, X] = \deg(X) X$ for all homogeneous $X \in \mathfrak g$. \end{itemize} Let $\overline{\mathcal O}$ be the category of all admissible (but no longer graded!) $\mathfrak g$-supermodules that are locally finite dimensional over $\mathfrak b$. Since $D$ necessarily belongs to $\mathfrak t$, every $M \in {\mathcal O}$ resp. $M\in \overline{\mathcal O}$ decomposes into eigenspaces $M = \bigoplus_{a \in {\mathbb C}} M^{(a)}$ with respect to the action of $D$. For $a \in {\mathbb C}$, let $\mathcal O_a$ denote the full subcategory of $\mathcal O$ consisting of all $M\in\mathcal O$ such that $M^{(a+i)} = M_i$ for all $i \in {\mathbb Z}$. For $\bar a \in {\mathbb C} / {\mathbb Z}$, let $\overline{\mathcal O}_{\bar a}$ denote the full subcategory of $\overline{\mathcal O}$ consisting of all $M\in\overline{\mathcal O}$ such that $M^{(b)} = 0$ for all $b \notin \bar a$. 
Then, \begin{equation*}\label{bldec} \mathcal O = \prod_{a \in {\mathbb C}} \mathcal O_a, \qquad \overline{\mathcal O} = \prod_{\bar a \in {\mathbb C} / {\mathbb Z}} \overline{\mathcal O}_{\bar a}. \end{equation*} Forgetting the grading gives an isomorphism of categories $\mathcal O_a \rightarrow \overline{\mathcal O}_{\bar a}$, the inverse functor being defined on $M \in \overline{\mathcal O}_{\bar a}$ by introducing a ${\mathbb Z}$-grading according to the rule $M_i = M^{(a+i)}$. In this way, we can transfer results from $\mathcal O$ to $\overline{\mathcal O}$. To describe some of the things that can be obtained in this way, let $\overline{\Lambda}$ denote a set of representatives for the equivalence classes of $E \in \Lambda$ viewed up to degree shifts, so that $\overline{\Lambda}$ is a complete set of pairwise non-isomorphic irreducible admissible $\mathfrak h$-supermodules. Also let $\widehat{E}$ denote the projective cover of $E \in \overline{\Lambda}$ in the category of admissible $\mathfrak h$-supermodules. We have the objects $L(E), {\Delta}(E)$ and $\nabla(E) \in \overline{\mathcal O}$ obtained from the ones defined before by forgetting the grading. Intrinsically, ${\Delta}(E) = U(\mathfrak g) \otimes_{U(\mathfrak b)} \widehat{E}$, $\nabla(E)$ is the largest submodule of ${\operatorname{Hom}}_{U(\mathfrak g_{\leq 0})}(U(\mathfrak g), E)$ that belongs to $\overline{\mathcal O}$, and $L(E) = {\operatorname{cosoc}}_{\mathfrak g} {\Delta}(E) \simeq {\operatorname{soc}}_{\mathfrak g} \nabla(E)$. In particular, $\{L(E)\}_{E \in \overline{\Lambda}}$ is a complete set of pairwise non-isomorphic irreducible objects in $\overline{\mathcal O}$. The notion of a ${\Delta}$-flag of an object of $\overline{\mathcal O}$ is defined as before. The multiplicity $(M:{\Delta}(E))$ of ${\Delta}(E)$ as a subquotient of a ${\Delta}$-flag of an object $M \in \overline{\mathcal O}$ is independent of the choice of flag. 
We have that \begin{equation} (M:{\Delta}(E)) = \dim {\operatorname{Hom}}_{\overline{\mathcal O}}(M, \nabla(E)) / d_E. \end{equation} We also note from Corollary~\ref{fp} that summands of objects with finite ${\Delta}$-flags have finite ${\Delta}$-flags. We can always choose a partial ordering $\preceq$ on $\overline{\Lambda}$ such that \begin{equation*} [\Delta(F):L(E)] \neq 0 \hbox{ or } [\nabla(F):L(E)] \neq 0 \Rightarrow E \preceq F. \end{equation*} Let $\sim$ be the equivalence relation on $\overline{\Lambda}$ generated by the partial order $\preceq$. For $\overline{\theta} \in \overline{\Lambda} / \sim$, let $\overline{\mathcal O}_{\overline{\theta}}$ be the full subcategory of $\overline{\mathcal O}$ consisting of the objects $M \in \overline{\mathcal O}$ all of whose irreducible subquotients are of the form $L(E)$ for $E \in \overline{\theta}$. Then Theorem~\ref{blockdec} gives us the block decomposition of $\overline{\mathcal O}$: \begin{Theorem} The functor $$ \prod_{\overline{\theta} \in \overline{\Lambda} / \sim} \overline{\mathcal O}_{\overline{\theta}} \rightarrow \overline{\mathcal O}, \quad (M_{\overline{\theta}})_{\overline{\theta}} \mapsto \bigoplus_{\overline{\theta} \in \overline{\Lambda} / \sim} M_{\overline{\theta}} $$ is an equivalence of categories. \end{Theorem} Next, we use Theorem~\ref{tiltthm} to define the {\em indecomposable tilting module} $T(E) \in \overline{\mathcal O}$ for each $E \in \overline{\Lambda}$: \begin{Theorem} For each $E \in \overline{\Lambda}$ there exists a unique (up to isomorphism) indecomposable object $T(E) \in \overline{\mathcal O}$ such that \begin{itemize} \item[(i)] ${\operatorname{Ext}}^1_{\overline{\mathcal O}}(\Delta(F), T(E)) = 0$ for all $F \in \overline{\Lambda}$; \item[(ii)] $T(E)$ admits a $\Delta$-flag starting with $\Delta(E)$ at the bottom. 
\end{itemize} \end{Theorem} Let $\gamma$ be an admissible semi-infinite character for $\mathfrak g$ and construct the semi-regular bimodule $S_\gamma$ as in Lemma~\ref{icky}. Let $\overline{\mathcal O}^{\Delta}$ be the full subcategory of $\overline{\mathcal O}$ consisting of the objects that admit a finite $\Delta$-flag. Then Corollaries~\ref{maineq2} and \ref{dde} give us: \begin{Theorem}\label{maineq} The functor $M \mapsto (S_\gamma \otimes_{U(\mathfrak g)} M)^\star$ defines a contravariant equivalence of categories $\overline{\mathcal O}^\Delta \rightarrow \overline{\mathcal O}^\Delta$ under which short exact sequences correspond to short exact sequences and $\Delta(E)$ maps to $\Delta({\mathbb C}_{-\gamma} \otimes E^\#)$ for every $E \in \overline{\Lambda}$. Moreover, \begin{equation}\label{ds} (T(E):\Delta(F)) = [\nabla({\mathbb C}_{-\gamma} \otimes F^\#):L({\mathbb C}_{-\gamma}\otimes E^\#)] \end{equation} for all $E, F \in \overline{\Lambda}$. \end{Theorem} Still assuming that (A6) holds, we now impose some finiteness conditions. First, assume \begin{itemize} \item[(A7)] for each $E \in \overline{\Lambda}$, $\nabla(E)$ has a composition series. \end{itemize} Given (A7), it is not hard to show that every object $M$ in the category $\overline{\mathcal O}^{\text{fin}}$ of all {\em finitely generated} admissible $\mathfrak g$-supermodules that are locally finite dimensional over $\mathfrak b$ has a composition series. We remark that (A7) holds automatically if the partial ordering $\preceq$ chosen above has the property that for each $E \in \overline{\Lambda}$, there are only finitely many $F \in \overline{\Lambda}$ with $F \preceq E$. Next assume \begin{itemize} \item[(A8)] the category $\overline{\mathcal O}^{\text{fin}}$ has enough projectives. 
\end{itemize} By Theorem~\ref{pct}(iii), (A8) holds automatically if the partial ordering $\preceq$ has the property that for each $E \in \overline{\Lambda}$, there are only finitely many $F \in \overline{\Lambda}$ with $E \preceq F$. Using (A7), (A8) and Fitting's lemma, one deduces that each $L(E)$ has a projective cover denoted $P(E)$ in the category $\overline{\mathcal O}^{\text{fin}}$. Moreover, Theorem \ref{pct} and Corollary~\ref{bggrec} imply in the present setting that $P(E)$ has a finite $\Delta$-flag satisfying BGG reciprocity \begin{equation}\label{bgg} (P(E):\Delta(F)) = [\nabla(F):L(E)] \end{equation} for all $E, F \in \overline\Lambda$. Under the equivalence of categories from Theorem~\ref{maineq}, $P(E)$ gets mapped to $T({\mathbb C}_{-\gamma} \otimes E^\#)$, so the tilting modules $T(E)$ also all have {\em finite} $\Delta$-flags, i.e. they belong to the category $\overline{\mathcal O}^{\text{fin}}$ too. \section{Examples} We now give some examples, beginning with the classical ones to set the scene. \begin{Example}\rm\label{eg1} Let $\mathfrak g$ be a finite dimensional semisimple Lie algebra. Let $\mathfrak t \subset \mathfrak g$ be a maximal toral subalgebra, and $\Delta\subset \mathfrak t^*$ be a choice of simple roots. Let $\rho \in \mathfrak t^*$ be half the sum of the corresponding positive roots. We take the ${\mathbb Z}$-grading on $\mathfrak g$ defined so that $\mathfrak g_{\alpha}$ is in degree $1$ and $\mathfrak g_{-\alpha}$ is in degree $-1$ for each $\alpha \in \Delta$. Clearly this grading is induced by the adjoint action of some $D \in \mathfrak t$, and $\mathfrak h := \mathfrak g_{0} = \mathfrak t$. Taking the group $X$ of admissible weights to be all of $\mathfrak t^*$, the category $\overline{\mathcal O}^{\text{fin}}$ is exactly the category introduced in \cite{BGG}. It is easy to see that our assumptions (A1)--(A6) are all satisfied. 
Moreover, by Harish-Chandra's theorem on central characters, we can choose the equivalence relation $\sim$ so that the equivalence classes are the orbits of the finite Weyl group $W$ under the dot action. Hence the equivalence classes are finite, so (A7) and (A8) automatically hold too. We also note that the usual Verma modules $M({\lambda})$ for ${\lambda} \in \mathfrak h^*$ are the standard modules here, and their duals under the duality of \cite[$\S$4, Remark]{BGG} are the costandard modules. The indecomposable tilting modules $T({\lambda})$ are the modules defined originally by Collingwood and Irving in \cite{CI}. This setup is generalized to an arbitrary symmetrizable Kac-Moody algebra in \cite[$\S$7]{so2}, see also \cite{DGK, RC}. In general, (A7) and (A8) do not hold, so it becomes important to work in category $\overline{\mathcal O}$ rather than $\overline{\mathcal O}^{\text{fin}}$. Soergel also discusses certain parabolic analogues. \end{Example} \begin{Example}\rm\label{eg2} In the next two examples, we take $\mathfrak g$ to be the Lie superalgebra $\mathfrak{gl}(m|n)$. We recall that $\mathfrak g$ consists of $(m+n)\times(m+n)$ matrices over ${\mathbb C}$, where we label rows and columns of such matrices by the ordered index set $\{-m,\dots,-1,1,\dots,n\}$. Writing $\bar i = {\bar 0}$ if $i > 0$ and ${\bar 1}$ if $i < 0$, the parity of the $ij$-matrix unit $e_{i,j} \in \mathfrak g$ is $\bar i + \bar j$, and the superbracket satisfies $[e_{i,j}, e_{k,l}] = \delta_{j, k} e_{i, l} - (-1)^{(\bar i + \bar j)(\bar k + \bar l)} \delta_{i,l} e_{k,j}.$ The subalgebra $\mathfrak g_{{\bar 0}}$ of $\mathfrak g$ is isomorphic to $\mathfrak{gl}(m) \oplus \mathfrak{gl}(n)$. We will always take the maximal toral subalgebra $\mathfrak t$ to be the subalgebra consisting of all diagonal matrices, and the group $X$ of admissible weights to be all of $\mathfrak t^*$. 
Let $\delta_{-m},\dots,\delta_{-1},\delta_1,\dots,\delta_n$ be the basis for $\mathfrak t^*$ dual to the basis $e_{-m,-m},\dots,e_{-1,-1}, e_{1,1},\dots,e_{n,n}$ of $\mathfrak t$. Now there are two natural ${\mathbb Z}$-gradings to consider. First, we discuss the {\em principal grading} induced by the adjoint action of the matrix $D = \operatorname{diag}(m+n,m+n-1,\dots,2,1) \in \mathfrak h$, so the degree of $e_{i,j}$ is defined by the equation $[D, e_{i,j}] = \deg(e_{i,j}) e_{i,j}$. For this grading, $\mathfrak h := \mathfrak g_0$ coincides with the subalgebra $\mathfrak t$ of diagonal matrices and $\mathfrak b := \mathfrak g_{\geq 0}$ is the subalgebra of all upper triangular matrices. Let $\overline{\mathcal O}^{\text{fin}}$ be the resulting category as in section \ref{var}. We should check that the assumptions (A1)--(A8) all hold, the only difficult ones being (A7) and (A8): \begin{Lemma}\label{eop} Every object $M \in \overline{\mathcal O}^{\text{\rm fin}}$ has a composition series, and $\overline{\mathcal O}^{\text{\rm fin}}$ has enough projectives. \end{Lemma} \begin{proof} Let $\mathcal E$ be the category of all finitely generated $\mathfrak g_{{\bar 0}}$-supermodules that are locally finite dimensional over $\mathfrak b_{{\bar 0}}$ and semisimple over $\mathfrak h$. By the PBW theorem, $U(\mathfrak g)$ is free of finite rank as a (left or right) $U(\mathfrak g_{{\bar 0}})$-module. Hence, the functor $U(\mathfrak g) \otimes_{U(\mathfrak g_{{\bar 0}})} ?$ maps objects in $\mathcal E$ to objects in $\overline{\mathcal O}^{\text{fin}}$, and it is left adjoint to the natural restriction functor from $\overline{\mathcal O}^{\text{fin}}$ to $\mathcal E$. So it sends projectives to projectives. By Example~\ref{eg1}, $\mathcal E$ has enough projectives, so we deduce that $\overline{\mathcal O}^{\text{fin}}$ does too. 
Finally, to see that every object $M \in \overline{\mathcal O}^{\text{fin}}$ has a composition series, note that $U(\mathfrak g)$ is Noetherian, so $M$ has a descending filtration $M = M_0 \geq M_1 \geq \dots$ such that each $M_i / M_{i+1}$ is irreducible. We just need to show that this filtration stabilizes after finitely many terms. But every object in $\mathcal E$ has a composition series by Example~\ref{eg1}, so this follows immediately on restricting $M$ to $\mathfrak g_{{\bar 0}}$. \end{proof} The standard modules ${\Delta}({\lambda})$ in this case are the {\em Verma modules} $M(\lambda) := U(\mathfrak g) \otimes_{U(\mathfrak b)} {\mathbb C}_\lambda$, where ${\mathbb C}_\lambda$ is the one dimensional $\mathfrak b$-module with character $\lambda \in \mathfrak h^*$. The costandard modules $\nabla({\lambda})$ are the {\em dual Verma modules} $M(\lambda)^\tau$, where $\tau$ is the duality defined using the ``supertranspose'' antiautomorphism $e_{i,j} \mapsto (-1)^{\bar i(\bar i + \bar j)} e_{j,i}$ of $\mathfrak g$. Finally, the indecomposable tilting modules are denoted $T({\lambda})$ and the irreducible modules are denoted $L({\lambda})$, for ${\lambda} \in \mathfrak h^*$. As in Example~\ref{eg1}, an admissible semi-infinite character for $\mathfrak g$ with respect to the principal grading is given by the character $2\rho$, where $\rho = m \delta_{-m}+\dots + 2 \delta_{-2} + \delta_{-1} - \delta_1 - 2\delta_2 - \dots - n \delta_n.$ Now we get from (\ref{ds}) that \begin{equation}\label{mdt} (T(\lambda): M(\mu)) = [M(-\mu - 2\rho):L(-\lambda-2\rho)], \end{equation} for $\lambda, \mu \in \mathfrak h^*$. A precise conjecture for these multiplicities in the case that ${\lambda}, \mu$ are integral linear combinations of the $\delta_i$ can be found in \cite{B1}. It is interesting to note in this example that both (A7) and (A8) hold, despite the fact (as seen in \cite{B1}) that the partial ordering $\preceq$ of section \ref{var} always has infinite chains. 
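To make (\ref{mdt}) explicit in the smallest case $m = n = 1$ (a specialization for illustration only; the multiplicities themselves are the subject of \cite{B1}): here $\rho = \delta_{-1} - \delta_1$, so for $\lambda = \lambda_{-1}\delta_{-1}+\lambda_1\delta_1$ and $\mu = \mu_{-1}\delta_{-1}+\mu_1\delta_1$ the formula reads
$$
(T(\lambda): M(\mu)) = [M\big((-\mu_{-1}-2)\delta_{-1} + (2-\mu_1)\delta_1\big) : L\big((-\lambda_{-1}-2)\delta_{-1} + (2-\lambda_1)\delta_1\big)].
$$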
\end{Example} \begin{Example}\rm\label{eg3} Continuing with $\mathfrak g = \mathfrak{gl}(m|n)$, we now discuss the second natural ${\mathbb Z}$-grading, namely, the {\em compatible grading}. This is induced by the adjoint action of the matrix $D = \operatorname{diag}(1/2,1/2,\dots,1/2;-1/2,-1/2,\dots,-1/2)$. Note this time that $\mathfrak h := \mathfrak g_0 = \mathfrak g_{{\bar 0}}$, and $\mathfrak g_{-1} \oplus \mathfrak g_1 = \mathfrak g_{{\bar 1}}$. Now, as is easy to show, the category $\overline{\mathcal O}^{\text{fin}}$ is precisely the category of all finite dimensional $\mathfrak g$-supermodules that are semisimple over $\mathfrak t$. The hypotheses (A1)--(A8) are all satisfied, arguing as in Lemma~\ref{eop} for (A7) and (A8). Recalling $\mathfrak h \cong \mathfrak{gl}(m)\oplus\mathfrak{gl}(n)$, the irreducible finite dimensional $\mathfrak h$-supermodules are parametrized by the set $X^+$ of {\em dominant weights}, namely, the ${\lambda} = {\lambda}_{-m} \delta_{-m}+\dots+{\lambda}_{-1}\delta_{-1}+{\lambda}_1\delta_1+\dots+ {\lambda}_n \delta_n \in \mathfrak h^*$ with each ${\lambda}_{-m}-{\lambda}_{1-m},\dots,{\lambda}_{-2}-{\lambda}_{-1}, {\lambda}_1 - {\lambda}_2,\dots,{\lambda}_{n-1}-{\lambda}_n$ being a non-negative integer. Given ${\lambda} \in X^+$, we denote the corresponding standard module $\Delta({\lambda})$ instead by $K({\lambda})$ and call it the {\em Kac module} of highest weight ${\lambda}$, since it was first defined by Kac in \cite{Kac2}. The costandard modules are the {\em dual Kac modules} $K({\lambda})^\tau$. We also write $L({\lambda})$ for the unique irreducible quotient of $K({\lambda})$, $P({\lambda})$ for its projective cover, and $U({\lambda})$ for the indecomposable tilting module of highest weight ${\lambda}$ in this finite dimensional setting. 
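As a small worked example (the case $m = n = 1$; a sketch not contained in the text): here every $\lambda = \lambda_{-1}\delta_{-1}+\lambda_1\delta_1 \in \mathfrak h^*$ is dominant, and $K(\lambda)$ is two dimensional, spanned by a highest weight vector $v_\lambda$ and $e_{1,-1}v_\lambda$. Using $[e_{-1,1}, e_{1,-1}] = e_{-1,-1}+e_{1,1}$ and $e_{-1,1}v_\lambda = 0$, one computes
$$
e_{-1,1}\, e_{1,-1} v_\lambda = (e_{-1,-1}+e_{1,1})\, v_\lambda = (\lambda_{-1}+\lambda_1)\, v_\lambda,
$$
so $K(\lambda) = L(\lambda)$ if and only if $\lambda_{-1}+\lambda_1 \neq 0$; when $\lambda_{-1}+\lambda_1 = 0$, the line spanned by $e_{1,-1}v_\lambda$ is a submodule and $K(\lambda)$ has the two composition factors $L(\lambda)$ and $L(\lambda - \delta_{-1}+\delta_1)$.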
By (\ref{bgg}), $P({\lambda})$ has a finite Kac flag with $K({\lambda})$ at the top, satisfying the BGG reciprocity \begin{equation} (P({\lambda}):K(\mu)) = [K(\mu):L({\lambda})], \end{equation} as was also proved in \cite[Proposition 2.5]{Zou}. Now let $\beta = n(\delta_{-m}+\dots+\delta_{-1}) - m (\delta_1+\dots+\delta_n)$ be the sum of the positive odd roots. It is easy to check that the unique $1$-dimensional representation $\gamma:\mathfrak h\rightarrow{\mathbb C}$ of weight $-\beta$ is an admissible semi-infinite character for $\mathfrak g$ with respect to the compatible grading. In fact in this case, there is an even isomorphism of $U(\mathfrak g), U(\mathfrak g)$-bimodules between the semi-regular bimodule $S_{\gamma}$ from Lemma~\ref{icky} and the regular bimodule $\Pi^{mn} U(\mathfrak g)$. In the notation of Lemma~\ref{icky}, an isomorphism maps $1 \in U(\mathfrak g)$ to the element $\iota(\delta) \in S_\gamma$, where $\delta \in U(\mathfrak n)^{\circledast}$ is the function mapping $\prod_{-m \leq i < 0 < j \leq n} e_{j,i}$ to $1$ (product taken in some fixed order) and all other monomials in the $e_{j,i}$ of strictly smaller length to $0$. So in this case the duality in Theorem~\ref{maineq} is (up to parity change and degree shift) just the usual duality $*$ on finite dimensional $\mathfrak g$-supermodules. In particular, \begin{align} K(\beta - w_0{\lambda})^* \cong K({\lambda}),\label{kdual}\\ P(\beta-w_0{\lambda})^* \cong U({\lambda}),\label{pdual} \end{align} where $w_0$ denotes the longest element of the Weyl group $W \cong S_m \times S_n$ of $\mathfrak h$ acting on $\mathfrak t^*$ in the obvious way. The statement (\ref{ds}) says \begin{equation}\label{kdt} (U(\lambda): K(\mu)) = [K(\beta-w_0 \mu): L(\beta-w_0 \lambda)], \end{equation} for $\lambda, \mu \in X^+$. The numbers on the left hand side of this equation are computed in \cite{B1}. \end{Example} \begin{Example}\rm\label{eg4} In the final example, we take $\mathfrak g = \mathfrak{q}(n)$. 
Thus, $\mathfrak g$ is the subalgebra of $\mathfrak{gl}(n|n)$ consisting of all matrices of the form $\left( \begin{array}{l|l}X&Y\\\hline Y&X\end{array} \right)$. For $1 \leq i,j\leq n$, we will let $e_{i,j}$ resp. $e_{i,j}'$ denote the even resp. odd matrix unit, i.e. the matrix of the above form with the $ij$-entry of $X$ resp. $Y$ equal to $1$ and all other entries equal to zero. The ${\mathbb Z}$-grading on $\mathfrak g$ is defined by $\deg(e_{i,j}) = \deg(e_{i,j}') = (j - i)$. For this grading, $\mathfrak h := \mathfrak g_0$ is spanned by $\{e_{i,i}, e_{i,i}'\:|\:1 \leq i \leq n\}$, and $\mathfrak b := \mathfrak g_{\geq 0}$ is spanned by $\{e_{i,j}, e_{i,j}'\:|\:1 \leq i \leq j \leq n\}$. We also let $\mathfrak t = \mathfrak h_{{\bar 0}}$ and take the group $X$ of admissible weights to be all of $\mathfrak t^*$. As explained in \cite[$\S$3]{penkov}, the finite dimensional irreducible $\mathfrak h$-supermodules are parametrized by the set $\mathfrak t^*$. For ${\lambda} \in \mathfrak t^*$, we write ${\mathfrak u}({\lambda})$ for the corresponding irreducible $\mathfrak h$-supermodule. It is constructed in \cite{penkov} as a certain Clifford module, of dimension a power of $2$. The assumption (A5) can be checked from this construction and the fact that Clifford algebras are symmetric: one gets that \begin{equation} \widehat{{\mathfrak u}({\lambda})}^* \cong \widehat{{\mathfrak u}(-{\lambda})}. \end{equation} The remaining assumptions (A1)--(A4) and (A6) are easy, and one argues like in Lemma~\ref{eop} to verify (A7) and (A8). So now we can consider the category $\overline{\mathcal O}^{\text{fin}}$ as in section~\ref{var}. Let \begin{align*} M({\lambda}) := U(\mathfrak g) \otimes_{U(\mathfrak b)} {\mathfrak u}(\lambda),\qquad N({\lambda}) := U(\mathfrak g) \otimes_{U(\mathfrak b)} \widehat{{\mathfrak u}({\lambda})}, \end{align*} for each ${\lambda} \in \mathfrak t^*$. 
Then $N({\lambda})$ is the standard module ${\Delta}({\lambda})$ in $\overline{\mathcal O}^{\text{fin}}$, while $M({\lambda})$ is dual to the costandard module $\nabla({\lambda})$ under the duality $\tau$ induced by the (unsigned) antiautomorphism $$ \left( \begin{array}{l|l}X&Y\\\hline Y&X\end{array} \right) \mapsto\left( \begin{array}{l|l}X^T&Y^T\\\hline Y^T&X^T\end{array} \right). $$ One checks that the trivial character $0:\mathfrak h \rightarrow {\mathbb C}$ is an admissible semi-infinite character for $\mathfrak g$. So, writing $T({\lambda})$ resp. $L({\lambda})$ for the indecomposable tilting module resp. the irreducible module corresponding to ${\lambda} \in \mathfrak t^*$, (\ref{ds}) shows that \begin{equation}\label{mc1} (T({\lambda}):N(\mu)) = [M(-\mu):L(-{\lambda})]. \end{equation} A precise conjecture for these decomposition numbers in case ${\lambda},\mu$ are integral weights is formulated in \cite{B2}. \end{Example} \end{document}
A further result involving geometric arguments on Rauzy graphs is a criterion for freeness of the profinite group of a minimal subshift based on the Return Theorem of Berthé et al. Stiefel-Whitney classes of curve covers Björn Selander Tomtebogatan 18, Stockholm, Sweden K-Theory and Homology mathscidoc:1701.01012 Let $D$ be a Dedekind scheme with the characteristic of all residue fields not equal to 2. To every tame cover $C\to D$ with only odd ramification we associate a second Stiefel-Whitney class in the second cohomology with mod 2 coefficients of a certain tame orbicurve $[D]$ associated to $D$ . This class is then related to the pull-back of the second Stiefel-Whitney class of the push-forward of the line bundle of half of the ramification divisor. This shows (indirectly) that our Stiefel-Whitney class is the pull-back of a sum of cohomology classes considered by Esnault, Kahn and Viehweg in 'Coverings with odd ramification and Stiefel-Whitney classes'. Perhaps more importantly, in the case of a proper and smooth curve over an algebraically closed field, our Stiefel-Whitney class is shown to be the pull-back of an invariant considered by Serre in 'Revêtements à ramification impaire et thêta-caractéristiques', and in this case our arguments give a new proof of the main result of that article. Visit the profile of arkivadmin
Monoid ring

In abstract algebra, a monoid ring is a ring constructed from a ring and a monoid, just as a group ring is constructed from a ring and a group.

Definition

Let R be a ring and let G be a monoid. The monoid ring or monoid algebra of G over R, denoted R[G] or RG, is the set of formal sums $\sum _{g\in G}r_{g}g$, where $r_{g}\in R$ for each $g\in G$ and $r_{g}=0$ for all but finitely many g, equipped with coefficient-wise addition, and the multiplication in which the elements of R commute with the elements of G. More formally, R[G] is the set of functions φ: G → R such that {g : φ(g) ≠ 0} is finite, equipped with addition of functions, and with multiplication defined by $(\phi \psi )(g)=\sum _{k\ell =g}\phi (k)\psi (\ell )$. If G is a group, then R[G] is also called the group ring of G over R.

Universal property

Given R and G, there is a ring homomorphism α: R → R[G] sending each r to r1 (where 1 is the identity element of G), and a monoid homomorphism β: G → R[G] (where the latter is viewed as a monoid under multiplication) sending each g to 1g (where 1 is the multiplicative identity of R). We have that α(r) commutes with β(g) for all r in R and g in G. The universal property of the monoid ring states that given a ring S, a ring homomorphism α': R → S, and a monoid homomorphism β': G → S to the multiplicative monoid of S, such that α'(r) commutes with β'(g) for all r in R and g in G, there is a unique ring homomorphism γ: R[G] → S such that composing α and β with γ produces α' and β'.

Augmentation

The augmentation is the ring homomorphism η: R[G] → R defined by
$\eta \left(\sum _{g\in G}r_{g}g\right)=\sum _{g\in G}r_{g}.$
The kernel of η is called the augmentation ideal. It is a free R-module with basis consisting of 1 – g for all g in G not equal to 1.

Examples

Given a ring R and the (additive) monoid of natural numbers N (or {xn} viewed multiplicatively), we obtain the ring R[{xn}] =: R[x] of polynomials over R.
The monoid Nn (with the addition) gives the polynomial ring in n variables: R[Nn] =: R[X1, ..., Xn].

Generalization

If G is a semigroup, the same construction yields a semigroup ring R[G].

See also

• Free algebra
• Puiseux series
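The definition above is easy to experiment with directly. The following is an illustrative sketch (not taken from any library): an element of R[G] is stored as a dictionary with finite support, the monoid is given by its binary operation, taking G = (N, +) recovers ordinary polynomial multiplication, and summing coefficients implements the augmentation map.

```python
# Minimal sketch of a monoid ring R[G]: elements are formal sums with finite
# support, stored as dicts {g: coefficient}.  The monoid is given by its
# binary operation `op`; the coefficient ring here is just Python's integers.

def monoid_ring_mul(phi, psi, op):
    """Convolution product: (phi*psi)(g) = sum over (k, l) with op(k, l) = g of phi(k)*psi(l)."""
    out = {}
    for k, a in phi.items():
        for l, b in psi.items():
            g = op(k, l)
            out[g] = out.get(g, 0) + a * b
    return {g: c for g, c in out.items() if c != 0}   # keep finite support

def augmentation(phi):
    """The augmentation eta: R[G] -> R, the sum of coefficients."""
    return sum(phi.values())

# With G = (N, +), R[G] is the polynomial ring R[x]; keys are exponents.
p = {0: 1, 1: 1}    # 1 + x
q = {0: 1, 1: -1}   # 1 - x
prod = monoid_ring_mul(p, q, lambda m, n: m + n)
print(prod)   # {0: 1, 2: -1}, i.e. 1 - x^2
# The augmentation is a ring homomorphism: eta(p*q) = eta(p)*eta(q).
print(augmentation(prod) == augmentation(p) * augmentation(q))   # True
```

Replacing the operation `lambda m, n: m + n` by any other associative operation with identity gives the corresponding monoid ring; nothing in the multiplication routine is specific to N.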
A community intervention to reduce alcohol consumption and drunkenness among adolescents in Sweden: a quasi-experiment

Robert Svensson, Björn Johnson & Karl Kronkvist

Several studies have examined the effect of community interventions on youth alcohol consumption, and the results have often been mixed. The aim of this study is to evaluate the effectiveness of a community intervention known as the Öckerö Method on adolescent alcohol consumption and perceived parental attitudes towards adolescent drinking.

The study is based on a quasi-experimental design, using matched controls. Self-report studies were conducted among adolescents in grades 7–9 of compulsory education in four control and four intervention communities in the south of Sweden in 2016–2018. Baseline measures were collected in autumn 2016 before the intervention was implemented in the intervention communities. Outcomes were the adolescents' alcohol consumption, past-year drunkenness, past-month drunkenness and perceived parental attitudes towards alcohol.

Estimating Difference-in-Difference models using Linear Probability Models, we found no empirical evidence that the intervention has any effect on adolescents' drinking habits, or on their perceptions of their parents' attitudes towards adolescent drinking. This is the first evaluation of this method, and we found no evidence that the intervention had any effect on the level of either young people's alcohol consumption or their past-year or past-month drunkenness, nor on their parents' perceived attitudes toward adolescent drinking. A further improvement would be to employ a follow-up period that is longer than the three-year period employed in this study.

Trial registration: ISRCTN registry, study ID 51635778, 31 March 2021 (retrospectively registered).
Alcohol use among young people is a risky behavior that is associated with several negative consequences, including an increased risk for accidents, health-related problems and involvement in crime [1,2,3,4,5]. The risk of developing alcohol dependence later in life is also greater for individuals with an early alcohol debut [2, 6]. It is therefore important to introduce effective prevention strategies to reduce the use of alcohol and other drugs among young people. While there are many methods to achieve this aim, one approach involves the use of whole-of-community interventions. This approach typically targets a well-defined population within a delimited geographical region and includes the implementation of several simultaneously coordinated interventions across various community settings (e.g., schools, sports clubs, social services, law enforcement, etc.) [7, 8]. By activating multiple community stakeholders in the intervention process, the ultimate goal for these types of interventions is, as with many other types of interventions, to delay the onset of alcohol use among adolescents and decrease their general alcohol consumption during adolescence [8]. While the whole-of-community approach usually includes a multi-component strategy to achieve these goals, e.g., by focusing on demand-, harm- and supply-reduction strategies [8], one common and arguably important component relates to altering social norms and attitudes related to drinking [9, 10]. Several studies have examined the effect of community interventions on youth alcohol consumption across different international contexts, including for example Australia [9], Canada [11], Iceland [12], the Netherlands [13], Sweden [14], and the USA [15]. Here, a strong focus has been directed towards changing social norms and attitudes relating to underage drinking among adolescents [9, 11, 14]. 
Strategies have included youth social skills training [11, 14], information and communication with youths, parents and other community actors regarding the risks and harms of underage drinking [9, 11, 13, 14], mass media campaigns to increase public awareness of issues relating to underage drinking [9, 13, 14, 15], and measures targeting youth access to alcohol [14, 15]. In some of these studies it has been found that the intervention had an impact on youth alcohol consumption [12, 13], whereas other studies found no or only minor impact [11, 14, 15]. According to systematic reviews and meta-analyses, the rather mixed findings in previous research could be a consequence of specific components included in these interventions not being evidence-based [8, 16]. Another shortcoming discussed in relation to community interventions involves methodological problems: local organizations initiating the intervention themselves, for example, may leave researchers with no options other than ex post facto research designs [8, 13]. Despite a growing number of studies on the effect of community interventions on youth alcohol consumption, the current knowledge base would benefit from additional research using robust methodological designs, including prospective evaluations. One community intervention which has recently received attention in Sweden is the Öckerö Method. This is a community-based alcohol and drug prevention method which was developed in Sweden at the beginning of the 2000s and is used in approximately 30 municipalities in Sweden and Finland. The method is designed for use in relatively small communities and may be described as employing a whole-of-community approach with the goal of delaying the onset of alcohol use and reducing alcohol consumption among youths by strengthening restrictive attitudes and approaches to youth alcohol consumption among parents and other adults. The aim of this study is to evaluate the effectiveness of the Öckerö Method.
More specifically, we will focus on two research questions: (1) Is it possible to identify effects of the Öckerö Method on youths' alcohol consumption and drunkenness? (2) Is it possible to identify effects of the Öckerö Method on parental attitudes towards alcohol consumption and drunkenness, based on the youths' perceptions? The Öckerö Method is a community intervention that aims to change social norms of adolescents with regards to alcohol consumption, by providing information to parents, other adults, local associations and local media, with the intent of influencing their attitudes towards alcohol consumption by adolescents. The method is implemented by local prevention workers, and is followed up by means of self-report surveys that are conducted once each year with adolescents in secondary school. The results from these surveys are reported to parents, school staff, social services administrations and the public. The results are also employed continuously in the work conducted in the intervention municipalities. In this way, parents, responsible agencies and other local actors are given continuous information on the alcohol and drug situation among the municipality's secondary school youth. The different activities used in the intervention are presented in detail below: Information at school parent meetings in grades 7, 8 and 9. Information is provided at parents' meetings once per year, directly subsequent to the completion of the self-report survey. 
The information provided at these parents' meetings includes (1) up to date information on alcohol, tobacco and drug use among students in the municipality (e.g., the prevalence of alcohol consumption and drunkenness, what types of beverages the adolescents usually drink and how they get hold of the alcohol) (2) information on risks associated with youth drinking in the form of risks for immediate (acute) harms, risks for long-term harms (e.g., that an early alcohol debut is associated with an increased risk of developing alcohol problems), and the links between alcohol and other drugs, and (3) advice to parents on what they can do to prevent and discourage their children from drinking alcohol. Examples of such advice include (a) talk to your adolescent about alcohol, establish restrictive rules and ensure that the adolescent knows that these rules are protective measures, (b) be informed about your adolescent's whereabouts, activities, and friendships, (c) prepare your adolescent for peer pressure (d) make sure that one parent is always awake if the adolescent comes home late. The parents are also informed that it is possible to teach their adolescents how to drink responsibly without allowing them to drink. Finally, the parents are encouraged to talk to the other class-parents and make it clear that they want to be contacted if another parent notices that their adolescent has been drinking. Newsletters to parents and other adults. Newsletters are sent via e-mail 3–6 times per year to parents and to other adults who register to receive them. These newsletters present results and more in-depth analyses from the self-report surveys, and other forms of up to date information on the alcohol and drug situation among youths in the municipality, for example information in connection with public holidays and similar occasions that may be associated with partying among young people. 
The information contained in the newsletters is also communicated via the Facebook accounts of the local drug prevention coordinators.

Information work directed at the local community. Information is provided to sports clubs and other associations in the local community, mainly through meetings with youth leaders. The objective here is to influence these clubs and associations to ensure that their youth activities are always alcohol free. The contacts with clubs and associations also involve providing information from the local drug self-report survey, similar to the information provided at parent meetings in schools.

Information via local media. Information is also disseminated via local news media outlets. This information is normally disseminated in connection with the annual self-report surveys, but can also be communicated at other times, for example in connection with public holidays. The main aim of this work is to implement public health education campaigns about the harms of risky drinking via local media outlets. Press releases to local radio stations and local newspapers include information on the local alcohol and drug situation among youths, and on local prevention work.

For a summary of the interventions see Table 1.

Table 1. Summary of interventions

Design, sample and matching

To examine the effect of the Öckerö Method, this study employs a prospective quasi-experimental research design. Eight small municipalities located in the southern region of Sweden (Skåne) have been sampled. Skåne was chosen as the evaluation area in part because the Öckerö Method has not previously been implemented in Skåne, but also because the level of alcohol consumption among youth in Skåne lies approximately 10–20% above the national average for Sweden [17].
The eight municipalities were included in the sample because, according to the regional coordinators for drug and alcohol prevention at the County Administration Board, they needed to develop their drug prevention work and were not working with any of the components included in the Öckerö Method. The sampled municipalities were paired on the basis of a number of matching variables, including average school results, average educational level within the municipality, the economic situation of households within the municipality and the proportion of municipal residents of non-Swedish background. One municipality in each pair was then randomly assigned to either intervention (Öckerö Method) or control conditions (treatment as usual). Data on the adolescents' alcohol use and parental attitudes towards alcohol have been collected anonymously by means of an annual self-report survey among the approximately 3500 secondary school students who are enrolled each year in the municipalities' schools (7th through 9th grade). These students are distributed across 17 schools in the eight municipalities included in the study (nine in the intervention municipalities, eight in the control municipalities). The data collection was administered by project assistants who visited the schools in the participating municipalities once a year during the period 2016–2019 at the beginning of the autumn term. The baseline survey was conducted between August 17 and September 1, 2016, and the follow-up surveys were conducted during the same period during the years 2017–2019. Since the study was conducted in the entire secondary schools (grades 7–9) each year, it is possible to use different study designs. In the current study we will conduct longitudinal comparisons of the intervention and control groups, following a cohort of youths who were in grade 7 in 2016 during the period 2016—2018 (from grade 7 to grade 9). A sensitivity analysis is also conducted for the subsequent grade 7 cohort, i.e. 
youths who were in grade 7 in 2017, who are followed until 2019 (when they were in grade 9). The level of external missing data was 10–12% per year, giving a total of 12,486 completed questionnaires. This represents a response rate of 88.2%, with no marked difference between the intervention and control municipalities. In this article we analyze data from the first cohort, i.e. those who were in grade 7 in 2016, a group comprising approximately 1000 participants per year, giving us a total of 3035 observations for the period 2016–2018. Adolescents who moved to the municipalities during the study period were identified through a screening question and were excluded from the analyses. The sample is presented in more detail in Table 2. For a flow diagram of the participants and inclusion, see Supplemental appendix.

Table 2. Participants in control and intervention groups

According to the Act concerning the Ethical Review of Research Involving Humans (Act 2003:460), parents must be informed and must consent to research that includes children under the age of 15. The study is based on the passive consent of the parents, i.e. we informed the parents that their children would be invited to participate in the study, and asked those parents who did not want their children to participate to inform us of this via e-mail, post or telephone. A non-response on the part of the parents was interpreted as indicating consent (footnote 1). All students in grades 7–9 (aged 13–15 at the start of the autumn term) were informed about the study both verbally and in writing prior to the initiation of the data collection process. Among the students themselves, the study is based on active consent, with the participating students showing their consent by completing and sending in the questionnaire. The study has been assessed and approved by the Regional Ethics Review Board in Lund (application no. 2016/88).

Primary outcome: drinking

Three measures of drinking will be used in this study.
Alcohol consumption is measured using the following item: "Have you ever drunk alcohol (by alcohol we mean medium-strength or strong beer, cider, alcopop, wine or spirits)?" Response options: no (0), yes, 1 time (1), and yes, many times (2). We have dichotomized the variable in the following way: no or yes, one time (0) and yes, many times (1). Drunkenness past year is measured using the following item: "How many times during the past 12 months have you drunk alcohol so that you have felt intoxicated?" Response options: never (0) and 1 time or more (1). Drunkenness past month is measured using the following item: "How many times during the past month have you drunk alcohol so that you have felt intoxicated?" Response options: never (0) and 1 time or more (1).

Secondary outcome: parental attitudes towards alcohol use as perceived by the adolescents

Perceived parental attitudes towards alcohol are measured by an additive index comprising two items: For my parents, it's okay (1) if I drink alcohol (2) if I get drunk. Response options: neither agree nor disagree, somewhat agree and strongly agree (0) and strongly disagree and somewhat disagree (1). The correlation between the two dichotomized items varies between r = .42 and r = .54 over the 3 years. We combined the two dichotomized items into a single item (range 0–2). This variable was then dichotomized in the following way: 0–1 is coded as (0) and 2 is coded as (1), where the latter indicates restrictive attitudes towards alcohol.

Control variables

Gender is coded 0 for girls and 1 for boys. Born in Sweden is coded as 0 if the respondent is born abroad and 1 if the respondent is born in Sweden. Split family is coded as 0 if the respondent is living with both biological parents and 1 if this is not the case.

First, we compare differences between the control and intervention groups with regard to our control variables, using chi-square tests.
Secondly, we compare differences between control and intervention groups in relation to our primary outcome (drinking variables) and our secondary outcome (perceived parental attitudes) across the 3 years. Finally, to examine whether there is an intervention effect we estimate a Difference-in-Difference (DD) model. The DD model is a quasi-experimental design and has been used in studies with designs similar to the present study [18], and more widely in public health policy research when randomized controlled trials are not applicable [19]. The basic idea behind the DD model is to compare the change in any given outcome in an intervention group before and after a hypothesized intervention is introduced while accounting for any concurrent change in a control group not receiving that particular intervention [19, 20, 21]. As the outcome variables are binary, we decided to estimate our model using a Linear Probability Model (LPM) [22, 23]. The LPM is described in eq. (1), where Yit represents the four different outcome variables: alcohol consumption, past-year drunkenness, past-month drunkenness and parental attitudes toward alcohol for adolescent i in grade t:
$$ {Y}_{it}={\beta}_0+{\beta}_1{Intervention}_i+{\beta}_2{Post}_t+{\beta}_3\left({Intervention}_i\times {Post}_t\right)+{\beta}_4{X}_{it}+{\varepsilon}_{it} $$
Intervention is a dummy variable indicating whether an adolescent attends an intervention community school (equal to 1 if so, otherwise 0). Post (time) is a dummy variable for post-treatment data (equal to 1 for grade 8 or 9, and equal to 0 for grade 7). The interaction term Intervention × Post is the causal effect of the method on the outcome variables, and β3 is the coefficient of main interest. This variable will indicate whether there are any differences in the outcome variables between the intervention and control groups, i.e. this is our measure of the effect of the intervention.
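With a single pre-period and a single post-period and no covariates, the interaction coefficient β3 in eq. (1) reduces to a simple "difference of differences" of group means. A minimal numerical sketch in Python follows; the cell proportions are invented toy numbers for illustration, not the study's data.

```python
# Toy illustration of the Difference-in-Difference (DD) logic behind eq. (1).
# With one pre- and one post-period and no covariates, the OLS estimate of the
# interaction coefficient beta3 equals the difference of before-after changes
# between the intervention and control groups.

def dd_estimate(control_pre, control_post, treat_pre, treat_post):
    """DD effect: change in the intervention group minus change in the control group."""
    return (treat_post - treat_pre) - (control_post - control_pre)

# Hypothetical proportions reporting past-year drunkenness (pre vs. post):
control_pre, control_post = 0.05, 0.20   # control group rises by 15 percentage points
treat_pre, treat_post = 0.05, 0.21       # intervention group rises by 16 percentage points

effect = dd_estimate(control_pre, control_post, treat_pre, treat_post)
print(round(effect, 3))   # 0.01: the extra rise attributable to the intervention
```

In the article's models, β3 is instead estimated jointly with the control variables by fitting the LPM on individual-level data with school-clustered standard errors; the sketch above only isolates the point-estimate logic.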
Finally, Xit is a vector of individual-specific control variables and εit is the error term. The models have been estimated in the following stages. Two DD models were estimated for each of our four outcome variables. In Model 1 we included the intervention variable, the post variable and the intervention × post interaction variable. In Model 2 we adjusted for our three control variables: gender, born in Sweden and split family. As the data are based on respondents who are clustered in schools, robust clustered standard errors are presented for the LPM models. The use of robust clustered standard errors takes account of the fact that the observations may be correlated within schools (footnote 2). All analyses were conducted in Stata/SE version 13.1. One assumption for the DD analysis is that the intervention and control groups follow a common trend in relation to the studied outcome prior to the intervention [21]. Since the current study only includes one pre-intervention time period, we are not able to examine pre-intervention trends among adolescents in either of the groups. However, since intervention and control municipalities have been matched pairwise, based on a number of relevant variables, and were then randomly assigned to intervention or control conditions, we have no reason to suspect different pre-intervention trends. Comparisons between control and intervention groups at baseline for control variables (i.e. gender, ethnicity and living in a split family) are made to assess the possibility of different trajectories in the two groups prior to the intervention. Table 3 presents descriptive statistics for the control variables over baseline, Time 1 and Time 2. The results at baseline show no significant differences in control variables between the intervention and control groups for gender (51.9% cf. 49.9% boys), birthplace (88.7% cf. 88.6% born in Sweden), and family structure (29.1% cf. 28.5% living in a split family).
The results are stable over Time 1 and Time 2.

Table 3. Descriptive statistics for the control variables

Programme effects on drinking

Figure 1 presents the three-year trend for alcohol consumption, past-year and past-month drunkenness for control and intervention groups. At the baseline, 9.6% of the adolescents in the control group reported alcohol consumption, compared with 8.1% for the intervention group. Thereafter there was an increase in alcohol consumption at time 1 and time 2, with a similar gradient across the control and intervention groups. For past-year and past-month drunkenness, the control and intervention groups reported drunkenness on a similar level at baseline. In the following 2 years, the proportion of adolescents reporting that they had been drunk during the past-year and past-month was also similar for the control group and the intervention group.

Fig. 1. Trend in proportion reporting alcohol consumption (a), past-year drunkenness (b), and past-month drunkenness (c) in the control and intervention groups

Results from the DD model are presented in Table 4. In the first model, our intervention variable is not significantly associated with our drinking measures. The results also show that our post variable is significant and positively associated with all three drinking measures. This indicates that our three measures of alcohol use increase over time and that this increase is significant for both the control and the intervention group. Finally, our interaction term intervention × post is not significantly associated with our drinking measures. This indicates that the program has not had any significant effect over time. All these results are stable after adjusting for gender, being born in Sweden and living in a split family, as shown in Model 2.
Table 4. Difference-in-Difference models, LPM estimates for drinking

Programme effects on parental attitudes toward alcohol use as perceived by their adolescents

Figure 2 shows the trend for perceived parental attitudes towards alcohol use, as reported by the adolescents. The results show that at baseline, i.e. in grade 7, the proportion of parents who thought it was not okay to use alcohol and to be drunk was very high, after which we can see a decrease over time. This means that the older the children become, the higher the proportion of parents with a less restrictive approach to drinking. The pattern is similar for both the control and the intervention group for the first two years, whereas the lines separate somewhat between time 1 and time 2.

Fig. 2. Trend in proportion of parents with restrictive attitudes towards alcohol use as perceived by the adolescents in the control and intervention groups

Finally, Table 5 presents the results from the DD model focused on parental attitudes towards alcohol use. In the first model, only our post variable is significant and positively associated with parental attitudes towards alcohol use. This indicates that parents become less restrictive about alcohol use over time and that this change is significant for both the control and the intervention group. The interaction term intervention × post is not significantly associated with parental attitudes towards alcohol. This indicates that the program has not had any effect on the parents' attitudes over time. The results are stable after adjusting for the three control variables in Model 2.

Table 5. Difference-in-Difference models, LPM estimates for parental attitudes towards alcohol use as perceived by the adolescents

Sensitivity analyses

We ran a number of alternative models in order to test the robustness of our findings. First, we repeated our series of models using logistic regression and focusing on estimates of marginal effects.
The results followed a pattern similar to that obtained using LPM. Second, we estimated our LPM models for girls and boys, with the results showing no indication of an effect of the interaction term. Third, analyses have also been estimated using the second wave cohort, i.e. those adolescents who started grade 7 in 2017 and went on to grade 9 in 2019. In these additional models, we treated the year 2017 as the baseline. Although this group may have been affected by the intervention to some extent at baseline, we wanted to examine whether the results are also stable in relation to this cohort. The results show no indication of an intervention effect. Fourth, we also estimated comparisons between baseline vs. Time 1, baseline vs. Time 2 and Time 1 vs. Time 2, and the results are the same as those presented in Tables 3 and 4. Fifth, the project's research design also allowed us to compare the intervention and control groups by year group, i.e. following the trends within each grade (7, 8, and 9) over a period of 4 years, 2016–2019. The trends were very similar for all grades.

The aim of this study was to conduct an independent evaluation of the effectiveness of the community intervention known as the Öckerö Method. Using a prospective quasi-experimental design, we examined two questions: First, is it possible to identify effects on youths' alcohol consumption and drunkenness? Second, is it possible to identify effects on parental attitudes towards alcohol consumption and drunkenness, based on the youths' perceptions? The empirical results from our Difference-in-Difference analyses are clear. We found no evidence that the intervention had any effect on the level of either young people's alcohol consumption or their past-year or past-month drunkenness, nor on their parents' perceived attitudes toward adolescent drinking. A number of sensitivity models were also estimated, producing stable results; no significant effect of the program was found.
Our finding of no effects is in line with the results of a number of previous studies that have been unable to identify effects of community interventions in relation to alcohol use [10, 18, 26], although some other studies have found empirical support for an effect of broader community interventions in relation to alcohol use [12, 13, 27]. There are two possible interpretations of these results. First, the method may not have the expected effect (by comparison with municipalities that have not implemented the method), or may not be sufficient to produce such an effect, i.e., some parts of the method may work while other parts are not effective, or it is possible that none of the various interventions included in the method produces an effect. Since we lack dose-response measures, however, we are unable to say which of these is the case in this study. Second, the follow-up period may be too short, and the effect of the intervention may not become measurable until later. The importance of having an observation period that is sufficiently long has been discussed in the research literature, particularly in relation to community interventions [13, 28]. Although this study employs a well-founded research design with a large-scale sample and low levels of non-response, there are a few limitations that need to be addressed. Firstly, this is a quasi-experimental study which lacks randomization at the school and individual level (RCT). Although a number of community intervention studies are based on quasi-experimental designs [6, 8], further evaluations of the Öckerö Method need to use an RCT design, such as a cluster RCT [8, 29]. Secondly, in this study we do not examine how the different components of the community intervention work in isolation; that would be something for further research to examine.
Thirdly, parental attitudes are described by the youths, whereas other studies have also included data collected from the parents themselves [30, 31]. This is the first evaluation of this method, and we have found no evidence that the intervention had any effect on the level of either young people's alcohol consumption or their past-year or past-month drunkenness, nor on their parents' perceived attitudes toward adolescent drinking. A possible improvement would be to employ a follow-up period that is longer than the three-year period employed in this study. Finally, the different components of the method need to be revised, and thereafter, more systematic and formal evaluations are needed. The datasets used in the current study are not publicly available due to restrictions made by the Regional Ethical Review Board in Lund, Sweden, but are available from the corresponding author on reasonable request. A total of 15 parents contacted the researchers to say that their children would not be participating in the study. Since the number of clusters is rather small [21, 24], we also estimated the models using cluster bootstrap standard errors, which give the same results as those published in the results section [24, 25]. DD: Difference-in-Difference LPM: Linear Probability Model AME: Average marginal effects Babor T, Caetano R, Casswell S, Edwards G, Giesbrecht N, Graham K, et al. Alcohol: no ordinary commodity. Oxford: Oxford University Press; 2010. Emmers E, Bekkering GE, Hannes K. Prevention of alcohol and drug misuse in adolescents: an overview of systematic reviews. Nordic Stud Alcohol Drugs. 2015;32(2):183–98. https://doi.org/10.1515/nsad-2015-0019. Room R, Babor T, Rehm J. Alcohol and public health. Lancet. 2005;365(9458):519–30. https://doi.org/10.1016/S0140-6736(05)17870-2. Viner R. Co-occurrence of adolescent health risk behaviors and outcomes in adult life: findings from a national birth cohort. J Adolesc Health. 2005;36(2):98–9.
https://doi.org/10.1016/j.jadohealth.2004.11.012. Viner RM, Taylor B. Adult outcomes of binge drinking in adolescence: findings from a UK national birth cohort. J Epidemiol Community Health. 2007;61(10):902–7. https://doi.org/10.1136/jech.2005.038117. McCambridge J, McAlaney J, Rowe R. Adult consequences of late adolescent alcohol consumption: a systematic review of cohort studies. PLoS Med. 2011;8:1–13. Czech S, Shakeshaft AP, Breen C, Sanson-Fisher RW. Whole-of community approaches to reducing alcohol-related harm: what do communities think? J Public Health. 2010;18(6):543–51. https://doi.org/10.1007/s10389-010-0339-5. Stockings E, Bartlem K, Hall A, Hodder R, Gilligan C, Wiggers J, et al. Whole-of-community interventions to reduce population-level harms arising from alcohol and other drug use: a systematic review and meta-analysis. Addiction. 2018;113(11):1984–2018. https://doi.org/10.1111/add.14277. Jones SC, Andrews K, Francis K. Combining social norms and social marketing to address underage drinking: development and process evaluation of a whole-of-community intervention. PLoS One. 2017;12(1):e0169872. https://doi.org/10.1371/journal.pone.0169872. Komro KA, Perry CL, Veblen-Mortenson S, Farbakhsh K, Toomey TL, Stigler MH, et al. Outcomes from a randomized controlled trial of a multi-component alcohol use preventive intervention for urban youth: project Northland Chicago. Addiction. 2008;103(4):606–18. https://doi.org/10.1111/j.1360-0443.2007.02110.x. Dedobbeleer N, Desjardins S. Outcomes of an ecological and participatory approach to prevent alcohol and other drug "abuse" among multiethnic adolescents. Subst Use Misuse. 2001;36(13):1959–91. https://doi.org/10.1081/JA-100108434. Kristjansson AL, James JE, Allegrante JP, Sigfusdottir ID, Helgason AR. Adolescent substance use, parental monitoring, and leisure-time activities: 12-year outcomes of primary prevention in Iceland. Prev Med. 2010;51(2):168–71. https://doi.org/10.1016/j.ypmed.2010.05.001. 
Jansen SC, Haveman-Nies A, Bos-Oude Groeniger I, Izeboud C, de Rover C, van't Veer P. Effectiveness of a Dutch community-based alcohol intervention: changes in alcohol use of adolescents after 1 and 5 years. Drug Alcohol Depend. 2016;159:125–32. https://doi.org/10.1016/j.drugalcdep.2015.11.032. Hallgren M, Andreasson S. The Swedish six community alcohol and drug prevention trial: effects on youth drinking. Drug Alcohol Rev. 2013;32(5):504–11. https://doi.org/10.1111/dar.12057. Flewelling RL, Grube JB, Paschall MJ, Biglan A, Kraft A, Black C, et al. Reducing youth access to alcohol: findings from a community-based randomized trial. Am J Community Psychol. 2013;51(1-2):264–77. https://doi.org/10.1007/s10464-012-9529-3. Das JK, Salam RA, Arshad A, Finkelstein Y, Bhutta ZA. Interventions for adolescent substance abuse: an overview of systematic reviews. J Adolesc Health. 2016;59(4):S61–75. https://doi.org/10.1016/j.jadohealth.2016.06.021. CAN. Skolelevers drogvanor 2018. CAN-rapport 178. Stockholm: Centralförbundet för alkohol- och narkotikaupplysning; 2018. Beckman L, Svensson M, Geidne S, Eriksson C. Effects on alcohol use of a Swedish based prevention program for early adolescents: a longitudinal study. BMC Public Health. 2017;17(1):2. https://doi.org/10.1186/s12889-016-3947-3. Wing C, Simon K, Bello-Gomez RA. Designing difference in difference studies: best practices for public health policy research. Annu Rev Public Health. 2018;39(1):453–69. https://doi.org/10.1146/annurev-publhealth-040617-013507. Abadie A. Semiparametric difference-in-differences estimators. Rev Econ Stud. 2005;72(1):1–19. https://doi.org/10.1111/0034-6527.00321. Angrist JD, Pischke JS. Mostly harmless econometrics: an empiricist's companion. Princeton, NJ: Princeton university press; 2008. https://doi.org/10.2307/j.ctvcm4j72. Mood C. Logistic regression: why we cannot do what we think we can do, and what we can do about it. Eur Sociol Rev. 2010;26(1):67–82. https://doi.org/10.1093/esr/jcp006. 
Breen R, Karlson KB, Holm A. Interpreting and understanding logits, probits, and other nonlinear probability models. Annu Rev Sociol. 2018;44(1):39–54. https://doi.org/10.1146/annurev-soc-073117-041429. Cameron AC, Miller DL. A practitioner's guide to cluster-robust inference. J Hum Resour. 2015;50(2):317–72. https://doi.org/10.3368/jhr.50.2.317. Cameron AC, Trivedi PK. Microeconometrics using Stata. College Station: Stata Press; 2010. Hawkins JD, Brown EC, Oesterle S, Arthur MW, Abbott RD, Catalano RF. Early effects of communities that care on targeted risks and initiation of delinquent behavior and substance use. J Adolesc Health. 2008;43(1):15–22. https://doi.org/10.1016/j.jadohealth.2008.01.022. Bagnardi V, Sorini E, Disalvatore D, Assi V, Corrao G, De Stefani R, et al. 'Alcohol, less is better' project: outcomes of an Italian community-based prevention programme on reducing per-capita alcohol consumption. Addiction. 2011;106(1):102–10. https://doi.org/10.1111/j.1360-0443.2010.03105.x. Farrington DP, Hawkins DJ. The need for long-term follow-ups of delinquency prevention experiments. JAMA Netw Open. 2019;2(3):e190780. https://doi.org/10.1001/jamanetworkopen.2019.0780. Shakeshaft A, Doran C, Petrie D, Breen C, Havard A, Abudeen A, et al. The effectiveness of community action in reducing risky alcohol consumption and harm: a cluster randomised controlled trial. PLoS Med. 2014;11(3):e1001617. https://doi.org/10.1371/journal.pmed.1001617. Koutakis N, Stattin H, Kerr M. Reducing youth alcohol drinking through a parent-targeted intervention: the Örebro prevention program. Addiction. 2008;103(10):1629–37. https://doi.org/10.1111/j.1360-0443.2008.02326.x. Tael-Öeren M, Naughton F, Sutton S. A parent-oriented alcohol prevention program "Effekt" had no impact on adolescents' alcohol use: findings from a cluster-randomized controlled trial in Estonia. Drug Alcohol Depend. 2019;194:279–87. https://doi.org/10.1016/j.drugalcdep.2018.10.024.
This study was supported by grants from the Public Health Agency of Sweden (02916–2015), the County Administrative Board of Skåne and Systembolaget (FO 2019–0048). The Public Health Agency of Sweden supported preparations for the study and the baseline measures. The County Administrative Board of Skåne supported the data collection for year 2 and year 3. Systembolaget supported the project's final stage, i.e. the write-up of the evaluation. The funders were not involved in the design of the study, the data collection, the analysis or interpretation of the data, or the writing of the manuscript. Open Access funding provided by Malmö University. Department of Criminology, Malmö University, 205 06, Malmö, Sweden: Robert Svensson & Karl Kronkvist. Department of Social Work, Malmö University, 205 06, Malmö, Sweden: Björn Johnson. Design of the study: RS and BJ. Conducted statistical analyses: RS. Wrote the first draft of the manuscript: RS. Data interpretation and revisions of the manuscript: RS, BJ, KK. Read and approved the final version of the manuscript: RS, BJ, KK. Correspondence to Robert Svensson. According to the Act concerning the Ethical Review of Research Involving Humans (Act 2003:460), parents must be informed and must consent to research that includes children under the age of 15. The study is based on the passive consent of the parents, i.e. we informed the parents with a letter that their children would be invited to participate in the study, and asked those parents who did not want their children to participate to inform us of this via e-mail, post or telephone. All students were informed about the study both verbally and in writing prior to the initiation of the data collection process. Among the students themselves, the study is based on active consent, with the participating students showing their consent by completing and sending in the questionnaire.
The study has been assessed and approved by the Regional Ethics Review Board in Lund (application no. 2016/88). The Regional Ethics Review Board in Lund approved the opt-out consent process. Svensson, R., Johnson, B. & Kronkvist, K. A community intervention to reduce alcohol consumption and drunkenness among adolescents in Sweden: a quasi-experiment. BMC Public Health 21, 764 (2021). https://doi.org/10.1186/s12889-021-10755-3 The Öckerö method
Hypothesis testing via separate inference for each group and then combining

Suppose there are two groups, A and B, and we are interested in inferring a certain parameter for each one and also the difference between the two parameters. Here we can take a Bayesian perspective and strive for a posterior distribution in each case. I am wondering if the following is a sound way of doing this: estimate the posterior for group A, estimate the posterior for group B, and estimate the posterior of the difference by sampling extensively from the first two posteriors and taking the difference. I am specifically unsure about this kind of divide-and-conquer approach where each group is treated separately, and then the results are combined. Usually, it is done in one take where, perhaps, a linear model is fitted with an indicator for the group membership. Let me give a simple example. Say, the outcome is binary. One can then use a Bernoulli–beta model to infer the posterior of the success probability, which will be a beta distribution for each group. As the last step, one can sample the two betas and get a posterior for the difference. hypothesis-testing bayesian Ivan

The approach you describe makes a lot of sense if you have independent priors (and likelihoods - you also get problems if observations in the two groups influence each other or are somehow correlated). A simple test for whether you really think that is to consider whether what you think about each group has nothing to do with what you see for the other group. E.g. if I show you one of the groups first, this will not change in the slightest what you expect for the other group.
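When the priors (and likelihoods) really are independent, the divide-and-conquer recipe from the question is straightforward to implement: conjugate Bernoulli–beta updates for each group, then paired Monte Carlo draws for the difference. A minimal sketch with made-up counts (nothing here comes from real data):

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical counts for the two groups (invented for illustration)
succ_a, n_a = 45, 100
succ_b, n_b = 30, 100

# Independent Beta(1, 1) priors + Bernoulli likelihoods give conjugate
# Beta posteriors for each group's success probability.
draws = 100_000
post_a = rng.beta(1 + succ_a, 1 + n_a - succ_a, size=draws)
post_b = rng.beta(1 + succ_b, 1 + n_b - succ_b, size=draws)

# Because the joint posterior factorizes, paired draws give a valid
# Monte Carlo sample from the posterior of theta_A - theta_B.
diff = post_a - post_b
prob_a_gt_b = (diff > 0).mean()               # P(theta_A > theta_B | data)
lo, hi = np.quantile(diff, [0.025, 0.975])    # 95% credible interval
```

If the prior ties the two parameters together, e.g. a prior on their log ratio as discussed below, the paired-independent-draws step is no longer valid.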
To take an example, let's assume our outcome in a randomized controlled trial is a count outcome, we assume it's a Poisson distribution, and we followed 100 patients on placebo for 1 year and saw 4000 events. A-priori, we were pretty unsure about the placebo event rate and had a marginal Gamma(0.1, 0.1) prior for both treatment groups. A-posteriori we now have a Gamma(4000.1, 100.1) posterior for the annualized event rate. Does this change what you expect for the active treatment group? If you have independent priors for the two treatment groups, it does not. This approach has problems if you are e.g. somewhat unsure about where each treatment group ends up (you could still have marginal Gamma(0.1, 0.1) priors for both treatment groups), but you assume that relative rate reductions (or increases) compared with placebo beyond 90% are pretty unlikely. In that case it might be more plausible to assume a prior such as, say, a N(0, 1) on the log-rate-ratio for treatment vs. placebo (instead of assuming a prior on the treatment group rate, although this of course implies one). In that case, your belief about what you expect to see in the treatment group changes after seeing the placebo group data (it's now the posterior for the placebo log-rate with the added noise of the N(0, 1) prior for the treatment effect). If that latter option is what you believe a-priori, then the approach you describe is not so suitable. I would guess that this is how many people in the clinical trials space think (i.e. they'd prefer to assign a prior on the expected placebo outcome and a prior on the treatment effect relative to that). Björn

$\begingroup$ That makes sense. I wasn't even considering this possibility of putting a prior on a combination of the parameters, e.g., on their log ratio. In this setting, it's clear that one would not be able to follow the steps I outlined (unless one marginalizes out the other dimension, thereby going back to the independent-priors scenario).
What throws me off balance a bit is "if I show you one of the groups first, this will not change in the slightest what you expect for the other." It feels like seeing results for one group always informs you about the other, because it is the same population. $\endgroup$ – Ivan

$\begingroup$ If you feel that one group tells you about the other, then independent priors are not right. Personally, I feel the same and would typically put priors on one group (e.g. the control group for which there's usually a lot of historical data) and then a prior on the treatment effect, as described. $\endgroup$ – Björn

I don't have the reputation points to respond as a comment to another answer, but intuitively, what you are doing is fine if you assume there is no relationship between group A and group B. How could such an assumption show up in your model? It would almost certainly be in the prior distribution. For a proportion, let's say you have 500 successes in 1000 trials in Group A, and 100 successes in 1000 trials in Group B. If you assume independent Beta(1, 1) priors (that's what the other posters mean when they say "factorizing"), then the posteriors are Beta(501, 501) for Group A and Beta(101, 901) for Group B. But you could also imagine a scenario in which you have little prior information, expect little data, and expect your two groups to be somewhat similar. Maybe it would make sense to utilize the information in the other group to inform the posterior. For instance, since we observed 50% success in Group A, and 10% success in Group B, we could say "I expected Group A to be around 40% and Group B to be around 20%", where we move our estimates towards the average of the two groups, 600 successes in 2000 trials = 30%. If your prior distributions are like this, then your estimates for theta_A and theta_B will be correlated, and you can't independently sample like you want, because you won't be properly accounting for that correlation.
I assume you didn't do this for your priors, so I don't think you will have anything to worry about. Brian Greco

$\begingroup$ Great point! Shrinkage via a multilevel setup is another prominent use case for having a prior that would not factorize. Indeed, if data is scarce, one would very much like to benefit from all data points and to see estimates pulled toward regions where there is more evidence $\endgroup$

Let $Y_A$ and $Y_B$ denote the datasets for groups $A$ and $B$. Similarly let $\theta_A$ and $\theta_B$ denote the corresponding parameters. If the joint distribution factors, \begin{equation} p(Y_A,Y_B,\theta_A,\theta_B) = p(Y_A,\theta_A)\,p(Y_B,\theta_B) , \end{equation} then the joint posterior factors, \begin{equation} p(\theta_A,\theta_B|Y_A,Y_B) = p(\theta_A|Y_A)\,p(\theta_B|Y_B) , \end{equation} where \begin{equation} p(\theta_i|Y_i) = \frac{p(Y_i|\theta_i)\,p(\theta_i)}{p(Y_i)} \end{equation} for $i \in \{A,B\}$. The joint distribution will factor if the likelihood factors, \begin{equation} p(Y_A,Y_B|\theta_A,\theta_B) = p(Y_A|\theta_A)\,p(Y_B|\theta_B) , \end{equation} and the prior factors, \begin{equation} p(\theta_A,\theta_B) = p(\theta_A)\,p(\theta_B) . \end{equation} If these assumptions are reasonable, then the proposed procedure is fine. mef

$\begingroup$ Thank you! I suppose, if the randomization was done right, these are fair assumptions to make. Would you agree? $\endgroup$

$\begingroup$ @Ivan I'm not sure exactly what "the randomization" means here, but I suspect that's right with respect to the likelihood. But for the prior it depends on whether knowing one of the parameters tells you anything about the other one before you see the data. It's often assumed that it doesn't. $\endgroup$ – mef

$\begingroup$ I am referring to a randomized controlled trial (RCT) whose execution is sound, including random treatment assignment. I miss this connection in the answer.
I would appreciate if you could add a sentence or two explaining your thoughts on the factorization of the prior of the parameters in a RCT. $\endgroup$
Stability and bifurcation on predator-prey systems with nonlocal prey competition

Discrete & Continuous Dynamical Systems - A, January 2018, 38(1): 43-62. doi: 10.3934/dcds.2018002

Shanshan Chen (1,2) and Jianshe Yu (1)

1. School of Mathematics and Information Science, Guangzhou University, Guangzhou, Guangdong, 510006, China
2. Department of Mathematics, Harbin Institute of Technology, Weihai, Shandong, 264209, China

Received: November 2016. Revised: July 2017. Published: September 2017.

Fund Project: The authors are supported by the National Natural Science Foundation of China (Nos. 11471085, 11771109)

In this paper, we investigate diffusive predator-prey systems with nonlocal intraspecific competition of prey for resources. We prove the existence and uniqueness of positive steady states when the conversion rate is large. To show the existence of complex spatiotemporal patterns, we consider the Hopf bifurcation for a spatially homogeneous kernel function, using the conversion rate as the bifurcation parameter. Our results suggest that Hopf bifurcation is more likely to occur with nonlocal competition of prey. Moreover, we find that the steady state can lose its stability when the conversion rate passes through some Hopf bifurcation value, and the bifurcating periodic solutions near such a bifurcation value can be spatially nonhomogeneous. This phenomenon is different from that for the model without nonlocal competition of prey, where the bifurcating periodic solutions are spatially homogeneous near such a bifurcation value.

Keywords: Predator-prey system, steady state, reaction-diffusion, nonlocal competition, Hopf bifurcation.

Mathematics Subject Classification: Primary: 35K57, 35B36; Secondary: 45K05.

Citation: Shanshan Chen, Jianshe Yu. Stability and bifurcation on predator-prey systems with nonlocal prey competition. Discrete & Continuous Dynamical Systems - A, 2018, 38 (1) : 43-62.
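For orientation, diffusive predator-prey models with nonlocal intraspecific prey competition are typically written in a form such as the following. This is only a representative sketch: the precise functional response, kernel and parameters studied in the paper may differ.
\begin{equation*}
\begin{aligned}
\frac{\partial u}{\partial t} &= d_1 \Delta u + u\Big(1 - \int_\Omega K(x,y)\,u(y,t)\,dy\Big) - \frac{\beta u v}{1+u},\\
\frac{\partial v}{\partial t} &= d_2 \Delta v + \gamma v\Big(\frac{u}{1+u} - \theta\Big),
\end{aligned}
\end{equation*}
where $u$ and $v$ are the prey and predator densities, $K$ is the competition kernel, and $\gamma$ is the conversion rate used as the bifurcation parameter. The spatially homogeneous choice $K(x,y) = 1/|\Omega|$ turns the local crowding term $u^2$ into $u\,\bar u$, where $\bar u$ denotes the spatial average of $u$; this is the kernel for which the abstract discusses the Hopf bifurcation analysis.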
Figure 1. The constant steady state loses its stability through Hopf bifurcation, and the solution converges to the bifurcated spatially nonhomogeneous periodic solution. Here initial values: $u(x,0)=0.3+0.1\cos^2\frac{x}{4}$, $v(x,0)=0.2+0.1\cos^2\frac{x}{2}$, $x\in[0,2\pi]$. (Upper): $\gamma=4$; (Lower): $\gamma=9$.

Figure 2. The constant steady state loses its stability through Hopf bifurcation. (Upper): $\gamma=2.7$, and the solution converges to the bifurcated spatially nonhomogeneous periodic solution. Here initial values: $u(x,0)=0.3+0.1\cos^2\frac{x}{3}$, $v(x,0)=0.2+0.1\cos^2\frac{x}{3}$, $x\in[0,1.5\pi]$. (Lower): $\gamma=6$, and the solution converges to the bifurcated spatially homogeneous periodic solution. Here initial values: $u(x,0)=0.7+0.5\cos^2\frac{x}{3}$, $v(x,0)=0.7+0.5\cos^2\frac{x}{3}$, $x\in[0,1.5\pi]$.
doi: 10.3934/dcdsb.2017209 Christian Kuehn, Thilo Gross. Nonlocal generalized models of predator-prey systems. Discrete & Continuous Dynamical Systems - B, 2013, 18 (3) : 693-720. doi: 10.3934/dcdsb.2013.18.693 Bo Li, Xiaoyan Zhang. Steady states of a Sel'kov-Schnakenberg reaction-diffusion system. Discrete & Continuous Dynamical Systems - S, 2017, 10 (5) : 1009-1023. doi: 10.3934/dcdss.2017053 PDF downloads (195) Shanshan Chen Jianshe Yu
\begin{document} \title{Counterexamples to the List Square Coloring Conjecture} \begin{abstract} The square $G^2$ of a graph $G$ is the graph defined on $V(G)$ such that two vertices $u$ and $v$ are adjacent in $G^2$ if the distance between $u$ and $v$ in $G$ is at most 2. Let $\chi(H)$ and $\chi_l(H)$ be the chromatic number and the list chromatic number of $H$, respectively. A graph $H$ is called {\em chromatic-choosable} if $\chi_l (H) = \chi(H)$. It is an interesting problem to find graphs that are chromatic-choosable. Kostochka and Woodall \cite{KW2001} conjectured that $\chi_l(G^2) = \chi(G^2)$ for every graph $G$; this is called the List Square Coloring Conjecture. In this paper, we give infinitely many counterexamples to the conjecture. Moreover, we show that the value $\chi_l(G^2) - \chi(G^2)$ can be arbitrarily large. \end{abstract} \noindent {\bf Keywords:} Square of graph, chromatic number, list chromatic number \noindent {\bf 2010 Mathematics Subject Classification: 05C15} \section{Introduction} A proper $k$-coloring $\phi: V(G) \rightarrow \{1, 2, \ldots, k \}$ of a graph $G$ is an assignment of colors to the vertices of $G$ such that any two adjacent vertices receive distinct colors. The {\em chromatic number} $\chi(G)$ of a graph $G$ is the least $k$ such that there exists a proper $k$-coloring of $G$. A list assignment $L$ is an assignment of lists of colors to vertices. A graph $G$ is said to be {\em $k$-choosable} if for every list assignment $L$ with $|L(v)| \geq k$ for all $v \in V(G)$, there exists a proper coloring $\phi$ such that $\phi(v) \in L(v)$ for every $v \in V(G)$. The least $k$ such that $G$ is $k$-choosable is called the {\it list chromatic number} $\chi_\ell(G)$ of $G$. Clearly $\chi_l(G) \geq \chi(G)$ for every graph $G$. A graph $G$ is called {\em chromatic-choosable} if $\chi_l (G) = \chi(G)$. It is an interesting problem to determine which graphs are chromatic-choosable.
There are several famous conjectures asserting that certain classes of graphs are chromatic-choosable, including the List Coloring Conjecture. Given a graph $G$, the {\em total graph} $T(G)$ of $G$ is the graph such that $V(T(G)) = V(G) \cup E(G)$, and two vertices $x$ and $y$ are adjacent in $T(G)$ if (1) $x,y\in V(G)$, and $x$ and $y$ are adjacent vertices in $G$, or (2) $x,y\in E(G)$, and $x$ and $y$ are adjacent edges in $G$, or (3) $x\in V(G)$, $y\in E(G)$, and $x$ is incident to $y$ in $G$. The {\it line graph} $L(G)$ of a graph $G$ is the graph such that $V(L(G))=E(G)$ and two vertices $x$ and $y$ are adjacent in $L(G)$ if and only if $x$ and $y$ are adjacent edges in $G$. The famous List Coloring Conjecture (also called the Edge List Coloring Conjecture) is stated as follows; it was proposed independently by Vizing, by Gupta, by Albertson and Collins, and by Bollob\'{a}s and Harris (see \cite{Toft} for details). \begin{conjecture}\label{LECC} {\rm \bf (List Coloring Conjecture)} For any graph $G$, $\chi_l(L(G)) = \chi(L(G))$. \end{conjecture} The List Coloring Conjecture has been shown to be true for several graph families, see~\cite{G1995, PW1999, W1999}. On the other hand, Borodin, Kostochka, and Woodall \cite{BKW1997} proposed the following conjecture as an analogue of the List Coloring Conjecture for total graphs. \begin{conjecture}\label{LTCC} {\rm \bf (List Total Coloring Conjecture)} For any graph $G$, $\chi_l(T(G)) = \chi(T(G))$. \end{conjecture} For a simple graph $G$, the {\it square} $G^2$ of $G$ is defined such that $V(G^2) = V(G)$ and two vertices $x$ and $y$ are adjacent in $G^2$ if and only if the distance between $x$ and $y$ in $G$ is at most 2. Kostochka and Woodall \cite{KW2001} proposed the following conjecture. \begin{conjecture} \label{LSCC} {\rm \bf (List Square Coloring Conjecture)} For any graph $G$, $\chi_l(G^2)=\chi(G^2)$. \end{conjecture} Note that the List Square Coloring Conjecture implies the List Total Coloring Conjecture.
If $H$ is the graph obtained by placing a vertex in the middle of every edge of a graph $G$, then $H^2 = T(G)$. Hence if the List Square Coloring Conjecture is true for this special class of bipartite graphs, then the List Total Coloring Conjecture is true. The List Square Coloring Conjecture has attracted a lot of attention, has been cited in many papers related to coloring problems, and has been widely believed to be true. It has been proved for several small classes of graphs. In this paper, we disprove the List Square Coloring Conjecture by showing that there exists a graph $G$ such that $\chi_l (G^2) \neq \chi(G^2)$. We show that for each prime $n \geq 3$, there exists a graph $G$ such that $G^2$ is the complete multipartite graph $K_{n*(2n-1)}$, where $K_{n*(2n-1)}$ denotes the complete multipartite graph with $(2n-1)$ partite sets in which each partite set has size $n$. Note that $\chi_l ( K_{n*(2n-1)}) > \chi( K_{n*(2n-1)})$ for every integer $n \geq 3$. Thus there exist infinitely many counterexamples to the List Square Coloring Conjecture. Moreover, we show that the gap between $\chi_l (G^2)$ and $\chi(G^2)$ can be arbitrarily large, using the property that $\chi_l ( K_{n*(2n-1)}) - \chi( K_{n*(2n-1)}) \geq n-1$ for every integer $n \geq 3$. In the next section, we first construct a graph $G$ and then show that $G^2$ is a complete multipartite graph by proving several lemmas. \section{Construction} Let $[n]$ denote $\{1,2,\ldots,n\}$. A \textit{Latin square} of order $n$ is an $n\times n$ array filled with elements of $[n]$ such that no element appears more than once in any row or any column. For a Latin square $L$ of order $n$, the element in the $i$th row and the $j$th column is denoted by $L(i,j)$. For example, $L$ in Figure \ref{Latin-square} is a Latin square of order 3, and $L(1,2)=2$, $L(1,3)=3$, and $L(3,2)=3$.
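The row and column conditions are easy to verify mechanically. The following sketch is our own illustration, not part of the paper; since Figure \ref{Latin-square} is not reproduced here, the completed square below is simply one order-3 Latin square consistent with the three quoted entries $L(1,2)=2$, $L(1,3)=3$, $L(3,2)=3$.

```python
def is_latin_square(L):
    """Check that every row and every column of the n x n array L
    is a permutation of 1..n."""
    n = len(L)
    symbols = set(range(1, n + 1))
    rows_ok = all(set(row) == symbols for row in L)
    cols_ok = all({L[i][j] for i in range(n)} == symbols for j in range(n))
    return rows_ok and cols_ok

# One order-3 Latin square consistent with the quoted entries (our choice):
L = [[1, 2, 3],
     [3, 1, 2],
     [2, 3, 1]]
print(is_latin_square(L))  # True
```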
Two Latin squares $L_1$ and $L_2$ are \textit{orthogonal} if for any $(i,j) \in [n]\times [n]$, there exists a unique $(k,\ell)\in[n]\times [n]$ such that $L_1(k,\ell)=i$ and $L_2(k,\ell)=j$. For example, $L_1$ and $L_2$ in Figure \ref{Latin-square} are orthogonal. \begin{figure} \caption{Latin squares $L, L_1$ and $L_2$ of order $3$} \label{Latin-square} \end{figure} From now on, we fix a prime number $n$ with $n\ge 3$ in this section. For $i\in [n-1]$, we define a Latin square $L_i$ of order $n$ by \begin{eqnarray} \label{eq:Latin} L_i(j,k)= j+i(k-1) \pmod{n}, \quad \mbox{ for } (j, k) \in [n] \times [n]. \end{eqnarray} Since $n$ is prime, it is easily checked (and also well known) that each $L_i$ is a Latin square of order $n$ and that $\{L_1,L_2,\ldots, L_{n-1}\}$ is a family of mutually orthogonal Latin squares of order $n$ (see page 252 in \cite{Van-Lint}). For example, in Figure~\ref{Latin}, $L_1$, $L_2$, $L_3$, and $L_4$ are Latin squares of order 5 when $n = 5$. \begin{figure} \caption{$\{L_1,L_2,L_3,L_4\}$ is a family of mutually orthogonal Latin squares of order $n$ defined in (\ref{eq:Latin}).} \label{Latin} \end{figure} Now we will construct a graph $G$ which is a counterexample to Conjecture~\ref{LSCC}. \begin{construction}\label{construction} \rm For each prime number $n \geq 3$, we construct a graph $G$ with $2n^2-n$ vertices as follows. For $1 \leq i \leq n$, let $P_i$ be the set of $n$ elements \begin{eqnarray*} P_i&=&\{ v_{i,1}, v_{i,2}, \ldots, v_{i,n}\} \end{eqnarray*} and for $1 \leq j \leq n-1$, let $Q_j$ be the set of $n$ elements \begin{eqnarray*} Q_j&=&\{ w_{j,1}, w_{j,2}, \ldots, w_{j,n}\}. \end{eqnarray*} Let $\{L_1,L_2,\ldots, L_{n-1}\}$ be the family of mutually orthogonal Latin squares of order $n$ obtained in (\ref{eq:Latin}). The graph $G$ is defined as follows. \begin{itemize} \item[] $V(G) = \left( \cup_{i =1}^{n} P_i \right) \bigcup \ \left( \cup_{j =1}^{n-1} Q_j \right) = P_1\cup \cdots \cup P_n \cup Q_1\cup \cdots \cup Q_{n-1}$.
\item[] $E(G) = E_1 \cup E_2$, where \begin{eqnarray*} E_1&=&\bigcup_{i\in [n-1]}\bigcup_{j\in [n]} \{ w_{i,j} v_{k,L_i(j,k)} : 1 \leq k \leq n \}, \\ E_2&=& \bigcup_{j\in [n]} \{ xy : x,y\in T_j, \ x\neq y\}, \end{eqnarray*} with \[T_j=\{ v_{1,j}, v_{2,j}, \ldots, v_{n,j}\} \quad \mbox{for }1 \leq j \leq n.\] \end{itemize} \end{construction} That is, for each $i\in [n]$, $T_i$ is a clique of size $n$ in $G$, and $T_1$, $T_2$, \ldots, $T_n$ are mutually vertex-disjoint. For $i\in [n-1]$ and $j\in [n]$, \begin{eqnarray}\label{Neighbor} N_G(w_{i,j})&=&\{ v_{1,L_i(j,1)} ,v_{2,L_i(j,2)}, \ldots, v_{n,L_i(j,n)} \}, \end{eqnarray} which is obtained by reading the $j$th row of the Latin square $L_i$ defined in~(\ref{eq:Latin}). See Figure~\ref{fig1} for an illustration of the case when $n=3$. \begin{figure} \caption{The graph $G$ and Latin squares $L_1$ and $L_2$ defined in (\ref{eq:Latin}) when $n=3$. In $N_G(w_{i,j})$, the bold subscripts are the $j$th row of the Latin square $L_i$. } \label{fig1} \end{figure} From now on, we denote by $G$ the graph defined in Construction \ref{construction}. We will show that $G^2$ is the complete multipartite graph $K_{n*(2n-1)}$ whose partite sets are $P_1$, $\ldots$, $P_n$, $Q_1$, $\ldots$, $Q_{n-1}$. For simplicity, let $P=P_1\cup \cdots \cup P_n$ and $Q=Q_1\cup \cdots \cup Q_{n-1}$. From the definition of $G$, the following lemma holds. \begin{lemma}\label{N(w)} The graph $G$ satisfies the following properties. \begin{itemize} \item[\rm(1)] For every $x\in Q$, \[|N_G(x) \cap P_k|=1, \mbox{ for each } 1 \leq k \leq n.\] \item[\rm(2)] For every $x\in Q$, \[|N_G(x) \cap T_k|=1, \mbox{ for each } 1 \leq k \leq n.\] \item[\rm(3)] If $x$ and $y$ are distinct vertices in $Q$, then \[|N_{G}(x)\cap N_G(y)|\le 1. \] In particular, if $x,y\in Q_i$ for some $i\in [n-1]$, then \[|N_{G}(x)\cap N_G(y)|=0. \] \end{itemize} \end{lemma} \begin{proof} Let $x \in Q$, and write $x=w_{i,j}$.
By (\ref{Neighbor}), it is clear that $N_G(x)$ contains exactly one vertex $v_{k, L_{i}(j,k)}$ of $P_k$ for each $k\in [n]$. Therefore (1) is true. Next, let $x$ be a vertex in $Q$, and again write $x=w_{i,j}$. For each $1\le k\le n$, there exists a unique $\ell\in [n]$ such that $L_{i}(j,\ell)=k$, since $L_{i}$ is a Latin square. Then $v_{\ell, L_{i}(j,\ell)}=v_{\ell,k} \in N_G(w_{i,j})$ by (\ref{Neighbor}). By the definition of $T_k$, $v_{\ell,k} \in T_k$. Therefore, $v_{\ell,k} \in N_G(w_{i,j})\cap T_k$. From the uniqueness of $\ell$, $|N_G(w_{i,j})\cap T_k| = 1$. Thus (2) is true. Next we will prove (3). Let $x$ and $y$ be distinct vertices in $Q$, and write $x=w_{i,j}$ and $y=w_{i',j'}$. By (\ref{Neighbor}), \begin{eqnarray*} N_G(w_{i,j}) &=&\{ v_{1,L_i(j,1)} ,v_{2,L_i(j,2)}, \ldots, v_{n,L_i(j,n)} \}, \\ N_G(w_{i',j'})&=&\{ v_{1,L_{i'}(j',1)} ,v_{2,L_{i'}(j',2)}, \ldots,v_{n,L_{i'}(j',n)} \}. \end{eqnarray*} \begin{claim}\label{claim_nw} It holds that $v_{k,L_i(j,k)}\in N_{G}(w_{i,j})\cap N_G(w_{i',j'})$ if and only if \begin{equation} \label{equation} (i-i')(k-1) \equiv j'- j \pmod{n}. \end{equation} \end{claim} \begin{proof} It is easy to see that \begin{eqnarray*} &&v_{k,L_i(j,k)}\in N_{G}(w_{i,j})\cap N_G(w_{i',j'})\\ \Leftrightarrow && v_{k,L_i(j,k)}=v_{k,L_{i'}(j',k)} \\ \Leftrightarrow && L_i(j,k)=L_{i'}(j',k) \\ \Leftrightarrow && j+i(k-1)\equiv j'+i'(k-1) \pmod{n}\\ \Leftrightarrow && (i-i')(k-1) \equiv j'- j \pmod{n}. \end{eqnarray*} \end{proof} Suppose that $v_{k,L_i(j,k)}, v_{k',L_{i'}(j',k')} \in N_{G}(w_{i,j})\cap N_G(w_{i',j'})$ for some $k,k'\in [n]$. Then by Claim~\ref{claim_nw}, \begin{eqnarray*} && (i-i')(k-1) \equiv j'- j \pmod{n},\\ && (i-i')(k'-1) \equiv j'- j \pmod{n}. \end{eqnarray*} Subtracting the two congruences gives \[ (i-i')(k-k')\equiv 0 \pmod{n}.\] First, consider the case when $i-i'\neq 0$. Then $k-k'\equiv 0 \pmod{n}$, since $n$ is prime and $0<|i-i'|<n$. Since $1\le k ,k' \le n$, we have $k=k'$.
Consequently $\{v_{k,L_i(j,k)}, v_{k',L_{i'}(j',k')} \} = \{v_{k,L_i(j,k)}, v_{k,L_{i'}(j',k)} \}$. Note that $\{v_{k,L_i(j,k)}, v_{k,L_{i'}(j',k)} \} \subset T_k$. Thus $N_{G}(w_{i,j})\cap N_G(w_{i',j'}) \subset T_k$. Therefore by (2), we have \[ |N_{G}(w_{i,j})\cap N_G(w_{i',j'})| =|N_{G}(w_{i,j})\cap N_G(w_{i',j'}) \cap T_k|\leq |N_G(w_{i,j}) \cap T_k| \le 1.\] Next, consider the case when $i = i'$. Suppose that $N_{G}(w_{i,j})\cap N_G(w_{i',j'})\neq \emptyset$. Then (\ref{equation}) holds for some $k$. Since $i=i'$, (\ref{equation}) is equivalent to $j\equiv j' \pmod{n}$. Since $1 \leq j, j' \leq n$, we have $j = j'$. This implies that $w_{i,j}= w_{i',j'}$, contradicting the assumption that $w_{i,j}\neq w_{i',j'}$. Therefore $N_{G}(w_{i,j})\cap N_G(w_{i',j'}) = \emptyset$. Thus (3) is true. \end{proof} \begin{lemma}\label{N(v)} The graph $G$ satisfies the following properties. \begin{itemize} \item[\rm (1)] If $x\in P$, then \[|N_G(x) \cap Q_k|=1, \mbox{ for each } 1 \leq k \leq n-1.\] \item[\rm (2)] If $x$ and $y$ are distinct vertices in $P$, then \[|N_{G}(x)\cap N_G(y)\cap Q|\le 1. \] \end{itemize} \end{lemma} \begin{proof} Let $x\in P$. Suppose that $|N_{G}(x) \cap Q_k|\ge 2$ for some $k \in \{1, \ldots, n-1\}$. Then there are two vertices $y,z\in Q_k$ such that $x \in N_{G}(y)\cap N_G(z)$, which implies that $N_{G}(y)\cap N_G(z)\neq \emptyset$. By (3) of Lemma~\ref{N(w)}, this is impossible, so $|N_{G}(x) \cap Q_k|\le 1$. On the other hand, by (1) of Lemma~\ref{N(w)}, the $n$ vertices of $Q_k$ have $n^2$ neighbors in $P$ altogether, counted with multiplicity; since $|P|=n^2$ and no vertex of $P$ is counted twice, every vertex of $P$ has exactly one neighbor in $Q_k$. Thus (1) is true. Let $x$ and $y$ be distinct vertices in $P$. Suppose that $|N_{G}(x)\cap N_G(y)\cap Q|\ge 2$. Then there exist two vertices $z_1,z_2\in Q$ such that $z_1,z_2 \in N_{G}(x)\cap N_G(y)\cap Q$. Then $x,y\in N_G(z_1)\cap N_G(z_2)$, and consequently $|N_G(z_1)\cap N_G(z_2)|\ge 2$. This contradicts (3) of Lemma~\ref{N(w)}. Thus (2) is true. \end{proof} \begin{lemma}\label{independent} For each $1\le i\le n$, $P_i$ is an independent set of $G^2$. Also for each $1\le i\le n-1$, $Q_i$ is an independent set of $G^2$.
\end{lemma} \begin{proof} For $1 \leq i \leq n$, let $v_{i,j}$ and $v_{i,j'}$ be any two distinct vertices in $P_i$. Suppose that $v_{i,j}$ and $v_{i,j'}$ are adjacent in $G^2$. Since $v_{i,j}$ and $v_{i,j'}$ lie in the distinct cliques $T_j$ and $T_{j'}$, they are not adjacent in $G$, so there exists a common neighbor $x$ of $v_{i,j}$ and $v_{i,j'}$. It follows that $x \in Q$ by Construction~\ref{construction}. Thus $v_{i,j}, v_{i,j'}\in N_G(x)\cap P_i$, and so $|N_G({x})\cap P_i|\ge 2$. This contradicts (1) of Lemma~\ref{N(w)}. Therefore, $P_i$ is an independent set in $G^2$. Next we will show that $Q_i$ is an independent set in $G^2$. For $1 \leq i \leq n-1$, let $w_{i,j}$ and $w_{i,j'}$ be distinct vertices in $Q_i$. Suppose that $w_{i,j}$ and $w_{i,j'}$ are adjacent in $G^2$. Then there exists a common neighbor $y$ of $w_{i,j}$ and $w_{i,j'}$, since $w_{i,j}$ and $w_{i,j'}$ are not adjacent in $G$. It follows that $y \in P$ by the construction of $G$. Thus $w_{i,j}, w_{i,j'}\in N_G(y)\cap Q_i$, and so $|N_G({y})\cap Q_i|\ge 2$. This contradicts (1) of Lemma~\ref{N(v)}. Therefore, $Q_i$ is an independent set in $G^2$. \end{proof} \begin{lemma}\label{adjacent_all} For any vertex $x\in P$ and for any vertex $y\in Q$, $x$ and $y$ are adjacent in $G^2$. \end{lemma} \begin{proof} Let $x$ and $y$ be vertices in $P$ and $Q$, respectively. Since $P$ is the disjoint union of the sets $T_1, \ldots, T_n$ defined in Construction \ref{construction}, $x\in T_k$ for some $k \in \{1, \ldots, n \}$. By (2) of Lemma~\ref{N(w)}, $N_G(y)\cap T_k \neq \emptyset$, and $T_k$ induces a complete subgraph of $G$. Therefore the distance between $x$ and $y$ in $G$ is at most 2, and $x$ is adjacent to $y$ in $G^2$. \end{proof} Now we will show that the subgraphs induced by $P$ and $Q$ in $G^2$, denoted $G^2[P]$ and $G^2[Q]$, respectively, are complete multipartite graphs. Let $K_{n*r}$ denote the complete multipartite graph with $r$ partite sets in which each partite set has size $n$.
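Construction~\ref{construction} is concrete enough to be checked by computer for a small prime. The following sketch is our own verification code, not part of the proofs: it builds $G$ for $n=3$ directly from (\ref{eq:Latin}) and confirms, anticipating Theorem~\ref{G2}, that $G^2$ is complete multipartite with partite sets $P_1,P_2,P_3,Q_1,Q_2$.

```python
from itertools import combinations

n = 3  # a prime >= 3

# Vertices: ('v', i, j) for P_i = {v_{i,1},...,v_{i,n}},
#           ('w', i, j) for Q_i = {w_{i,1},...,w_{i,n}}
P = [[('v', i, j) for j in range(1, n + 1)] for i in range(1, n + 1)]
Q = [[('w', i, j) for j in range(1, n + 1)] for i in range(1, n)]
V = [x for part in P + Q for x in part]

def L(i, j, k):
    """Latin square entry L_i(j,k) = j + i(k-1) mod n, with values in 1..n."""
    return (j + i * (k - 1) - 1) % n + 1

edges = set()
# E_1: w_{i,j} is joined to v_{k, L_i(j,k)} for each k
for i in range(1, n):
    for j in range(1, n + 1):
        for k in range(1, n + 1):
            edges.add(frozenset({('w', i, j), ('v', k, L(i, j, k))}))
# E_2: each column T_j = {v_{1,j}, ..., v_{n,j}} is a clique
for j in range(1, n + 1):
    for x, y in combinations([('v', i, j) for i in range(1, n + 1)], 2):
        edges.add(frozenset({x, y}))

adj = {x: set() for x in V}
for e in edges:
    x, y = tuple(e)
    adj[x].add(y)
    adj[y].add(x)

def dist_at_most_2(x, y):
    return y in adj[x] or bool(adj[x] & adj[y])

# G^2 should be complete multipartite with parts P_1..P_n, Q_1..Q_{n-1}:
# two distinct vertices are adjacent in G^2 iff they lie in different parts.
part_of = {x: idx for idx, part in enumerate(P + Q) for x in part}
ok = all((part_of[x] != part_of[y]) == dist_at_most_2(x, y)
         for x, y in combinations(V, 2))
print(ok)  # True
```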
\begin{lemma}\label{thm:G[P]} The subgraph induced by $P$ in $G^2$, denoted $G^2[P]$, is $K_{n*n}$ whose partite sets are $P_1$, $P_2$, \ldots, $P_n$. \end{lemma} \begin{proof} Let \[\mathcal{F}=\{ N_G(w) : w\in Q \}\cup \{ T_1,T_2,\ldots, T_n\}.\] Note that for each $w\in Q$, the subgraph induced by $N_G(w)$ in $G^2$ is a complete graph and $N_G(w) \subset P$. Also, each $T_i$ is a clique in $G^2$ and $T_i \subset P$ by the definition of $T_i$. Therefore, $\mathcal{F}$ is a family of cliques in $G^2[P]$. We will show that for any $X,Y\in \mathcal{F}$, we have $|X\cap Y|\le 1$. By (3) of Lemma~\ref{N(w)}, for any two distinct vertices $x, y\in Q$, $|N_G(x)\cap N_G(y)|\le 1$. By the definition, $|T_i\cap T_j|=0$ for $i\neq j$. Also, by (2) of Lemma~\ref{N(w)}, $|N_G(w)\cap T_j|= 1$ for any $w$ in $Q$ and for any $1 \leq j \leq n$. Thus for any $X,Y\in \mathcal{F}$, we have $|X\cap Y|\le 1$, which implies that any two cliques of $\mathcal{F}$ are edge-disjoint. Note that $|\mathcal{F}|=|Q|+n=n(n-1)+n=n^2$. Thus $\mathcal{F}$ is a family of $n^2$ mutually edge-disjoint cliques in $G^2[P]$. As each clique of $\mathcal{F}$ is a copy of $K_n$ and $K_n$ has ${n\choose 2}$ edges, we have \[|E(G^2[P])| \ge n^2\times {n\choose 2} =n^2 \times \frac{n(n-1)}{2}.\] On the other hand, by Lemma~\ref{independent}, $G^2[P]$ has at most ${n^2 \choose 2} - n\times {n\choose 2}$ edges, since each of $P_1,P_2,\ldots,P_n$ is an independent set in $G^2$. Note that \[{n^2 \choose 2} - n\times {n\choose 2}=\frac{n^2(n^2-1)}{2}-\frac{n^2(n-1)}{2} = n^2 \times \frac{n(n-1)}{2}.\] Thus \[|E(G^2[P])| = n^2 \times \frac{n(n-1)}{2}.\] This implies that $G^2[P]$ is $K_{n*n}$ whose partite sets are $P_1$, $P_2$, \ldots, $P_n$. \end{proof} The following lemma holds by a similar argument to Lemma~\ref{thm:G[P]}. \begin{lemma}\label{thm:G[Q]} The subgraph induced by $Q$ in $G^2$, denoted $G^2[Q]$, is $K_{n*(n-1)}$ whose partite sets are $Q_1, \ldots, Q_{n-1}$.
\end{lemma} \begin{proof} Let \[\mathcal{F}=\{ N_G(v)\cap Q \mid v\in P \}.\] For each $v\in P$, the subgraph induced by $N_G(v)\cap Q$ in $G^2$ is a complete graph. Therefore, $\mathcal{F}$ is a family of cliques in $G^2[Q]$. By (2) of Lemma~\ref{N(v)}, for any two distinct vertices $x, y\in P$, we have $|N_G(x)\cap N_G(y)\cap Q|\le 1$. Therefore any two cliques of $\mathcal{F}$ are edge-disjoint. Note that $|\mathcal{F}|=|P|=n^2$. Thus $\mathcal{F}$ is a family of $n^2$ mutually edge-disjoint cliques in $G^2[Q]$. As each clique of $\mathcal{F}$ is a copy of $K_{n-1}$ and $K_{n-1}$ has ${n-1\choose 2}$ edges, we have \[|E(G^2[Q])| \ge n^2 \times {n-1\choose 2} =n^2\times\frac{(n-1)(n-2)}{2}.\] On the other hand, by Lemma~\ref{independent}, $G^2[Q]$ has at most ${n^2-n \choose 2} - (n-1)\times {n\choose 2}$ edges, since each of $Q_1$,\ldots, $Q_{n-1}$ is an independent set in $G^2$. Note that \[ {n^2-n \choose 2} - (n-1)\times {n\choose 2} =\frac{(n^2-n)(n^2-n-1)}{2} -\frac{(n-1)n(n-1)}{2} =\frac{n^2(n-1)(n-2)}{2} .\] Thus \[|E(G^2[Q])| = n^2 \times \frac{(n-1)(n-2)}{2}.\] This implies that $G^2[Q]$ is $K_{n*(n-1)}$ whose partite sets are $Q_1$,\ldots, $Q_{n-1}$. \end{proof} Now by Lemmas~\ref{independent}, ~\ref{adjacent_all}, ~\ref{thm:G[P]} and~\ref{thm:G[Q]}, we conclude that the square $G^2$ of $G$ is the complete multipartite graph $K_{n*(2n-1)}$ whose partite sets are $P_1$, $P_2$, $\ldots$, $P_n$, $Q_1$, $\ldots$, $Q_{n-1}$, which implies the following main theorem. \begin{theorem}\label{G2} For each prime $n \geq 3$, there exists a graph $G$ such that $G^2$ is the complete multipartite graph $K_{n*(2n-1)}$. \end{theorem} The following lower bound on the list chromatic number of a complete multipartite graph was obtained in \cite{Vetrik2012}.
\begin{theorem}\label{thm:Vetrik}{\rm (Theorem 4, \cite{Vetrik2012})} For a complete multipartite graph $K_{n*r}$ with $n,r \ge 2$, \[\chi_\ell (K_{n*r}) > (n-1)\lfloor\frac{2r-1}{n} \rfloor.\] \end{theorem} \begin{proof} The proof is the same as in \cite{Vetrik2012}. We include it here for the convenience of the reader. Let $A_1, \ldots, A_n$ be a family of disjoint color sets such that $||A_i| - |A_j|| \leq 1$ for each $1 \leq i,j \leq n$ and $|\bigcup_{j=1}^n A_j| = 2r -1$. Then $|A_j| \geq \lfloor \frac{2r-1}{n} \rfloor$ for each $j \in \{1, \ldots, n\}$. Define a list assignment $L$ as follows. For $1 \leq i \leq r$, let $V_i = \{v_{i1}, \ldots, v_{in} \}$ be the $i$th partite set in $K_{n*r}$. For each $v_{ik} \in V_i$, define $L(v_{ik}) = \bigcup_{j=1}^n A_j \setminus A_k$. Then $|L(x)| \geq (n-1)\lfloor \frac{2r-1}{n} \rfloor$ for each vertex $x$ in $K_{n*r}$. Note that in any coloring from these lists, at least two colors are used on each partite set $V_i$, since a color belonging to $A_k$ does not appear in the list of $v_{ik}$, so no single color can be used on all of $V_i$. As vertices in different partite sets are adjacent, at least $2r$ colors are needed to have a proper coloring from the lists, but $|\bigcup_{j=1}^n A_j| = 2r -1$. Hence $K_{n*r}$ is not $L$-colorable. This implies that $\chi_\ell (K_{n*r}) > (n-1)\lfloor\frac{2r-1}{n} \rfloor$. \end{proof} Consequently, we obtain that $\chi_{\ell}(G^2) >\chi(G^2)$ by the following theorem. \begin{theorem} \label{main-theorem} For each prime $n \geq 3$, if $G$ is the graph defined in Construction~\ref{construction}, then \[ \chi_{\ell}(G^2) - \chi(G^2) \geq n-1. \] \end{theorem} \begin{proof} It is clear that $\chi(G^2)=2n-1$ by Theorem~\ref{G2}. On the other hand, by Theorems~\ref{G2} and~\ref{thm:Vetrik}, \begin{eqnarray*} \chi_\ell(G^2)&=&\chi_{\ell} (K_{n*(2n-1)}) > (n-1)\lfloor\frac{4n-3}{n} \rfloor \geq 3(n-1), \end{eqnarray*} when $n\ge 3$.
Thus for $n \geq 3$, \[ \chi_\ell(G^2) - \chi(G^2) \geq n-1.\] \end{proof} \begin{remark} \rm Since there are infinitely many primes, Theorem~\ref{main-theorem} shows that the gap $\chi_l(G^2) - \chi(G^2)$ can be arbitrarily large. \end{remark} \end{document}
Phase-field models on graphs Phase-field models on graphs are a discrete analogue of phase-field models, defined on a graph. They are used in image analysis (for feature identification) and for the segmentation of social networks. Graph Ginzburg–Landau functional For a graph with vertices V and edge weights $\omega _{ij}$, the graph Ginzburg–Landau functional of a map $u:V\to \mathbb {R} $ is given by $F_{\varepsilon }(u)={\frac {\varepsilon }{2}}\sum _{i,j\in V}\omega _{ij}(u_{i}-u_{j})^{2}+{\frac {1}{\varepsilon }}\sum _{i\in V}W(u_{i}),$ where W is a double-well potential, for example the quartic potential $W(x)=x^{2}(1-x)^{2}$, whose wells lie at 0 and 1. The graph Ginzburg–Landau functional was introduced by Bertozzi and Flenner.[1] In analogy to continuum phase-field models, where regions with $u$ close to 0 or 1 are models for two phases of the material, vertices can be classified into those with $u_{j}$ close to 0 or close to 1, and for small $\varepsilon $, minimisers of $F_{\varepsilon }$ will satisfy that $u_{j}$ is close to 0 or 1 for most nodes, splitting the nodes into two classes. Graph Allen–Cahn equation To effectively minimise $F_{\varepsilon }$, a natural approach is gradient flow (steepest descent). This means introducing an artificial time parameter and solving the graph version of the Allen–Cahn equation, ${\frac {d}{dt}}u_{j}=-\varepsilon (\Delta u)_{j}-{\frac {1}{\varepsilon }}W'(u_{j}),$ where $\Delta $ is the graph Laplacian. The ordinary continuum Allen–Cahn equation and the graph Allen–Cahn equation are natural counterparts, obtained by replacing ordinary calculus with calculus on graphs. A convergence result for a numerical graph Allen–Cahn scheme has been established by Luo and Bertozzi.[2] It is also possible to adapt other computational schemes for mean curvature flow, for example thresholding schemes like the Merriman–Bence–Osher scheme, to the graph setting, with analogous results.[3] See also • Graph cuts in computer vision References 1. Bertozzi, A.; Flenner, A.
(2012-01-01). "Diffuse Interface Models on Graphs for Classification of High Dimensional Data". Multiscale Modeling & Simulation. 10 (3): 1090–1118. doi:10.1137/11083109X. ISSN 1540-3459. 2. Luo, Xiyang; Bertozzi, Andrea L. (2017-05-01). "Convergence of the Graph Allen–Cahn Scheme". Journal of Statistical Physics. 167 (3): 934–958. Bibcode:2017JSP...167..934L. doi:10.1007/s10955-017-1772-4. ISSN 1572-9613. 3. van Gennip, Yves. Graph Ginzburg–Landau: discrete dynamics, continuum limits, and applications. An overview. In Ei, S.-I.; Giga, Y.; Hamamuki, N.; Jimbo, S.; Kubo, H.; Kuroda, H.; Ozawa, T.; Sakajo, T.; Tsutaya, K. (2019-07-30). "Proceedings of 44th Sapporo Symposium on Partial Differential Equations". Hokkaido University Technical Report Series in Mathematics. 177: 89–102. doi:10.14943/89899.
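In this notation, one explicit-Euler step of the graph Allen–Cahn equation is short to implement. The sketch below is our own illustration (the weight matrix, step size, value of ε, and initial data are arbitrary choices, not taken from the cited works), using the double-well W(x) = x²(1 − x)², so that W′(x) = 2x(1 − x)(1 − 2x).

```python
import numpy as np

def allen_cahn_step(u, weights, eps, dt):
    """One explicit-Euler step of the graph Allen-Cahn flow
    du_j/dt = -eps * (Laplacian u)_j - (1/eps) * W'(u_j),
    with W(x) = x^2 (1 - x)^2, hence W'(x) = 2x(1 - x)(1 - 2x)."""
    deg = weights.sum(axis=1)
    lap = deg * u - weights @ u          # unnormalised graph Laplacian applied to u
    dW = 2.0 * u * (1.0 - u) * (1.0 - 2.0 * u)
    return u + dt * (-eps * lap - dW / eps)

# Toy example: two triangles joined by one weak edge (illustrative weights).
weights = np.zeros((6, 6))
for a, b in [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5)]:
    weights[a, b] = weights[b, a] = 1.0
weights[2, 3] = weights[3, 2] = 0.1      # weak bridge between the two clusters

u = np.array([0.9, 0.8, 0.7, 0.3, 0.2, 0.1])
for _ in range(200):
    u = allen_cahn_step(u, weights, eps=0.5, dt=0.01)
# The flow drives one cluster toward the well at 1 and the other toward 0,
# which is the two-class segmentation behaviour described above.
```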
Self-adjoint In mathematics, and more specifically in abstract algebra, an element x of a *-algebra is self-adjoint if $x^{*}=x$. A self-adjoint element is also Hermitian, though the reverse does not necessarily hold. A collection C of elements of a *-algebra is self-adjoint if it is closed under the involution operation. For example, if $x^{*}=y$ then, since $y^{*}=x^{**}=x$ in a *-algebra, the set {x,y} is a self-adjoint set even though x and y need not be self-adjoint elements. In functional analysis, a linear operator $A:H\to H$ on a Hilbert space is called self-adjoint if it is equal to its own adjoint A∗. See self-adjoint operator for a detailed discussion. If the Hilbert space is finite-dimensional and an orthonormal basis has been chosen, then the operator A is self-adjoint if and only if the matrix describing A with respect to this basis is Hermitian, i.e. if it is equal to its own conjugate transpose. Hermitian matrices are also called self-adjoint. In a dagger category, a morphism $f$ is called self-adjoint if $f=f^{\dagger }$; this is possible only for an endomorphism $f\colon a\to a$. See also • Hermitian matrix • Normal element • Symmetric matrix • Self-adjoint operator • Unitary element References • Reed, M.; Simon, B. (1972). Methods of Mathematical Physics. Vol. 2. Academic Press. • Teschl, G. (2009). Mathematical Methods in Quantum Mechanics; With Applications to Schrödinger Operators. Providence: American Mathematical Society.
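In the finite-dimensional case the criterion above is just a matrix identity. A small numerical illustration (the matrices are our own examples, not from the references):

```python
import numpy as np

def is_self_adjoint(A, tol=1e-12):
    """A finite-dimensional operator (w.r.t. an orthonormal basis) is
    self-adjoint iff its matrix equals its own conjugate transpose."""
    return np.allclose(A, A.conj().T, atol=tol)

A = np.array([[2.0, 1 - 1j],
              [1 + 1j, 3.0]])   # Hermitian: equal to its conjugate transpose
B = np.array([[0.0, 1.0],
              [-1.0, 0.0]])     # skew-symmetric, hence not self-adjoint
print(is_self_adjoint(A), is_self_adjoint(B))  # True False
```

As a consequence of self-adjointness, `np.linalg.eigvalsh(A)` returns only real eigenvalues for such a matrix.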
Dietary selenium intake based on the Chinese Food Pagoda: the influence of dietary patterns on selenium intake

Jing Wang, Linsheng Yang, Hairong Li, Yonghua Li & Binggan Wei

Selenium (Se) is essential for humans, with many critical roles in physiological and pathophysiological processes. Fish, eggs and meats are usually rich food sources of Se. To improve the nutritional status of the population, a new version of the balanced dietary pattern in the form of the Chinese Food Pagoda (2016) was proclaimed. This study aimed to evaluate the contribution of this balanced dietary pattern to daily Se intake, and to assess the Se intake status of Chinese residents under this Food Pagoda scenario. Based on the food consumption recommended in the Food Pagoda, this study collected the data on Se contents in various food composites and estimated dietary Se intakes (EITDS) in 12 provinces from the 4th China Total Diet Study. The estimated Se intakes based on the Chinese Food Pagoda (EICHFP) in 12 provinces were calculated. EITDS and EICHFP in various food groups among different regions were compared. The average EICHFP in all regions, within the range of 66.23–145.20 μg/day, was greater than the China recommended nutrient intake (RNI) (60 μg/day). None of the highest EICHFP went beyond the tolerable upper intake level of Se (400 μg/day). Animal source foods should be the primary source of daily Se intake according to the EICHFP. The average EITDS in China (88 μg/day) was in line with its range of EICHFP (81.01–124.25 μg/day), but that in half of the regions failed to reach their lowest EICHFP. Significant differences between EITDS and EICHFP were observed in cereal food, aquatic and dairy products (P < 0.05), among which Se intake from aquatic and dairy products was seriously insufficient in almost all regions. The ideal dietary pattern recommended in the Food Pagoda can meet the daily requirements of the Chinese population for Se intake to maintain optimal health.
From the perspective of the balanced diet and Se-rich sources, the consumption of aquatic products should be increased appropriately to improve the general Se intake level of the Chinese population. Selenium (Se) is an essential micronutrient for human health, with critical roles in redox homeostasis, antioxidant defense and the immune system [1, 2]. Insufficient or excessive Se intakes are linked to many acute and chronic diseases [3,4,5,6,7,8]. In particular, problems related to Se deficiency are an emerging issue for human health worldwide [9]. It is estimated that 15% of the global population suffers from Se deficiency to varying degrees [10]. China, as one of the 40 Se-deficient countries, has over 105 million people facing adverse health impacts due to Se deficiency [11, 12]. Owing to large variations in food Se, dietary Se intake varies considerably among regions, normally being consistent with the Se distribution in the environment. In China, low Se intakes are primarily found in the low-Se geographic belt running from northeast to southwest, with a mean of 27.6 μg/day; high Se intakes are observed in Se-rich areas, with a mean of 85.5 μg/day; and in some selenosis areas, the Se intake can even reach up to 1253.7 μg/day on average [12]. Considering the narrow range between the necessary and the toxic dose, an optimal daily Se intake is required to maintain public health. A reasonable diet is the crucial determinant of daily Se intake [13]. It is well known that cereals, fish, eggs and meats are the major dietary sources of Se [12, 14, 15]. In response to stronger demands for the healthy growth of the population, the Chinese government proclaimed a new version of the Dietary Guidelines for Chinese Residents in the form of the Food Pagoda (Fig. 1), based on principles of nutritional science and the current national situation [16].
Five levels of the recommended consumption corresponding to five food groups are involved in the Food Pagoda, covering the essential foods we should consume in daily life [17]. This Pagoda recommends a relatively ideal dietary pattern to improve the general nutrition of Chinese residents. However, whether it can meet the daily requirements of Se intake for the general population and achieve the optimal daily Se intake has yet to be ascertained.

Fig. 1 Food Guide Pagoda for Chinese residents [16]

In past decades in China, Se deficiency diseases, e.g. Kashin-Beck disease and Keshan disease, have been prevalent in low-Se areas, with particularly high morbidity in underdeveloped regions. Apart from low Se contents in local foods, unreasonable food consumption patterns were also considered one of the main reasons for deficient Se intake [18]. A study on dietary Se intake in the 1990s found that urban residents consumed more Se-rich foods such as meats, seafood, eggs and dairy products than rural residents, resulting in contrasting Se intakes between the two populations [18]. With the rapid growth of China's economy, food supply and diversity have increased dramatically [19]. Since the balanced diet conforming to the Chinese Food Pagoda is deemed an ideal dietary pattern to promote nutrition, it is necessary to assess the Se intake level under the scenario of this Food Pagoda, and to clarify the gap between this level and the current Se intake status of the Chinese population in different provinces. This is the first study to evaluate the Se intake of Chinese residents associated with the 2016 Chinese Food Pagoda and to discuss the influence of dietary patterns on Se intake. It will be valuable for future research on the daily optimal Se intake and also for the government in putting forward proper strategies for Se supplementation.
Therefore, this study aimed to: 1) test whether compliance with the Food Pagoda could meet the daily requirements of Se intake for Chinese residents; 2) make quantitative comparisons of the China Total Diet Study-based estimates of Se intake (EITDS) with the Food Pagoda-based estimates of Se intake (EICHFP) in different food groups.

The China TDS is a national survey to investigate the levels of various nutrients and chemical contaminants in foods and assess their dietary exposure for the Chinese population [20]. The data on food Se contents and the EITDS in China and 12 provinces used in this study were obtained directly from the results of the 4th China TDS in 2007 [21]. Therein, the analysis of food Se was conducted by the National Institute of Nutrition and Food Safety. EITDS was calculated by multiplying the determined food Se contents by the investigated food consumption data.

Food consumption survey

The 4th China TDS was carried out in 2007, with a design and experimental methods similar to the 3rd China TDS in 1990 [22]. The Chinese Centre for Disease Control and Prevention organized the food consumption survey. A multistage random cluster sampling method and a food composites approach were used in this survey. A total of 12 provinces were selected to represent the average dietary patterns of different areas of China, covering about 50% of the total Chinese population. These provinces consist of Heilongjiang (HLJ), Liaoning (LN), Hebei (HeB), Shaanxi (ShX), Ningxia (NX), Henan (HN), Shanghai (ShH), Fujian (FJ), Jiangxi (JX), Guangxi (GX), Hubei (HuB), and Sichuan (SC). Three survey sites (two rural counties and one urban city) were randomly selected in each province as food sampling sites, and 30 households were sampled randomly from each site. A total of 1080 households were covered in the survey. The food consumption pattern in each province was determined by a 3-day household dietary survey (including weighing and recording) and 24-h recalls.
The average daily consumption of each food category by a standard Chinese adult man (aged 18–45, 63 kg body weight, light physical activity) was used as the standard food consumption pattern, and was calculated from the total household food consumption [22].

Sample collection and analysis

Food samples were collected from local food markets, grocery stores and rural households in each survey site. All food items were aggregated into 12 groups, including cereals, beans, tubers, meat and poultry, eggs, aquatic products, milk and dairy products, vegetables, fruits, sugars, water and beverages, and alcohol. These samples were cooked and prepared according to local habits, and then blended to form composites with weights proportional to the average daily consumption for the province [21]. The prepared food composites were shipped to the National Institute of Nutrition and Food Safety for analysis [21]. Total Se content in the food composites was determined by inductively coupled plasma mass spectrometry (Agilent 7500a ICP-MS) after microwave digestion of 0.3–0.5 g (solid) or 4–5 mL (liquid) in a mixture of 6 mL of concentrated HNO3 and 2 mL of 30% H2O2. Reagent blanks, standard reference materials, and parallel samples were determined simultaneously to maintain the reliability of the analysis. The limit of detection for Se was defined as three times the standard deviation of the baseline value [21].

Calculation of dietary Se intake based on the Chinese Food Pagoda (EICHFP)

The ranges of EICHFP in China and 12 provinces were calculated according to the following equation [20]:
$$ {\mathrm{EI}}_{\mathrm{CHFP}}=\mathrm{C}\times \mathrm{m}, $$
where C (μg/g) is the concentration of Se in each food group determined in the 4th China TDS, including 12 food groups in 12 provinces (as listed in Table 1); m (g/d) is the consumption of the corresponding food group recommended in the Dietary Guidelines and Food Pagoda for Chinese Residents (2016) (as shown in Fig. 1).
The lower and upper limits of recommended consumption were used for calculating the lowest and highest EICHFP, respectively. In terms of Se contents in food groups, the lowest values in staple foods like cereals, beans and tubers were found in Hubei, Liaoning and Heilongjiang province, which was broadly consistent with the distribution of the low-Se belt in China [23]. It can thus be confirmed that the food Se contents determined in the TDS are reliable.

Table 1 Concentrations of Se in various food groups in China and 12 provinces (μg/g)a

Data processing and chart production were mainly performed with Microsoft Office Excel 2013 and SPSS 23.0. Coefficients of variation (CV) were calculated for the average EITDS and EICHFP in each food group. A t-test was used when comparing the difference between the average EITDS and EICHFP in various food categories.

EICHFP in China and different regions

Based on the concentrations of food Se in Table 1 and the food consumptions recommended in the Food Pagoda, the results of the EICHFP are shown in Table 2. It was observed that the average EICHFP in the 12 provinces were all greater than the China recommended nutrient intake (RNI) of Se (60 μg/day). The lowest EICHFP was also higher than the RNI in almost all regions, with the exception of Heilongjiang and Ningxia province, which might be related to the relatively low Se levels in their local food (Table 1). None of the highest EICHFP went beyond the tolerable upper intake level of Se (400 μg/day) set by the Chinese Nutrition Society [24]. Owing to the variation of Se levels in local food, the average EICHFP varied greatly among regions, ranging from 66.23 to 145.20 μg/day. The highest average EICHFP was observed in Shaanxi province, where Se levels in staple food and vegetables were the highest; the lowest was found in Heilongjiang province, where Se contents in all food groups were relatively low. It could be seen from Fig.
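The EICHFP calculation above (C × m summed over food groups, with the lower and upper recommended consumption limits giving the lowest and highest estimates) can be sketched as follows; all concentrations and ranges below are illustrative placeholders, not values from Table 1 or the Food Pagoda:

```python
# Sketch of the EI_CHFP range calculation: for each food group, multiply the
# Se concentration C (ug/g) by the recommended consumption m (g/day) and sum.
# The lower/upper consumption limits give the lowest/highest EI_CHFP.
# All values below are hypothetical, for illustration only.

se_conc = {            # C: ug of Se per g of food
    "cereals": 0.05,
    "meat":    0.15,
    "aquatic": 0.41,
    "dairy":   0.055,
}
recommended = {        # m: (lower, upper) g/day consumption limits
    "cereals": (250, 400),
    "meat":    (40, 75),
    "aquatic": (40, 75),
    "dairy":   (300, 300),
}

def ei_chfp(limit):
    """Total estimated intake in ug/day; limit 0 = lowest, 1 = highest."""
    return sum(se_conc[g] * recommended[g][limit] for g in se_conc)

print(f"EI_CHFP range: {ei_chfp(0):.1f}-{ei_chfp(1):.1f} ug/day")
```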
2 that differences in the average EICHFP among regions mainly lay in dairy products (CV = 1.05), cereals, beans and tubers (CV = 0.70), as well as vegetables and fruits (CV = 0.56). Animal source foods, including meat, eggs, aquatic and dairy products, made the highest contribution to daily Se intake in all regions according to the Food Pagoda eating patterns, ranging from 41.8 to 81.9%.

Table 2 The average EICHFP and its ranges in China and different provinces (μg/day)

Fig. 2 The average EICHFP in different food groups in 12 provinces and Chinaa. Abbreviations: HLJ Heilongjiang, LN Liaoning, HeB Hebei, ShX Shaanxi, NX Ningxia, HN Henan, ShH Shanghai, FJ Fujian, JX Jiangxi, GX Guangxi, HuB Hubei, SC Sichuan, AVG average, EICHFP estimated Se intake based on the Chinese Food Pagoda. aNumbers in parentheses are coefficients of variation in each food group; the same as below

EITDS in China and different regions

According to the survey data from the 4th China TDS, the EITDS of daily dietary Se in various food groups in the 12 provinces are presented in Fig. 3. Generally, the average EITDS for Chinese adults (88 μg/day) was slightly higher than the China RNI of Se (60 μg/day). Large geographical variation of dietary Se intake was observed among regions in China. As shown in Fig. 3, the highest EITDS was found in Shaanxi province (135.31 μg/day), at 2.3 times the RNI, followed by Shanghai (134.58 μg/day); the lowest EITDS was observed in Heilongjiang province (43.86 μg/day), followed by Liaoning (53.35 μg/day), which were the only two provinces below the RNI. The EITDS in most of the provinces, such as Hebei, Henan, Jiangxi, Hubei, Sichuan and Guangxi, was within the range of 71.5–95.72 μg/day. Great variations of EITDS in food groups among regions were observed in aquatic products (CV = 1.17), dairy products (CV = 1.07), and cereals, beans and tubers (CV = 0.82), leading to differences in the major contributors to dietary Se intake in the 12 provinces.
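The coefficient of variation used above (CV = standard deviation divided by the mean) can be computed with the standard library; the intake values below are made-up illustrative numbers, and the population standard deviation is assumed here (the study does not state which version it used):

```python
# Coefficient of variation (CV): relative dispersion of per-province intake
# estimates for one food group. Values are hypothetical, not the study's data.
from statistics import mean, pstdev

def cv(values):
    """CV = population standard deviation / mean."""
    return pstdev(values) / mean(values)

dairy_se_intake = [1.0, 5.2, 0.3, 2.8, 9.1]   # ug/day, one value per province
print(round(cv(dairy_se_intake), 2))
```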
Animal source food was found to make the highest contribution to dietary Se intake in more than half of the regions, including Heilongjiang, Liaoning, Shanghai, Fujian, Hubei, Sichuan, and Guangxi province, ranging from 38.1 to 69.4%. In particular, aquatic products were the highest contributors in Fujian (45.4%) and Liaoning (30.9%) province. By contrast, cereals and tubers contributed the most to daily dietary Se intake in Shaanxi, Hebei, Ningxia, and Jiangxi province, within the range of 41.6–61.9%. Vegetables made a predominant contribution to daily Se intake in Henan and Hubei province, accounting for 41.2 and 41.7% of the total intake respectively. Despite all this, it can be observed that animal foods and cereals are still the major sources of daily dietary Se intake in most regions of China, which is similar to previous studies in China [25].

Fig. 3 The average EITDS in different food groups in 12 provinces and China. Abbreviations: HLJ Heilongjiang, LN Liaoning, HeB Hebei, ShX Shaanxi, NX Ningxia, HN Henan, ShH Shanghai, FJ Fujian, JX Jiangxi, GX Guangxi, HuB Hubei, SC Sichuan, AVG average, EITDS estimated Se intake based on the China Total Diet Study.

Comparison of EITDS with EICHFP

The results of the comparison between the total EITDS and EICHFP are depicted in Fig. 4. In terms of the whole country, the average EITDS (88 μg/day) fell into the range of its EICHFP (81.01–124.25 μg/day). This indicated that the daily dietary Se intake of the Chinese population was overall in line with the recommendation proposed by the Chinese Food Pagoda. However, a similar situation was only found in Hebei, Shaanxi, Ningxia, and Sichuan province. The majority of regions, including Heilongjiang, Liaoning, Henan, Jiangxi, Hubei, and Guangxi province, had a lower EITDS when compared with their corresponding lowest EICHFP. The gap between them ranged from 3.03 μg/day to 26.86 μg/day, with relatively bigger gaps in Hubei (26.86 μg/day) and Jiangxi (24.97 μg/day) province.
By contrast, the EITDS in Shanghai and Fujian province (134.58 and 111.31 μg/day) were even higher than their highest EICHFP (123.05 and 104.58 μg/day). This comparison was made only between the EITDS and EICHFP, which were calculated under different food consumption patterns, regardless of the China RNI.

Fig. 4 Comparison of daily dietary Se between EITDS and EICHFP in 12 provinces in China. Abbreviations: HLJ Heilongjiang, LN Liaoning, HeB Hebei, ShX Shaanxi, NX Ningxia, HN Henan, ShH Shanghai, FJ Fujian, JX Jiangxi, GX Guangxi, HuB Hubei, SC Sichuan, AVG average, EITDS estimated Se intake based on the China Total Diet Study, EICHFP estimated Se intake based on the Chinese Food Pagoda.

The EITDS and their corresponding ranges of EICHFP for each food category among different regions were calculated and integrated according to the five levels of food groups classified by the Food Pagoda. As listed in Table 3, EITDS from the first level of the Food Pagoda (cereals, tubers and beans) in all 12 provinces exceeded their recommended ranges, while those from the fourth level (dairy products) were all far below their recommended amounts. This situation was more pronounced in Shaanxi, Hebei, Henan and Jiangxi province, where the EITDS from cereals, tubers and beans were more than 3 times their highest EICHFP, while those from dairy products were substantially deficient. Additionally, Se intake from the third level (meat, eggs and aquatic products) varied greatly among regions; only Heilongjiang and Hubei province had appropriate Se intake from this level. Those in Shanghai and Fujian province were much higher than their upper limits of EICHFP, which might account for their higher EITDS in Fig. 4. Almost all provinces had adequate Se intake from the second level of the Food Pagoda (vegetables and fruits), with the exception of Shaanxi, Henan and Guangxi province, which were slightly lower than their recommendations.
Table 3 EITDS and EICHFP in different food categories in 12 provinces of China (μg/day)a

Compared with other countries in the world, the dietary Se intake of the Chinese population (88 μg/day) was relatively moderate. It was higher than that in numerous countries throughout Europe (including the UK) and the Middle East, where the reported daily Se intakes were less than 55 μg/day [26], but lower than that in certain developed countries, such as the US, where an average Se intake of 133.5 μg/day for men was reported [27]. However, great geographical variations of Se intake still existed in China. There are many possible factors that can have an effect on Se intake and result in these variations. These factors primarily include disparities in dietary patterns, consumption habits, food culture, economic levels, and also Se contents in food among different regions [18, 28, 29]. It is difficult to distinguish which factor plays a predominant role in dietary Se intake since the situation varies from region to region. For example, cereals and tubers made the highest contribution to daily dietary Se intake in Shaanxi, Hebei, Ningxia, and Jiangxi province, accounting for 41.6–61.9%; while 45.4 and 30.9% of daily Se intake were from aquatic products in Fujian and Liaoning province, making them the highest contributors there. The former is presumably ascribed to the relatively high Se levels in local crops (Table 1), as well as the local food culture, which tends to involve large consumption of flour or rice [21]; the latter, however, may be attributed to the consumption habits developed in their water-adjacent living environment. Correlations between Se contents and EITDS in each food group also showed that highly significant correlations were only observed in foods with large consumption, including cereals (r = 0.936, P < 0.01), tubers (r = 0.787, P < 0.01), vegetables (r = 0.895, P < 0.01) and fruits (r = 0.769, P < 0.01).
Foods with relatively high Se contents but small consumption, such as seafood, eggs, and meats, presented poor correlations with Se intake (P > 0.05). In this study, the influence of dietary patterns on Se intake was discussed separately by quantitative comparison between EITDS and EICHFP, which were calculated using the same food Se contents [21]. In the first place, the calculated EICHFP being greater than the China RNI (Table 2) suggests that Se intake complying with the balanced dietary patterns recommended in the Chinese Food Pagoda could meet the daily requirements of the majority of the population in China, even though Se contents in various food groups vary greatly among regions. By comparison with the total EITDS, it was found that half of the regions failed to meet the minimum Se intake requirements of the Food Pagoda, indicating that the dietary patterns in these regions may be unbalanced. When compared within various food categories, it was noteworthy that significant differences between the average EICHFP and EITDS were found in cereals, tubers and beans (t = −2.975, P = 0.007), aquatic products (t = 2.468, P = 0.021) and dairy products (t = 3.089, P = 0.005). According to the average EICHFP for the whole country, 7.7% of Se intake should come from staple food, with 23.0 and 16.1% contributed by aquatic and dairy products, respectively. Clearly, the current Se intake from staple food is sufficient, accounting for 29.6% of the average EITDS in China; while that from aquatic and dairy products is largely deficient across the country, accounting for only 13.5 and 1.8% of the average EITDS in China. In fact, the Chinese population consumed very limited milk and dairy products every day, with an average of 28.4 g/day per capita according to the food consumption data in the 4th China TDS [21], far below the 300 g/day per capita recommended in the Pagoda.
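The t statistics reported above compare the average EITDS with the average EICHFP across provinces. A paired t statistic can be sketched as follows; the province values are hypothetical, and a real analysis (e.g. in SPSS, as the study used) would also derive the P value from the t distribution:

```python
# Paired t statistic for per-province differences d_i = EI_TDS,i - EI_CHFP,i:
#   t = mean(d) / (sd(d) / sqrt(n)),  with sd the sample standard deviation.
# Input values are hypothetical, for illustration only.
import math
from statistics import mean, stdev

def paired_t(xs, ys):
    d = [x - y for x, y in zip(xs, ys)]
    return mean(d) / (stdev(d) / math.sqrt(len(d)))

ei_tds  = [43.9, 53.4, 95.7, 135.3, 71.5, 88.0]   # ug/day by province
ei_chfp = [66.2, 70.0, 90.0, 145.2, 81.0, 92.0]

print(round(paired_t(ei_tds, ei_chfp), 3))
```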
The consumption of aquatic products (29.0 g/day per capita) was also less than the lower limit of the recommended amount (40 g/day per capita) [16, 21]. However, compared with other animal source foods, milk and dairy products are not very good sources of Se. The average Se content in dairy products was only 0.055 μg/g, far less than that in aquatic products (0.410 μg/g) [21]. Therefore, from the viewpoint of the balanced dietary pattern and Se sources, Se nutrition for the general population can be further improved by properly increasing the consumption of seafood and its by-products. It is well known that human Se intake is closely associated with Se levels in foodstuffs and with dietary pattern. The former is strongly dependent on the Se in soil, which varies significantly across different regions of the world [30]; the latter, however, can change with the increase of food supply and diversity as well as dietary habits. In the present study, for instance, in Heilongjiang and Liaoning province, where EITDS has not reached the China RNI, Se intake can be further improved by slightly adjusting eating patterns on the basis of the Food Pagoda, such as consuming more fish, milk and dairy products, and so on. Even in regions where Se intake has achieved the RNI, it can still be supplemented by adjusting the consumption of Se-rich foods within the recommended ranges to obtain Se-associated health benefits. The present study underestimated the influence of soil Se distribution on Se intake, because even though Se levels in food are a reflection of soil Se in most instances, soil Se contents are not informative in cases where locally produced food is sold and consumed by residents of other regions. Instead, by calculating the contribution of the balanced diet to daily Se intake, this study demonstrated that Se intake complying with balanced dietary patterns can achieve or even exceed the China RNI in different regions, no matter how greatly their Se contents in food vary.
Thus, it is believed that the daily requirement of the general population for Se can be satisfied if the Chinese Dietary Guidelines and the Food Guide Pagoda are strictly obeyed. Attempts can be made to improve general Se intake levels by adjusting eating patterns. The present study made it clear that the balanced dietary pattern based on the Chinese Dietary Guidelines and Food Pagoda could meet the daily Se requirements of the majority of the population in China under the current Se levels in food. However, the comparison between EITDS and EICHFP showed that Se intake in half of the regions could not achieve their lowest EICHFP. The differences between them among regions mainly lay in cereal food, aquatic and dairy products. Se intake from staple food for the Chinese population may already be sufficient, and more Se nutrition can be taken from aquatic products in terms of a well-balanced diet.

Abbreviations:
AVG: Average
EICHFP: Estimated Se intake based on the Chinese Food Pagoda
EITDS: Estimated Se intake based on the China Total Diet Study
FJ: Fujian
GX: Guangxi
HeB: Hebei
HLJ: Heilongjiang
HN: Henan
ICP-MS: Inductively coupled plasma mass spectrometry
JX: Jiangxi
LN: Liaoning
NX: Ningxia
RNI: Recommended nutrient intake
SC: Sichuan
ShH: Shanghai
ShX: Shaanxi
TDS: Total diet study

References:
Rayman MP. Selenium and human health. Lancet. 2012;379:1256–68.
Kryukov GV, Castellano S, Novoselov SV, Lobanov AV, Zehtab O, Guigó R, et al. Characterization of mammalian selenoproteomes. Science. 2003;300:1439–43.
Fairweather-Tait SJ, Bao YP, Broadley MR, Collings R, Ford D, Hesketh JE, et al. Selenium in human health and disease. Antioxid Redox Sign. 2011;14:1337–83.
Méplan C, Hesketh J. Selenium and cancer: a story that should not be forgotten-insights from genomics. Cancer Treat Res. 2014;159:145–66.
Guo X, Ma WJ, Zhang F, Ren FL, Qu CJ, Lammi MJ. Recent advances in the research of an endemic osteochondropathy in China: Kashin-Beck disease. Osteoarthr Cartilage. 2014;22:1774–83.
Harthill M. Review: micronutrient selenium deficiency influences evolution of some viral infectious diseases. Biol Trace Elem Res. 2011;143:1325–36.
Wang XL, Yang TB, Wei J, Lei GH, Zeng C. Association between serum selenium level and type 2 diabetes mellitus: a non-linear dose-response meta-analysis of observational studies. Nutr J. 2016;15:48.
Manzanares W, Hardy G. Can dietary selenium intake increase the risk of toxicity in healthy children? Nutrition. 2016;32:149–50.
Reis ARD, EI-Ramady H, Santos EF, Gratão PL, Schomburg L. Overview of selenium deficiency and toxicity worldwide: affected areas, selenium-related health issues, and case studies. In: Pilon-Smits E, Winkel L, Lin ZQ (eds) Selenium in plants. Plant Ecophysiology. 2017;11:209–30.
Tan LC, Nancharaiah YV, van Hullebusch ED, Lens PNL. Selenium: environmental significance, pollution, and biological treatment technologies. Biotechnol Adv. 2016;34:886–907.
Xu ZC, Shao HF, Li S, Zheng C. Relationships between the selenium content in flue-cured tobacco leaves and the selenium content in soil in Enshi, China tobacco-growing area. Pak J Bot. 2012;44:1563–8.
Dinh QT, Cui ZW, Huang J, Tran TAT, Wang D, Yang WX, et al. Selenium distribution in the Chinese environment and its relationship with human health: a review. Environ Int. 2018;112:294–309.
Yu GH, Wen YM, He SY, Zhang L, Dong HY. Food selenium content and resident daily selenium intake in Guangzhou City. Chin J Appl Ecol. 2007;18:2600–4.
Santos MD, Júnior FMRDS, Muccillo-Baisch AL. Selenium content of Brazilian foods: a review of the literature values. J Food Compos Anal. 2017;58:10–5.
Choi Y, Kim J, Lee HS, Kim C, Hwang IK, Park HK, et al. Selenium content in representative Korean foods. J Food Compos Anal. 2009;22:117–22.
The Chinese Nutrition Society. The Food Guide Pagoda for Chinese Residents. 2016. http://dg.cnsoc.org/upload/images/source/20160519163856103.jpg. Accessed 20 June 2016.
Wang SS, Lay S, Yu HN, Shen SR. Dietary guidelines for Chinese residents (2016): comments and comparisons. J Zhejiang Univ-Sci B (Biomed & Biotechnol). 2016;17:649–56.
Zhang ZW, Shimbo S, Qu JB, Watanabe T, Nakatsuka H, Matsuda-Inoguchi N, et al. Dietary selenium intake of Chinese adult women in the 1990s. Biol Trace Elem Res. 2001;80:125–38.
Ge KY. The transition of Chinese dietary guidelines and the food guide pagoda. Asia Pac J Clin Nutr. 2011;20:439–46.
Zhang L, Li JG, Liu X, Zhao YF, Li XW, Wen S. Dietary intake of PCDD/Fs and dioxin-like PCBs from the Chinese total diet study in 2007. Chemosphere. 2013;90:1625–30.
Wu YN, Li XW. The fourth China Total Diet Study. 1st ed. Beijing: Chemical Industry Press; 2015.
Zhou PP, Zhao YF, Li JG, Wu GH, Zhang L, Liu Q, et al. Dietary exposure to persistent organochlorine pesticides in 2007 Chinese total diet study. Environ Int. 2012;42:152–9.
Tan JA, Zhu WY, Wang WY, Li RB, Hou SF, Wang DC, et al. Selenium in soil and endemic diseases in China. Sci Total Environ. 2002;284:227–35.
Cheng YY. A brief introduction to the 2013 revised "Chinese dietary reference intakes (DRIs)". Acta Nutrimenta Sinica. 2014;36:313–7.
Gao J, Liu Y, Huang Y, Lin ZQ, Bañuelos GS, Lam MHW, et al. Daily selenium intake in a moderate selenium deficiency area of Suzhou, China. Food Chem. 2011;126:1088–93.
Stoffaneller R, Morse NL. A review of dietary selenium intake and selenium status in Europe and the Middle East. Nutrients. 2015;7:1494–537.
Chun OK, Floegel A, Chung SJ, Chung CE, Song WO, Koo SI. Estimation of antioxidant intakes from diet and supplements in U.S. adults. J Nutr. 2010;140:317–24.
Wang ZH, Zhai FY, He Y, Wang HJ, Yu WT, Yu DM. Influence of family income on dietary nutrients intake and dietary structure in China. Journal of Hygiene Research. 2008;37:62–4.
Li SM, Banuelos GS, Wu LH, Shi WM. The changing selenium nutritional status of Chinese residents. Nutrients. 2014;6:1103–14.
Chen LC, Yang FM, Xu J, Hu Y, Hu QH, Zhang YL, et al. Determination of selenium concentration of rice in China and effect of fertilization of selenite and selenate on selenium content of rice. J Agric Food Chem. 2002;50:5128–30.

This work was financially supported by the National Natural Science Foundation of China (41671500). The datasets supporting the findings of this study are included within the article.

Key Laboratory of Land Surface Pattern and Simulation, Institute of Geographical Sciences and Natural Resources Research, Chinese Academy of Sciences, 11 A Datun Road, Beijing, 100101, People's Republic of China
Jing Wang, Linsheng Yang, Hairong Li, Yonghua Li & Binggan Wei

College of Resources and Environment, University of Chinese Academy of Sciences, Beijing, 100049, People's Republic of China
Jing Wang, Linsheng Yang & Hairong Li

LSY designed the structure of this study; HRL helped to collect the data and contributed to revising the manuscript; JW analyzed the data and wrote the first draft. YHL and BGW had responsibility for final content. All authors read and approved the final manuscript.

Correspondence to Linsheng Yang or Hairong Li.

Not applicable. There was no primary data collection. The authors declare that they have no competing interests regarding this study.

Wang, J., Yang, L., Li, H. et al. Dietary selenium intake based on the Chinese Food Pagoda: the influence of dietary patterns on selenium intake. Nutr J 17, 50 (2018). https://doi.org/10.1186/s12937-018-0358-6

Keywords: Chinese Food Pagoda; China Total Diet Study
CommonCrawl
\begin{document} \maketitle \begin{abstract} We present an algorithm that computes the Lempel-Ziv decomposition in $O(n(\log\sigma + \log\log n))$ time and $n\log\sigma + \epsilon n$ bits of space, where $\epsilon$ is a constant rational parameter, $n$ is the length of the input string, and $\sigma$ is the alphabet size. The $n\log\sigma$ bits in the space bound are for the input string itself, which is treated as read-only. \end{abstract} \section{Introduction} The Lempel-Ziv decomposition~\cite{LempelZiv} is a basic technique for data compression and plays an important role in string processing. It has several modifications used in various compression schemes. The decomposition considered in this paper is used in LZ77-based compression methods and in several compressed text indexes designed to efficiently store and search massive highly-repetitive data sets. The standard algorithms computing the Lempel-Ziv decomposition work in $O(n\log\sigma)$\footnote{Throughout the paper, $\log$ denotes the logarithm with the base~$2$.} time and $O(n\log n)$ bits of space, where $n$ is the length of the input string and $\sigma$ is the alphabet size. It is known that this is the best possible time for general alphabets~\cite{Kosolobov}. However, for the most important case of an integer alphabet, there exist algorithms working in $O(n)$ time and $O(n\log n)$ bits (see \cite{FischerIKoppl} for references). When $\sigma$ is small, this number of bits is too large compared to the $n\log\sigma$ bits of the input string and can be prohibitive. To address this issue, several algorithms using $O(n\log\sigma)$ bits were designed. The main contribution of this paper is a new algorithm computing the Lempel-Ziv decomposition in $O(n(\log\sigma + \log\log n))$ time and $n\log\sigma + \epsilon n$ bits of space, where $\epsilon$ is a constant rational parameter. The $n\log\sigma$ bits in the space bound are for the input string itself, which is treated as read-only. 
The following table lists the time and space required by existing approaches to the Lempel-Ziv parsing in $O(n\log\sigma)$ bits of space. \begin{tabular}{|c|c|c|l|} \hline Time & Bits of space & Note & Author(s) \\ \hline $O(n\log\sigma)$ & $O(n\log\sigma)$ & &Ohlebusch and Gog \cite{OhlebuschGog} \\ $O(n\log^3 n)$ & $n\log\sigma + O(n)$ & online & Okanohara and Sadakane \cite{OkanoharaSadakane} \\ $O(n\log^2 n)$ & $O(n\log\sigma)$ & online & Starikovskaya \cite{Starikovskaya} \\ $O(n\log n)$ & $O(n\log\sigma)$ & online & Yamamoto et al. \cite{YamamotoIBannaiInenagaTakeda} \\ $O(n\log n\log\log\sigma)$ & $n\log\sigma+\epsilon n$ & & K\"arkk\"ainen et al. \cite{KarkkainenKempaPuglisi}\\ $O(n(\log\sigma + \log\log n))$ & $n\log\sigma + \epsilon n$ & & this paper\\ \hline \end{tabular} By a more careful analysis, one can show that when $\epsilon$ is not a constant, the running time of our algorithm is $O(\frac{n}{\epsilon}(\log\sigma + \log\frac{\log n}{\epsilon}))$; we omit the details here. \subsubsection{Preliminaries.}Let $w$ be a string of length $n$. Denote $|w| = n$. We write $w[0], w[1], \ldots, w[n{-}1]$ for the letters of $w$ and $w[i..j]$ for $w[i]w[i{+}1]\cdots w[j]$. A string can be \emph{reversed} to get $\lvec{w} = w[n{-}1]\cdots w[1]w[0]$ called the \emph{reversed $w$}. A string $u$ is a \emph{substring} (or \emph{factor}) of $w$ if $u=w[i..j]$ for some $i$ and $j$. The pair $(i,j)$ is not necessarily unique; we say that $i$ specifies an \emph{occurrence} of $u$ in $w$. A string can have many occurrences in another string. For $i,j \in \mathbb{Z}$, the set $\{k\in \mathbb{Z} \colon i \le k \le j\}$ is denoted by $[i..j]$; $[i..j)$ denotes $[i..j{-}1]$. Throughout the paper, $s$ denotes the input string of length $n$ over the integer alphabet $[0..\sigma)$. Without loss of generality, we assume that $\sigma \le n$ and $\sigma$ is a power of two. Thus, $s$ occupies $n\log\sigma$ bits. 
To simplify the presentation, we suppose that $s[0]$ is a special letter that is smaller than any letter in $s[1..n{-}1]$. Our model of computation is the unit cost word RAM with the machine word size at least $\log n$ bits. Denote $r = \log_\sigma n = \frac{\log n}{\log\sigma}$. For simplicity, we assume that $\log n$ is divisible by $\log\sigma$. Thus, one machine word can contain a string of length $\le r$; we say that it is a \emph{packed string}. Any substring of $s$ of length $r$ can be packed in a machine word in constant time by standard bitwise operations. Therefore, one can compare any two substrings of $s$ of length $k$ in $O(k/r + 1)$ time. The \emph{Lempel-Ziv decomposition of $s$} is the decomposition $s = z_1z_2\cdots z_l$ such that each $z_i$ is either a letter that does not occur in $z_1z_2\cdots z_{i-1}$ or the longest substring that occurs at least twice in $z_1z_2\cdots z_i$ (e.g., $s = a\cdot b\cdot b\cdot abbabb\cdot c\cdot ab\cdot ab$). The substrings $z_1, z_2, \ldots, z_l$ are called the \emph{Lempel-Ziv factors}. Our algorithm consecutively reports the factors in the form of pairs $(|z_i|, p_i)$, where $p_i$ is either the position of a nontrivial occurrence of $z_i$ in $z_1z_2\cdots z_i$ (it is called an \emph{earlier occurrence of $z_i$}) or $z_i$ itself if $z_i$ is a letter that does not occur in $z_1z_2\cdots z_{i-1}$. The reported pairs are not stored in main memory. Fix a rational constant $\epsilon > 0$. It suffices to prove that our algorithm works in $O(n(\log\sigma + \log\log n))$ time and $n\log\sigma + O(\epsilon n)$ bits: the substitution $\epsilon' = c\epsilon$, where $c$ is the constant under the big-$O$, gives the required $n\log\sigma + \epsilon'n$ bits with the same working time. We use different approaches to process the Lempel-Ziv factors of different lengths. In Section~\ref{SectShortFactors} we show how to process ``short'' factors of length ${<}r/2$. 
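For reference, the decomposition just defined can be computed naively as follows (a quadratic-time Python sketch, for illustration only; it is not part of the algorithm of this paper):

```python
def lz_factorize(s):
    """Naive Lempel-Ziv decomposition, for illustration only (quadratic time).

    Each factor z_i is either a letter that has not occurred before, or the
    longest prefix of the remaining suffix that also occurs starting at an
    earlier position; the earlier occurrence may overlap the factor itself.
    """
    factors, p, n = [], 0, len(s)
    while p < n:
        best = 0
        for j in range(p):                      # candidate earlier occurrences
            k = 0
            while p + k < n and s[j + k] == s[p + k]:
                k += 1
            best = max(best, k)
        if best == 0:
            factors.append(s[p])                # a fresh letter
            p += 1
        else:
            factors.append(s[p:p + best])
            p += best
    return factors
```

On the running example, `lz_factorize("abbabbabbcabab")` returns `['a', 'b', 'b', 'abbabb', 'c', 'ab', 'ab']`, matching $s = a\cdot b\cdot b\cdot abbabb\cdot c\cdot ab\cdot ab$.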
In Section~\ref{SectMidFactors} we describe new compact data structures that allow us to find all ``medium'' factors of length ${<}(\log n/\epsilon)^2$. In Section~\ref{SectLongFactors} we apply the clever technique of~\cite{BurkhardtKarkkainen} to the analysis of all other ``long'' factors. \section{Short Factors}\label{SectShortFactors} In this section we consider the Lempel-Ziv factors of length $<r/2$, so we assume $r \ge 2$. Suppose the algorithm has reported the factors $z_1, z_2, \ldots, z_{k-1}$ and now we process $z_k$. Denote $p = |z_1z_2\cdots z_{k-1}|$. We maintain arrays $H_1, H_2, \ldots, H_{\lceil r/2\rceil}$ defined as follows: for $i \in [1..\lceil\frac{r}{2}\rceil]$, the array $H_i$ contains $\sigma^i$ integers such that, for any $x \in [0..\sigma^i)$, $H_i[x]$ is either a position in $[0..p)$ of an occurrence in $s$ of the packed string $x$ of length $i$ or $-1$ if there is no such position. For each $i\in [1..r]$ and $j\in [0..n]$, denote by $x^j_i$ the packed string $s[j..j{+}i{-}1]$. We have $H_1[x^p_1] = -1$ iff $z_k$ is a letter that does not appear in $s[0..p{-}1]$; in this case the algorithm reports $z_k$ immediately. Further, we have $H_{\lceil r/2\rceil}[x^p_{\lceil r/2\rceil}] \ne -1$ iff $|z_k| \ge \frac{r}2$; this case is considered in Sections~\ref{SectMidFactors},~\ref{SectLongFactors}. Suppose $H_1[x^p_1] \ne -1$ and $H_{\lceil r/2\rceil}[x^p_{\lceil r/2\rceil}] = -1$. Our algorithm finds the minimal $q \in [2..\lceil \frac{r}2\rceil]$ such that $H_q[x^p_q]=-1$. Then we obviously have $|z_k| = q{-}1$ and $H_{|z_k|}[x^p_{|z_k|}]$ is the position of an earlier occurrence of $z_k$. Clearly, the algorithm works in $O(|z_k|)$ time. The inequality $r = {\log n}/{\log\sigma} \ge 2$ implies $\sigma \le \sqrt{n}$. Thus, $H_1, H_2, \ldots, H_{\lceil r/2\rceil}$ altogether occupy at most $\sigma^{\lceil r/2\rceil}r\log n \le \sigma^{\frac{r}2}\sigma^{\frac{1}2} r\log n \le n^{\frac{3}{4}}r\log n = o(n)$ bits. 
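As a minimal sketch of this lookup (Python, illustration only: dictionaries keyed by the substring itself stand in for the arrays $H_i$ indexed by packed strings; the function names are ours, and the second function mirrors the maintenance procedure described next):

```python
def build_tables(s, p, r):
    # The tables H_1..H_ceil(r/2): H[i] maps a string of length i to a
    # position in [0..p) where it occurs in s (assumes p + ceil(r/2) <= len(s)).
    half = -(-r // 2)                       # ceil(r/2)
    H = {i: {} for i in range(1, half + 1)}
    for j in range(p):
        if s[j:j + half] not in H[half]:    # triggered for <= sigma^half positions
            for i in range(1, half + 1):
                H[i][s[j:j + i]] = j
    return H

def short_factor(s, p, H, r):
    # Returns 'fresh' for a letter not seen before, None when |z_k| >= r/2
    # (delegated to the other sections), or (|z_k|, earlier occurrence).
    half = -(-r // 2)
    if s[p] not in H[1]:
        return 'fresh'
    if s[p:p + half] in H[half]:
        return None
    q = 2                                   # minimal q with H_q undefined
    while s[p:p + q] in H[q]:               # q stays <= ceil(r/2) due to the guard
        q += 1
    return (q - 1, H[q - 1][s[p:p + q - 1]])
```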
To maintain $H_1, \ldots, H_{\lceil r/2\rceil}$, we consecutively examine the positions $j = 0, 1, \ldots, p{-}1$ and for those positions for which $H_{\lceil r/2\rceil}[x^j_{\lceil r/2\rceil}] = -1$, we perform the assignments $H_1[x^j_1] \gets j, H_2[x^j_2] \gets j, \ldots, H_{\lceil r/2\rceil}[x^j_{\lceil r/2\rceil}] \gets j$. Hence, we execute these assignments for at most $\sigma^{\lceil r/2\rceil}$ positions and the overall time required for the maintenance of $H_1, \ldots, H_{\lceil r/2\rceil}$ is $O(n + r\sigma^{\lceil r/2\rceil}) = O(n)$. \section{Medium Factors}\label{SectMidFactors} Suppose the algorithm has reported the Lempel-Ziv factors $z_1, z_2, \ldots, z_{k-1}$ and has already decided that $|z_k| \ge \frac{r}2$ by applying the procedure of Section~\ref{SectShortFactors}. Denote $p = |z_1z_2\cdots z_{k-1}|$, $\tau = \lceil\frac{\log n}{\epsilon}\rceil$, and $b = \lceil\epsilon n / (\log\sigma + \log\log n)\rceil$. We assume $p{+}b{+}\tau^2 < n$; the case $p{+}b{+}\tau^2 \ge n$ is analogous. Our algorithm processes $s[0..p{+}b]$ and reports not only $z_k$ but also all Lempel-Ziv factors starting in positions $[p..p{+}b]$. The algorithm consists of three phases: the first builds an indexing data structure on the string $s[p..p{+}b]$, used by the other phases, in $O(b\log\sigma)$ time and $O(b(\log\sigma + \log\log n)) = O(\epsilon n)$ bits; the second phase scans $s[0..p{+}b]$ in $O(n)$ time and fills a bit array $\mathit{lz}[0..b]$ so that for any $i\in [0..b]$, $\mathit{lz}[i] = 1$ iff there is a Lempel-Ziv factor starting in the position $p{+}i$; finally, the last phase scans $s[0..p{+}b]$ in $O(n)$ time and reports earlier occurrences of the found Lempel-Ziv factors. Thus, the overall time required by this algorithm is $O((n + b\log\sigma)\frac{n}b) = O(n(\log\sigma + \log\log n))$. The data structures we use can search only for the Lempel-Ziv factors of length $<\tau^2$; we delegate the longer factors to the procedure of Section~\ref{SectLongFactors}. 
This restriction allows us to make our structures fast and compact. More precisely, our algorithm consecutively computes the lengths of the Lempel-Ziv factors starting in $[p..p{+}b]$ and once we have found a factor of length $\ge \tau^2$, we invoke the procedure of Section~\ref{SectLongFactors} to compute the length and an earlier occurrence of this factor. \subsection{Main Tools} Let $x$ be a string of length $d{+}1$. Denote $\lvec{x}_i = \lrange{x[0..i]}$. The \emph{suffix array of $\lvec{x}$} is the permutation $\mathit{SA}[0..d]$ of the integers $[0..d]$ such that $\lvec{x}_{\mathit{SA}[0]} < \lvec{x}_{\mathit{SA}[1]} < \ldots < \lvec{x}_{\mathit{SA}[d]}$ in the lexicographical order. The \emph{Burrows-Wheeler transform}~\cite{BurrowsWheeler} of $\lvec{x}$ is the string $\mathit{BWT}[0..d]$ such that $\mathit{BWT}[i] = x[\mathit{SA}[i]{+}1]$ if $\mathit{SA}[i] < d$ and $\mathit{BWT}[i] = x[0]$ otherwise. We equip $\mathit{BWT}$ with the function $\Psi$ defined as follows: $\Psi(i) = \mathit{SA}^{-1}[\mathit{SA}[i] + 1]$ if $\mathit{SA}[i] < d$ and $\Psi(i) = 0$ otherwise. \begin{lemma}[see~\cite{HonSadakaneSung}] The string $\mathit{BWT}$ and the function $\Psi$ for a string $\lvec{x}$ of length $d{+}1$ over the alphabet $[0..\sigma)$ can be constructed in $O(d\log\log\sigma)$ time and $O(d\log\sigma)$ bits of space; $\Psi$ is encoded in $O(d\log\sigma)$ bits with $O(1)$ access time.\label{BWT} \end{lemma} \iflong \begin{example} Consider the string $x = \$aabadcaababadcaaba$. 
$$\scriptsize{ \begin{array}{r|c|c|c|c} x[0..\mathit{SA}[i]] & \mathit{BWT}[i] & \mathit{SA}[i] & \Psi(i) & i \\ \hline \$ & a & 0 & 1 & 0 \\ \$a & a & 1 & 2 & 1 \\ \$aa & b & 2 & 11 & 2 \\ \$aabadcaa & b & 8 & 12 & 3 \\ \$aabadcaababadcaa & b & 16 & 13 & 4 \\ \$aaba & d & 4 & 17 & 5 \\ \$aabadcaaba & b & 10 & 14 & 6 \\ \$aabadcaababadcaaba & \$ & 18 & 0 & 7 \\ \$aabadcaababa & d & 12 & 18 & 8 \\ \$aabadca & a & 7 & 3 & 9 \\ \$aabadcaababadca & a & 15 & 4 & 10 \\ \$aab & a & 3 & 5 & 11 \\ \$aabadcaab & a & 9 & 6 & 12 \\ \$aabadcaababadcaab & a & 17 & 7 & 13 \\ \$aabadcaabab & a & 11 & 8 & 14 \\ \$aabadc & a & 6 & 9 & 15 \\ \$aabadcaababadc & a & 14 & 10 & 16 \\ \$aabad & c & 5 & 15 & 17 \\ \$aabadcaababad & c & 13 & 16 & 18 \end{array} }$$ \end{example} \fi In the \emph{dynamic weighted ancestor (WA for short) problem} one has 1) a weighted tree in which the weight of each vertex is greater than the weight of its parent, 2) queries that, given a vertex $v$ and a number $i$, find the ancestor of $v$ with the minimal weight $\ge i$, 3) updates that insert new vertices. Let $v$ be a vertex of a trie $T$ ($v \in T$ for short). Denote by $\mathit{lab}(v)$ the string written on the path from the root to $v$. We treat tries as weighted trees: $|\mathit{lab}(v)|$ is the weight of $v$. \begin{lemma}[see~\cite{KopelowitzLewenstein}] For a weighted tree with at most $k$ vertices, the dynamic WA problem can be solved in $O(k\log k)$ bits of space with queries and updates working in $O(\log k)$ amortized time.\label{WeightedAncestor} \end{lemma} One can easily modify the proof of~\cite{KopelowitzLewenstein} for the special case of this problem in which the weights are integers in $[0..\tau^2]$ and the height of the tree is bounded by $\tau^2$. \iflong \begin{lemma} \else \begin{lemma}[see~\cite{KopelowitzLewenstein}] \fi Let $T$ be a weighted tree with at most $m \le n$ vertices, weights in $[0..\tau^2]$, and height ${\le}\tau^2$. 
The dynamic WA problem for $T$ can be solved in $O(m(\log m + \log\log n))$ bits of space with queries and updates working in $O(1)$ amortized time using a shared table of size $o(n)$ bits.\label{WeightedAncestorSpec} \end{lemma} \newcommand{\WASpecProof}{ \begin{proof} In~\cite{KopelowitzLewenstein}, using $O(m\log m)$ additional bits of space, the general problem for a tree with $m$ vertices, the weights $[0..\tau^2]$, and the height $\le\tau^2$ is reduced to the same problem for subtrees with at most $\log\log m$ vertices and the problem of the maintenance of a set of dynamic predecessor data structures on the weights $[0..\tau^2]$ so that each of these predecessor structures contains at most $\tau^2$ weights and together they contain $O(m)$ weights in total. Each query or update on the tree requires a constant number of queries/updates on the subtrees of size $\le \log\log m$ and on the predecessor structures. Since the weights are bounded by $\tau^2$, a subtree with at most $\log\log m$ vertices fits in $O(\log\log m\log\tau) = O((\log\log n)^2)$ bits. So, we can perform queries and updates on these trees in $O(1)$ time using a shared table of size $O(2^{O((\log\log n)^2)}\log^{O(1)} n) = o(n)$ bits. Further, one can organize a dynamic predecessor data structure with at most $\tau^2$ elements as a $B$-tree of constant depth with $O(\sqrt{\tau})$-element predecessor structures on each level. Any predecessor structure with $O(\sqrt{\tau})$ weights fits in $O(\sqrt{\tau}\log\log n)$ bits and therefore, one can perform all operations on these small structures with the aid of a shared table of size $O(2^{\sqrt{\tau}\log\log n}\log^{O(1)} n) = o(n)$ bits. Thus we can perform all operations on the source predecessor structure in $O(1)$ time. \qed \end{proof} } \iflong \WASpecProof \fi Denote by $\mathit{lcp}(t_1, t_2)$ the length of the longest common prefix of the strings $t_1$ and $t_2$. 
Denote $\mathit{rlcp}(i, j) = \min\{\tau^2, \mathit{lcp}(x'_{\mathit{SA}[i]}, x'_{\mathit{SA}[j]})\}$. \begin{lemma}[see~\cite{BellerGogOhlebuschSchnattinger}] For a string $x$ of length $d{+}1$, using $\mathit{BWT}$ of $\lvec{x}$, one can compute an array $\mathit{rlcp}[0..d{-}1]$ such that $\mathit{rlcp}[i] = \mathit{rlcp}(i,i{+}1)$, for $i\in[0..d)$, in $O(d\log\sigma)$ time and $O(d\log\sigma)$ bits; the array occupies $O(d\log\log n)$ bits.\label{LCP} \end{lemma} \subsection{Indexing Data Structure}\label{SubsectConstruct} \noindent\mycap{Trie.} Denote $d = 1{+}b{+}\tau^2$. The algorithm creates a string $x$ of length $d{+}1$ and copies the string $s[p..p{+}b{+}\tau^2]$ into $x[1..d]$; $x[0]$ is set to a special letter less than any letter in $x[1..d]$. Let $\mathit{SA}$ be the suffix array of $\lvec{x}$ (we use it only conceptually). Denote $x'_i = \lrange{x[i{-}\tau^2{+}1..i]}$ (we assume that $x[-1], x[-2], \ldots$ are equal to $x[0]$). Here we discuss the design of our indexing data structure: an augmented compact trie of the strings $x'_0, x'_1, \ldots, x'_d$, carefully packed into $O(d(\log\sigma + \log\log n))$ bits. For simplicity, suppose $d$ is a multiple of $r$. The skeleton of our structure is a compact trie $Q_0$ of the strings $\{x'_{\mathit{SA}[jr]} \colon j\in [0..d/r]\}$. We augment $Q_0$ with the WA structure of Lemma~\ref{WeightedAncestorSpec}. Each vertex $v \in Q_0$ contains the following fields: 1) the pointer to the parent of $v$ (if any); 2) the pointers to the children of $v$ in the lexicographical order; 3) the length of $\mathit{lab}(v)$; 4) the length of the string written on the edge connecting $v$ to its parent (if any). Notice that the fields 3)--4) fit in $O(\log\log n)$ bits. Clearly, $Q_0$ occupies $O((d/r)\log n) = O(d\log\sigma)$ bits of space. The pointers to the substrings of $x$ written on the edges of $Q_0$ are not stored, so one cannot use $Q_0$ for searching. 
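These definitions can be made concrete with a small quadratic-time Python sketch (illustration only; the function names are ours, and $\tau^2$ is passed as `tau2`):

```python
def sa_of_reversed(x):
    # SA[i] enumerates positions so that the reversed prefixes
    # rev(x[0..SA[0]]) < rev(x[0..SA[1]]) < ... (quadratic sketch).
    return sorted(range(len(x)), key=lambda i: x[:i + 1][::-1])

def contexts(x, tau2):
    # x'_i = rev(x[i-tau2+1..i]), where x[-1], x[-2], ... read as x[0].
    y = x[0] * (tau2 - 1) + x
    return [y[i:i + tau2][::-1] for i in range(len(x))]

def skeleton_strings(x, tau2, r):
    # The strings {x'_{SA[jr]}} indexed by the skeleton compact trie Q_0.
    sa, ctx = sa_of_reversed(x), contexts(x, tau2)
    return [ctx[sa[j]] for j in range(0, len(x), r)]
```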
\begin{figure} \caption{\small Solid vertices and edges are from $Q_0$.} \label{fig:trieconstruct} \end{figure} We create an array $L[0..d/r]$ such that for $i\in[0..d/r]$, $L[i]$ is the pointer to the leaf of $Q_0$ corresponding to $x'_{\mathit{SA}[ir]}$. Now we build a compact trie $Q$ by inserting the strings $\{x'_{\mathit{SA}[ir{+}j]}\}_{j=1}^{r{-}1}$ into $Q_0$ for each $i \in [0..d/r)$ as follows. For a fixed $i$, these strings add to $Q_0$ trees $T_1, \ldots, T_l$ attached to the branches $x'_{\mathit{SA}[ir]}$ and $x'_{\mathit{SA}[(i{+}1)r]}$ in $Q_0$ (see Fig.~\ref{fig:trieconstruct}). We store $T_1, \ldots, T_l$ in a contiguous memory block $F_i$. The pointer to $F_i$ is stored in the leaf of $Q_0$ corresponding to $x'_{\mathit{SA}[ir]}$, so one can find $F_i$ in $O(1)$ time using $L$. Since $T_1, \ldots, T_l$ have at most $2r$ vertices in total, $O(\log\log n)$ bits per vertex suffice for the fields 1)--4). Now we discuss how $T_1, \ldots, T_l$ are attached to $Q_0$. Consider $v \in Q_0$ and the vertices $v_1,\ldots,v_h$ splitting the edge connecting $v$ to its parent in $Q_0$. Let $T_{i_1}, \ldots, T_{i_g}$ be the trees that must be attached to $v, v_1,\ldots, v_h$ (see Fig.~\ref{fig:trieconstruct}). We add to $v$ a memory block $N_v$ containing the WA structure of Lemma~\ref{WeightedAncestorSpec} for the chain $v, v_1, \ldots, v_h$ with the weights $|\mathit{lab}(v)|, |\mathit{lab}(v_1)|, \ldots, |\mathit{lab}(v_h)|$. Each of the vertices $v,v_1,\ldots,v_h$ in this chain contains the $O(\log\log n)$-bit pointers (inside $F_i$) to the roots of $T_{i_1}, \ldots, T_{i_g}$ attached to this vertex. Hence, $N_v$ occupies $O((h + g)\log\log n)$ bits. One can find the children for each of the vertices $v, v_1,\ldots, v_h$ in $O(1)$ time using $Q_0$ and the chain in the block $N_v$. Further, one can find, for any $j \in [1..g]$, the parent of the root of $T_{i_j}$ in $O(1)$ time by a WA query on $Q_0$ to find a suitable $v$ and a WA query on the chain in $N_v$. 
Finally, we augment each $T_i$ with the WA structure of Lemma~\ref{WeightedAncestorSpec}. Thus, by Lemma~\ref{WeightedAncestorSpec}, $T_1, \ldots, T_l$ add at most $O(r\log\log n)$ bits to $Q$. For each $i \in [0..d/r)$, we augment the leaf referred by $L[i]$ with an array $L_i[0..r{-}2]$ such that for $j \in [0..r{-}2]$, $L_i[j]$ is the $O(\log\log n)$-bit pointer (inside $F_i$) to the leaf of $Q$ corresponding to $x'_{\mathit{SA}[ir{+}1{+}j]}$. So, for any $j \in [0..d]$, one can easily find the leaf of $Q$ corresponding to $x'_{\mathit{SA}[j]}$ in $O(1)$ time via $L$ and $L_{\lfloor j/r\rfloor}$. Finally, the whole described structure $Q$ occupies $O(d(\log\sigma + \log\log n))$ bits. \noindent\mycap{Prefix links.} Consider $v \in Q$. Denote by $[i_v..j_v]$ the longest segment such that for each $i\in [i_v..j_v]$, $x'_{\mathit{SA}[i]}$ starts with $\mathit{lab}(v)$ (see Fig.~\ref{fig:bwttrie}). Let $\mathit{BWT}$ be the Burrows-Wheeler transform of $\lvec{x}$. Denote the set of the letters of $\mathit{BWT}[i_v..j_v]$ by $P_v$. We associate with $v$ the \emph{prefix links} mapping each $c \in P_v$ to an integer $p_v(c) \in [i_v..j_v]$ such that $x[\mathit{SA}[p_v(c)]{+}1] = c$ (there might be many such $p_v(c)$; we choose any). The prefix links correspond to the well-known \emph{Weiner-links}. Hence, $Q$ has at most $O(d)$ prefix links. Observe that $P_u \supset P_v$ for any ancestor $u$ of $v$. The problem is to store the prefix links in $O(d(\log\sigma + \log\log n))$ bits. \begin{figure} \caption{\small $\tau^2 = 4$, the prefix links associated with vertices are in squares.} \label{fig:bwttrie} \end{figure} Fix $i \in [0..d)$. Denote by $V_i$ the set of the vertices $v \notin Q_0$ such that $v$ does not have descendants from $Q_0$ and lies between branches $x'_{\mathit{SA}[ir]}$ and $x'_{\mathit{SA}[(i{+}1)r]}$. 
We associate with each $v \in V_i$ a dictionary $D_v$ mapping each $c \in P_v$ to $p_v(c){-}ir$ and store all $D_v$, for $v\in V_i$, in a contiguous memory block $H_i$. Since $|V_i| < r$ and $P_v$ is a subset of the letters of $\mathit{BWT}[ir..(i{+}1)r]$, we have $p_v(c){-}ir \in [1..r)$ and all $D_v$, for $v \in V_i$, occupy overall $O(\sum_{v\in V_i}|P_v|(\log\sigma + \log\log n)) = O(r^2(\log\sigma + \log\log n))$ bits of space. Therefore, we can store in each $v\in V_i$ the $O(\log\log n)$-bit pointer to $D_v$ (inside $H_i$). The pointer to $H_i$ itself is stored in the leaf referred by $L[i]$. Consider $v \notin Q_0$ such that $v$ lies on an edge connecting a vertex $w \in Q_0$ to its parent in $Q_0$. Let $x'_{\mathit{SA}[j_1r]}$ and $x'_{\mathit{SA}[j_2r]}$ be the strings corresponding to the leftmost and rightmost descendant leaves of $w$ contained in $Q_0$. We split $P_v$ into three subsets: $P_1 = \{c\in P_v \colon p_v(c) < j_1r\}$, $P_2 = \{c\in P_v \colon p_v(c) > j_2r\}$, $P_3 = P_v \setminus (P_1\cup P_2)$. Clearly $P_3 \subset P_w\subset P_v$. Hence, we can use $P_w$ instead of $P_3$ and store only the sets $P_1$ and $P_2$ in a way similar to that discussed above. Suppose $v\in Q_0$. For $c\in P_v$, let $j_c \in [i_v..j_v]$ be the position of the first occurrence of $c$ in $\mathit{BWT}[i_v..j_v]$. Clearly, we can set $p_v(c) = j_c$. We add to $v$ a dictionary mapping each $c\in P_v$ to $h_c = |\{c' \in P_v \colon j_{c'} < j_c\}|$. Denote $q = |P_v|$. Since $q \le \sigma$, the dictionary occupies $O(q\log\sigma)$ bits. Now it suffices to map $h_c$ to $j_c$. Let $j'_0, \ldots, j'_{q-1}$ denote all $j_c$, for $c\in P_v$, in increasing order. Obviously $j'_{h_c} = j_c$. The idea is to sample each $(\tau^2\log n)$th position in $\mathit{BWT}$. 
We add to $v$ a bit array $A_v[0..q{-}1]$ indicating the sampled $j'_0,\ldots,j'_{q-1}$: $A_v[0] = 1$ and for $h \in [1..q)$, $A_v[h] = 1$ iff $j'_{h{-}1} < l\tau^2\log n \le j'_h$ for an integer $l$; $A_v$ is equipped with the structure of~\cite{RamanRamanRao} supporting the queries $\mathrm{r}_{A_v}(h) = \sum_{i=0}^h A_v[i]$ in $O(1)$ time and $o(q)$ additional bits. The sampled sequence $\{j'_h \colon A_v[h] = 1\}$ is stored in an array $B_v$. Finally, we add an array $C_v[0..q{-}1]$ such that $C_v[h] = j'_h - B_v[\mathrm{r}_{A_v}(h){-}1]$. Now we map $h$ to $j'_h$ as follows: $j'_h = B_v[\mathrm{r}_{A_v}(h){-}1] + C_v[h]$. Clearly, each value of $C_v$ is in the range $[0..\tau^2\log n]$ and hence, $C_v$ occupies $O(q\log(\tau^2\log n)) = O(q\log\log n)$ bits. It suffices to estimate the space consumed by $B_v$. Since the number of the vertices in $Q_0$ is $O(d/r)$ and the height of $Q$ is at most $\tau^2$, all $B_v$ arrays occupy at most $O((d/r)\log n + \frac{d}{\tau^2\log n} \tau^2\log n) = O(d\log\sigma)$ bits in total. \noindent\mycap{Construction of $Q$.} Initially, $Q$ contains one leaf corresponding to $x'_{\mathit{SA}[0]}$. We consecutively insert $x'_{\mathit{SA}[1]}, \ldots, x'_{\mathit{SA}[d]}$ in $Q$ in groups of $r$ elements. During the construction, we maintain on $Q$ a set of the dynamic WA structures of Lemma~\ref{WeightedAncestorSpec} in such a way that one can answer any WA query on $Q$ in $O(1)$ time. Suppose we have inserted $x'_{\mathit{SA}[0]}, \ldots, x'_{\mathit{SA}[ir]}$ in $Q$ and now we are to insert $x'_{\mathit{SA}[ir{+}1]}, \ldots, x'_{\mathit{SA}[(i{+}1)r]}$. We first allocate the memory block $F_i$ required for new vertices. Using Lemma~\ref{LCP}, we compute $\mathit{rlcp}(j{-}1, j)$ for all $j \in [ir{+}1..(i{+}1)r]$. Since $\mathit{rlcp}(j_1, j_2) = \min\{\mathit{rlcp}(j_1, j_2{-}1), \mathit{rlcp}(j_2{-}1, j_2)\}$, the algorithm can compute $\mathit{rlcp}(ir, ir{+}j)$ for all $j\in[1..r]$ in $O(r)$ time. 
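The last step uses the standard range-minimum property of adjacent lcp values; as a sketch (Python, illustration only, with `adj[t]` standing for $\mathit{rlcp}(t, t{+}1)$):

```python
def prefix_rlcps(adj, i0, r):
    # rlcp(i0, i0 + j) = min(adj[i0], ..., adj[i0 + j - 1]); running minima
    # give the values for all j in [1..r] in O(r) total time.
    out, m = [], float('inf')
    for t in range(i0, i0 + r):
        m = min(m, adj[t])
        out.append(m)
    return out                # out[j - 1] == rlcp(i0, i0 + j)
```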
Using the WA query on the leaf $x'_{\mathit{SA}[ir]}$ and the value $\mathit{rlcp}(ir, (i{+}1)r)$, we find the position where we insert a new leaf $x'_{\mathit{SA}[(i{+}1)r]}$. Similarly, using the WA queries, we consecutively insert $x'_{\mathit{SA}[ir{+}j]}$ for $j = 1,2,\ldots$ as long as $\mathit{rlcp}(ir, ir{+}j) > \mathit{rlcp}(ir, (i{+}1)r)$ and then all other $x'_{\mathit{SA}[(i{+}1)r{-}j]}$ for $j=1,2,\ldots$ (Fig.~\ref{fig:trieconstruct}). All related WA structures, the arrays $L, L_i$, the pointers, and the fields for the vertices are built in an obvious way. One can construct the prefix links of a vertex from those of its children in $O(q\log\sigma)$ time, where $q$ is the number of links in the children. As there are at most $O(d)$ prefix links, one DFS traversal of $Q$ builds them in $O(d\log\sigma)$ time. Finally, using the result of~\cite{HagerupMiltersenPagh}, the algorithm converts in $O(d\log\sigma)$ time all dictionaries in the prefix links of the resulting trie $Q$ into perfect hashes with $O(1)$ access time. So, one can access any prefix link in $O(1)$ time. \subsection{Algorithm for Medium Factors} In the \emph{dynamic marked descendant problem} one has a tree, a set of marked vertices, queries asking whether a given vertex has a marked descendant, and updates marking a given vertex. We assume that each vertex is a descendant of itself. We solve this problem on $Q$ as follows\iflong.\else~(see arXiv:1504.06712).\fi \begin{lemma} In $O(d(\log\sigma + \log\log n))$ bits one can solve the dynamic marked descendant problem on $Q$ so that any $k$ queries and updates take $O(k + d)$ time.\label{MarkedDescendant} \end{lemma} \newcommand{\MarkedDescProof}{ \begin{proof} Let $q$ be the number of the vertices in $Q$. Obviously $q = O(d)$. We perform a DFS traversal of $Q$ in the lexicographical order and assign the indices $0,1,\ldots,q{-}1$ to the vertices of $Q$ in the order of their appearance in the traversal. 
Denote by $\mathit{idx}(v)$ the index of a vertex $v$. We add to our structure a bit array $M[0..q{-}1]$ initially filled with zeros. A vertex $v$ is marked iff $M[\mathit{idx}(v)] = 1$. It is easy to see that the indices of the descendants of $v$ form a contiguous segment $[\mathit{idx}(v)..j]$ for some $j \ge \mathit{idx}(v)$. So, the problem is to find for each vertex the segment of the descendant indices and then test whether there is an index $k$ in this segment such that $M[k] = 1$. For each $v \in Q_0$, we store $\mathit{idx}(v)$ and the segment of the descendant indices explicitly using $O(\log n)$ bits. Consider a vertex $v \notin Q_0$. Suppose the leftmost descendant leaf of $v$ corresponds to a string $x'_{\mathit{SA}[j]}$, where $j = ir{-}k$ for some $i\in [0..d/r]$ and $k \in [0..r)$. Denote by $u$ the leaf corresponding to $x'_{\mathit{SA}[ir]}$. Since there are at most $2r$ vertices inserted between the leaves corresponding to $x'_{\mathit{SA}[(i{-}1)r]}$ and $x'_{\mathit{SA}[ir]}$ and the height of $Q$ is at most $\tau^2$, we have $0 < \mathit{idx}(u) - \mathit{idx}(v) \le 2r + \tau^2$. So, we store in $v$ the value $\mathit{idx}(u) - \mathit{idx}(v)$ using $O(\log\log n)$ bits. Obviously, one can compute $\mathit{idx}(v)$ in $O(1)$ time using $\mathit{idx}(u)$ stored explicitly. The structure occupies $O((d/r)\log n + d\log\log n) = O(d(\log\sigma + \log\log n))$ bits. Now it is sufficient to describe how to answer the queries on the segments of the dynamic bit array $M$. We can answer the queries on the segments of length $\le \frac{\log n}2$ using a shared table occupying $O(2^{\log n/2}\log^3 n) = o(n)$ bits. So, the problem is reduced to the queries on the segments of the form $[i\log n..j\log n)$. We build a perfect binary tree $T$ with leaves corresponding to the segments $[i\log n..(i{+}1)\log n)$ for $i \in [0..q/\log n)$ (without loss of generality, we assume that $q$ is a multiple of $\log n$ and $q/\log n$ is a power of $2$). 
Each internal vertex $v$ of $T$ naturally corresponds to a segment $[i2^j\log n..(i{+}1)2^j\log n)$ for some $i$ and $j > 0$. Denote $c = i2^j + 2^{j-1}$. We associate with $v$ bit arrays $D_v$ and $E_v$ of lengths $2^{j{-}1}$ such that for any $k \in [1..2^{j-1}]$, $D_v[k{-}1] = 1$ iff there are ones in the segment $M[(c{-}k)\log n..c\log n{-}1]$ and, similarly, $E_v[k{-}1] = 1$ iff there are ones in $M[c\log n..(c{+}k)\log n{-}1]$. We construct on $T$ the least common ancestor structure (in the case of the perfect binary tree with $O(q/\log n)$ vertices, this can be simply done in $O(q)$ bits). Then, to answer the query on a segment $[i\log n..j\log n)$, we first find in $O(1)$ time the least common ancestor $v$ of the leaves of $T$ corresponding to the segments $[i\log n..(i{+}1)\log n)$ and $[(j{-}1)\log n..j\log n)$ and then test appropriate bits of $D_v$ and $E_v$. All this takes $O(1)$ time. The structure occupies $O(\frac{q}{\log n}\log\frac{q}{\log n}) = O(d)$ bits. When we set $M[i] = 1$ for some $i \in [0..q)$, the modifications are straightforward: if the segment $M[\lfloor i/\log n\rfloor\log n..(\lfloor i/\log n\rfloor{+}1)\log n)$ already has ones, then we are done; otherwise, for each ancestor $v$ of the leaf of $T$ corresponding to $[\lfloor i/\log n\rfloor\log n..(\lfloor i/\log n\rfloor{+}1)\log n)$, we scan the array $D_v$ [$E_v$] from left to right [right to left] from the appropriate position and flip all zero bits. Since there are only $O(d)$ bits in the structure, the height of $T$ is $O(\log q) = O(\log n)$, and the updates are initiated at most $q/\log n$ times, $k$ updates run in $O(d + (q/\log n)\log n + k) = O(d + k)$ time. \qed \end{proof} } \iflong \MarkedDescProof \fi \noindent\mycap{Filling $\mathit{lz}$.} Denote $s_i = s[0..i]$. For $i\in [0..p{+}d)$, let $t_i$ denote the longest prefix of $\lvec{s_i}$ present in $Q$. We add to each $v \in Q$ an $O(\log\log n)$-bit field $v.\mathit{mlen}$ initialized to $\tau^2$. 
Also, we use an integer variable $f$ that initially equals $0$. The algorithm increases $f$ computing $|t_f|$ in each step and augments $Q$ as follows. Suppose $v \in Q$ is such that $t_{f-1}$ is a prefix of $\mathit{lab}(v)$ and all other vertices with this property are descendants of $v$. We say that $v$ \emph{corresponds to $t_{f-1}$}. We are to find the vertex of $Q$ corresponding to $t_f$. Suppose $p_v(s[f])$ is defined. By Lemma~\ref{BWT}, one can compute $i = \Psi(p_v(s[f]))$ in $O(1)$ time. Obviously, $x'_{\mathit{SA}[i]}$ starts with $s[f]t_{f-1}$. We obtain the leaf corresponding to $x'_{\mathit{SA}[i]}$ in $O(1)$ time via $L$ and $L_{\lfloor i/r\rfloor}$ and then find $w\in Q$ corresponding to $t_f$ by the WA query on the obtained leaf and the number $\min\{\tau^2, |t_{f-1}|{+}1\}$. Suppose $p_v(s[f])$ is undefined. If $v$ is the root of $Q$, then we have $|t_f| = 0$. Otherwise, we recursively process the parent $u$ of $v$ in the same way as $v$, assuming $t_{f-1} = \mathit{lab}(u)$. Finally, once we have found $w\in Q$ corresponding to $t_f$, we mark the parent of $w$ using the structure of Lemma~\ref{MarkedDescendant} and assign $w.\mathit{mlen} \gets \min\{w.\mathit{mlen}, |\mathit{lab}(w)|{-}|t_f|\}$. Let $i \in [p..f{+}1]$ be such that $|s[i..f{+}1]| \le \tau^2$. Suppose all positions $[0..f]$ are processed as described above. It is easy to verify that the string $s[i..f{+}1]$ has an occurrence in $s[0..f]$ iff either the vertex $v \in Q$ corresponding to $\lrange{s[i..f{+}1]}$ has a marked descendant or the parent of $v$ is marked and $|\mathit{lab}(v)| - v.\mathit{mlen} \ge |s[i..f{+}1]|$. Based on this observation, the algorithm computes $\mathit{lz}$ as follows. 
\begin{algorithmic}[1] \For{$(t \gets p;\;t \le p + b;\;t \gets t + \max\{1, z\})$} \For{$(z \gets 0, v \gets $ the root of $Q;\;\mathbf{true};\;v \gets w, z \gets z + 1)$} \State increase $f$ processing $Q$ accordingly until $f = t + z - 1$ \If{$z \ge \tau^2$} invoke the procedure of Section~\ref{SectLongFactors} to find $z$ and $\mathbf{break};$ \EndIf \State find $w \in Q$ corresp. to $\lrange{s[t..t{+}z]}$ using $v$, prefix links, WA queries\label{lst:prefixFind} \If{$w$ is undefined} $\mathbf{break};$ \EndIf \If{$w$ does not have marked descendants} \If{$\mathit{parent}(w)$ is not marked $\mathrel{\mathbf{or}} |\mathit{lab}(w)|-w.\mathit{mlen} \le z$} $\mathbf{break};$ \EndIf \EndIf \EndFor \State $\mathit{lz}[t{-}p] \gets 1;$ \EndFor \end{algorithmic} The lengths of the Lempel-Ziv factors are accumulated in $z$. The above observation implies the correctness. Line~\ref{lst:prefixFind} is similar to the procedure described above. Since $O(n)$ queries to the prefix links and $O(n)$ markings of vertices take $O(n)$ time, by standard arguments, one can show that the algorithm takes $O(n)$ time. \noindent\mycap{Searching for occurrences.} Denote by $Z$ the set of all Lempel-Ziv factors with lengths in $[r/2..\tau^2)$ starting in $[p..p{+}b]$. Obviously $|Z| = O(d/r)$. Using $\mathit{lz}$, we build in $O(d\log\sigma)$ time a compact trie $R$ of the strings $\{\lvec{z} \colon z\in Z\}$. We add to each $v\in R$ such that $z_v = \lrange{\mathit{lab}(v)} \in Z$ the list of all starting positions of the Lempel-Ziv factors $z_v$ in $[p..p{+}b]$. Obviously, $R$ occupies $O((d/r)\log n) = O(d\log\sigma)$ bits. We construct for the strings $Z$ a succinct Aho-Corasick automaton of~\cite{Belazzougui} occupying $O((d/r)\log n) = O(d\log\sigma)$ bits.
In~\cite{Belazzougui} it is shown that the reporting states of the automaton can be associated with vertices of $R$, so that we can scan $s[0..p{+}d{-}1]$ in $O(n)$ time and store the found positions of the first occurrences of the strings $Z$ in $R$. Finally, by a DFS traversal of $R$, we obtain for each string of $Z$ the position of its first occurrence in $s[0..p{+}d{-}1]$. To find earlier occurrences of other Lempel-Ziv factors starting in $[p..p{+}b]$, we use the algorithms of Sections~\ref{SectShortFactors},~\ref{SectLongFactors}. \section{Long Factors}\label{SectLongFactors} \subsection{Main Tools} Let $k\in \mathbb{N}$. A set $D \subset [0..k)$ is called a \emph{difference cover of $[0..k)$} if for any $x \in [0..k)$, there exist $y,z \in D$ such that $y - z \equiv x\pmod{k}$. Obviously $|D| \ge \sqrt{k}$. Conversely, for any $k \in \mathbb{N}$, there is a difference cover of $[0..k)$ with $O(\sqrt{k})$ elements, and it can be constructed in $O(k)$ time (see \cite{BurkhardtKarkkainen}). \iflong \begin{example} The set $D = \{1,2,4\}$ is a difference cover of~$[0..5)$. \begin{minipage}{0.2\textwidth} $$ \begin{array}{r|c|c|c|c|c} x & 0 & 1 & 2 & 3 & 4 \\ \hline y,z & 1,1 & 2,1 & 1,4 & 4,1 & 1,2 \end{array} $$ \end{minipage} \begin{minipage}{0.6\textwidth} \includegraphics[scale=0.20]{diffcover}\\ \small (the figure is from~\cite{BilleGortzSachVildhoj}.) \end{minipage} \end{example} \fi \begin{lemma}[see \cite{BurkhardtKarkkainen}] Let $D$ be a difference cover of $[0..k)$. For any integers $i,j$, there exists $d \in [0..k)$ such that $(i - d) \bmod k \in D$ and $(j - d) \bmod k \in D$.\label{DiffCoverProperty} \end{lemma} An \emph{ordered tree} is a tree whose leaves are totally ordered (e.g., a trie).
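Returning to difference covers for a moment: both the defining property and Lemma~\ref{DiffCoverProperty} can be verified by brute force on small instances. The following Python sketch (the function names are ours, for illustration only) does exactly that.

```python
from itertools import product

def is_difference_cover(D, k):
    # every residue x in [0..k) must be representable as y - z (mod k), y, z in D
    return all(any((y - z) % k == x for y, z in product(D, D)) for x in range(k))

def lemma_holds(D, k, bound=50):
    # Lemma: for any integers i, j there is d in [0..k) with
    # (i - d) mod k in D and (j - d) mod k in D
    S = set(D)
    return all(any((i - d) % k in S and (j - d) % k in S for d in range(k))
               for i in range(bound) for j in range(bound))
```

For the cover $D = \{1,2,4\}$ of $[0..5)$ both checks succeed, while, e.g., $\{1,2\}$ fails the first one.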
\begin{lemma}[see~\cite{NavarroSadakane}] In $O(k\log k)$ bits of space we can maintain an ordered tree with at most $k$ vertices under the following operations:\\ 1. insertion of a new leaf (possibly splitting an edge) in $O(\log k)$ time;\\ 2. finding the leftmost/rightmost descendant leaf of a vertex in $O(\log k)$ time. \label{OrderedTree} \end{lemma} \begin{lemma}[see \cite{BenderColeDemaineFarachColtonZito}] A linked list can be designed to support the following operations: 1. insertion of a new element in $O(1)$ amortized time; 2. determining whether $x$ precedes $y$, for given elements $x$ and $y$, in $O(1)$ time. \label{OrderedList} \end{lemma} \iflong To support fast navigation in tries, we associate with each vertex $v$ a dictionary mapping the first letters in the labels written on the outgoing edges of $v$ to the corresponding children of $v$. So, whether a trie contains a string with a prefix $w$ can be checked in $O(|w|\log\rho)$ time, where $\rho$ is the alphabet size. Notice that a compact trie for a set of $k$ substrings of the string $s$ can be stored in $O(k\log n)$ bits using pointers for the edge labels. But the described searching time is too slow for our purposes, so, using packed strings and fast string dictionaries, we improve our tries with the operations provided in the following lemma. \else The ordinary searching in tries is too slow for our purposes, so, using packed strings and fast string dictionaries (see ``\emph{ternary trees}''), we improve our tries with the operations provided in the following lemma. \fi \iflong \begin{lemma} \else \begin{lemma}[see arXiv:1504.06712] \fi In $O(k\log n)$ bits of space we can maintain a compact trie for at most $k$ substrings of $s$ under the following operations:\\ 1. insertion of a string $w$ in $O(|w|/r + \log n)$ amortized time;\\ 2.
searching of a string $w$ in $O(|u|/r + \log n)$ time, where $u$ is the longest prefix of $w$ present in the trie; we scan $w$ from left to right $r$ letters at a time and report the vertices of the trie corresponding to the prefixes of lengths $r, 2r, \ldots, \lfloor|u|/r\rfloor r$, and $|u|$ immediately after reading these prefixes. \label{Trie} \end{lemma} \newcommand{\TrieProof}{ \begin{figure} \caption{\small A compact trie $T$ is on the left; the corresponding ternary tree $T'$ is on the right. If $r = 4$, the searching of $w = aaaaccccaaaac$ reports the vertices 6,6,8,8 corresponding to the prefixes of lengths $r$, $2r$, $3r$, and $|w|$, respectively.} \label{fig:trie} \end{figure} \begin{proof} Denote by $S$ the set of all strings stored in $T$. For a substring $t$ of the string $s$, denote by $t'$ a string of length $\lfloor|t|/r\rfloor$ such that for any $i \in [0..|t'|)$, $t'[i]$ is equal to the packed string $t[ri..r(i{+}1){-}1]$. We maintain a special compact trie $T'$ containing the set of strings $\{t' \colon t \in S\}$: the dictionaries associated with the vertices of $T'$ are organized in such a way that the searching and insertion of a string $w'$ both work in $O(|w'| + \log k)$ amortized time; such tries are called \emph{dynamic ternary trees} (see \cite{FranceschiniGrossi} for a comprehensive list of references). For each $v\in T$, we insert in $T'$ a vertex corresponding to the string $t'$ (if there is no such vertex), where $t = \mathit{lab}(v)$ (consider the vertices~3 on the left and~2 on the right of Fig.~\ref{fig:trie}). All vertices of $T'$ are augmented with the pointers to the corresponding vertices of $T$ (depicted as dashed lines in Fig.~\ref{fig:trie}). Let $w$ be a string to be searched in $T$. Using the pointers of $T'$, we can report vertices corresponding to the prefixes $w[0..r{-}1]$, $w[0..2r{-}1], \ldots, w[0..|w'|r{-}1]$ while traversing $T'$. Denote by $u$ the longest prefix of $w$ present in $T$.
Once $u'$ is found in $T'$ in $O(|u'| + \log k)$ time, we start to traverse $T$ reading the string $u[|u'|r..|u|{-}1]$ from the position corresponding to $u[0..|u'|r{-}1]$. This operation requires an additional $O(r\log\sigma) = O(\log n)$ time. The insertion is analogous.\qed \end{proof} } \iflong \TrieProof \fi In the \emph{dynamic tree range reporting problem} one has ordered trees $T_1$ and $T_2$ and a set of pairs $Z = \{(x_1^i,x_2^i)\}$, where $x_1^i$ and $x_2^i$ are leaves of $T_1$ and $T_2$, respectively (see Fig.~\ref{fig:treerange}); the query asks, for given vertices $v_1\in T_1$ and $v_2\in T_2$, to find a pair $(x_1,x_2) \in Z$ such that $x_1$ and $x_2$ are descendants of $v_1$ and $v_2$, respectively; the update inserts new pairs in $Z$ or new vertices in $T_1$ and $T_2$. To solve this problem, we apply the structure of~\cite{Blelloch} and Lemmas~\ref{OrderedTree} and~\ref{OrderedList}. \iflong \begin{lemma} \else \begin{lemma}[see arXiv:1504.06712] \fi The dynamic tree range reporting problem with $|Z|\le k$ can be solved in $O(k\log k)$ bits of space with updates and queries working in $O(\log k)$ amortized time. \label{OrthogonalTree} \end{lemma} \newcommand{\OrthTreeProof}{ \begin{proof} To prove this lemma, we need an additional tool. In the \emph{dynamic orthogonal range reporting problem} one has two linked lists $X$ and $Y$, and a set of pairs $Z = \{(x_i,y_i)\}$, where $x_i \in X$ and $y_i \in Y$; the query asks to report, for given elements $x_1, x_2 \in X$ and $y_1,y_2 \in Y$, a pair $(x,y) \in Z$ such that $x$ lies between $x_1$ and $x_2$ in $X$, and $y$ lies between $y_1$ and $y_2$ in $Y$; the update inserts new pairs in $Z$ or new elements in $X$ or $Y$. \begin{lemma}[see~\cite{Blelloch}] The dynamic orthogonal range reporting problem on at most $k$ pairs can be solved in $O(k\log k)$ bits of space with updates and queries working in $O(\log k)$ amortized time.
\label{OrthogonalRange} \end{lemma} We maintain the ordered tree structure of Lemma~\ref{OrderedTree} on $T_1$ and $T_2$. The order on the lists of leaves of $T_1$ and $T_2$ is maintained with the aid of enhanced linked lists of Lemma~\ref{OrderedList}. To process queries efficiently, we build the dynamic orthogonal range reporting structure of Lemma~\ref{OrthogonalRange} on these lists and the set of pairs $Z$. These structures take overall $O(k\log k)$ bits of space. By Lemmas~\ref{OrderedList},~\ref{OrderedTree},~\ref{OrthogonalRange}, the update of $T_1$, $T_2$, or $Z$ requires $O(\log k)$ amortized time. Suppose we process a query for vertices $v_1\in T_1$ and $v_2\in T_2$. We obtain the leftmost and rightmost descendant leaves of $v_1$ and $v_2$ using Lemma~\ref{OrderedTree}. Then we report a desired pair from $Z$ (or decide that there are no such pairs) using Lemma~\ref{OrthogonalRange}. By Lemmas~\ref{OrderedTree} and \ref{OrthogonalRange}, the query takes $O(\log k)$ amortized time.\qed \end{proof} } \iflong \OrthTreeProof \fi \subsection{Algorithm for Long Factors} \noindent\mycap{Data structures.} At the beginning, using the algorithm of~\cite{BurkhardtKarkkainen}, our algorithm constructs a difference cover $D$ of $[0..\tau^2)$ such that $|D| = \Theta(\tau)$. Denote $M = \{i\in[0..n) \colon i\bmod\tau^2 \in D\}$. The set $M$ is the basic component in our constructions. Suppose the algorithm has reported the Lempel-Ziv factors $z_1, z_2, \ldots, z_{k-1}$ and has already decided, by applying the procedure of Section~\ref{SectMidFactors}, that $|z_k| \ge \tau^2$. Denote $p = |z_1z_2\cdots z_{k-1}|$. We use an integer variable $z$ to compute the length $|z_k|$; initially, $z = \tau^2$. Let us first discuss the related data structures. We use an auxiliary variable $t$ such that $p \le t < p + z$ at any time during the computation; initially $t = p$. Denote $s_i = s[0..i]$.
Our main data structures are compact tries $S$ and $T$: $S$ contains the strings $\lvec{s_i}$ and $T$ contains the strings $s[i{+}1..i{+}\tau^2]$ for all $i\in[0..t) \cap M$ (we append $\tau^2$ letters $s[0]$ to the right of $s$ so that $s[i{+}1..i{+}\tau^2]$ is always defined). Both $S$ and $T$ are augmented with the structures supporting the searching of Lemma~\ref{Trie} and the tree range queries of Lemma~\ref{OrthogonalTree} on pairs of leaves of $S$ and $T$. Since $s[0]$ is a sentinel letter, each $\lvec{s_i}$, for $i \in [0..t) \cap M$, is represented in $S$ by a leaf. The set of pairs for our tree range reporting structure contains the pairs of leaves corresponding to $\lvec{s_i}$ in $S$ and $s[i{+}1..i{+}\tau^2]$ in $T$ for all $i \in [0..t) \cap M$ (see Fig.~\ref{fig:treerange}). Also, we add to $S$ the WA structure of Lemma~\ref{WeightedAncestor}. \begin{figure} \caption{\small $\tau = 3$, $D = \{0, 1, 3, 6\}$ is a diff. cover of $[0..\tau^2)$, positions in $M$ are underlined.} \label{fig:treerange} \end{figure} Let us consider vertices $v \in S$ and $v' \in T$ corresponding to strings $\lvec{t}_v$ and $t_{v'}$, respectively. Denote by $\mathrm{treeRng}(v, v')$ the tree range query that returns either $\mathbf{nil}$ or a suitable pair of descendant leaves of $v$ and $v'$. We have $\mathrm{treeRng}(v, v') \ne \mathbf{nil}$ iff there is $i \in [0..t) \cap M$ such that $s[i{-}|t_v|{+}1..i]s[i{+}1..i{+}|t_{v'}|] = \lvec{t}_vt_{v'}$. Since $|M| \le \frac{n}{\tau^2}|D| = O(\frac{n}{\tau})$, it follows from Lemmas~\ref{WeightedAncestor},~\ref{Trie},~\ref{OrthogonalTree} that $S$ and $T$ with all related structures occupy at most $O(\frac{n}{\tau} \log n) = O(\epsilon n)$ bits. \noindent\mycap{The algorithm.} Suppose the factor $z_k$ occurs in a position $x \in [0..p)$; then, by Lemma~\ref{DiffCoverProperty}, there is a $d \in [0..\tau^2)$ such that $x + |z_k| - d \in M$ and $p + |z_k| - d \in M$. 
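This synchronization can be checked numerically. The following Python sketch uses the cover $D = \{0,1,3,6\}$ of $[0..9)$ with $\tau = 3$ from Fig.~\ref{fig:treerange}; the particular positions $x$, $p$ and lengths $z$ are illustrative values of our choosing.

```python
tau = 3
k = tau * tau
D = {0, 1, 3, 6}                # a difference cover of [0..tau^2) = [0..9)

def in_M(i):
    # M = { i in [0..n) : i mod tau^2 in D }
    return i % k in D

# for an occurrence position x, current position p, and length z, some shift
# d in [0..tau^2) lands both x + z - d and p + z - d in the sampled set M
for x, p, z in [(4, 17, 10), (0, 25, 13), (7, 40, 9)]:
    assert any(in_M(x + z - d) and in_M(p + z - d) for d in range(k))
```

All three assertions hold, as Lemma~\ref{DiffCoverProperty} (with $i = x + z$, $j = p + z$) guarantees.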
Based on this observation, our algorithm, for each $t \in M \cap [p..p{+}z)$, finds the vertex $v\in S$ corresponding to $\lrange{s[p..t]}$ and the vertex $v'\in T$ corresponding to the longest possible prefix of $s[t{+}1..n{+}\tau^2]$ such that $\mathrm{treeRng}(v, v') \ne \mathbf{nil}$ and, with the aid of this bidirectional search, we further increase $z$ if possible. \begin{algorithmic}[1] \For{$(t \gets \min\{i \ge p \colon i \in M\};\;t < p + z;\;t \gets \min\{i > t \colon i \in M\})$} \State $x \gets$ the length of the longest prefix of $s[t{+}1..t{+}\tau^2]$ present in $T$\label{lst:longestPrefixT} \State $y \gets$ the length of the longest prefix of $\lvec{s_t}$ present in $S$\label{lst:longestPrefixS} \If{$y < t - p + 1$} go to line~\ref{lst:insertTrie} \EndIf \State $v \gets $ the vertex corresp. to the longest prefix of $\lvec{s_t}$ present in $S$\label{lst:findPrefix} \State $v \gets \mathrm{weiAnc}(v, t - p + 1);$\label{lst:wancestor} \For{$j = t, t{+}r, t{+}2r, \ldots, t{+}\lfloor x/r\rfloor r, x$ and $v' \in T$ corresp. to $s[t{+}1..j]$}\label{lst:loopTraverse} \If{$j \ge p + z$} \Comment{$|s[p..j]| > |s[p..p{+}z{-}1]|$} \If{$\mathrm{treeRng}(v, v') = \mathbf{nil}$} \label{lst:treeRng} \State $j \gets \max\{j'{\colon}\mathrm{treeRng}(v, u){\ne}\mathbf{nil}$ for $u\in T$ corresp. $s[t{+}1..j']\};$\label{lst:binary} \EndIf \State $z \gets \max\{z, j - p + 1\};$\label{lst:increaseZ} \If{$\mathrm{treeRng}(v, v') = \mathbf{nil}$} $\mathbf{break};$ \EndIf \EndIf \EndFor \label{lst:loopTraverseEnd} \State insert $s[t{+}1..t{+}\tau^2]$ in $T$, $\lvec{s_t}$ in $S$; process the pair of the corresp. leaves\label{lst:insertTrie} \EndFor \end{algorithmic} Some lines need further clarification. Here $\mathrm{weiAnc}(v, i)$ denotes the WA query that returns either the ancestor of $v$ with the minimal weight $\ge i$ or $\mathbf{nil}$ if there is no such ancestor; we assume that any vertex is an ancestor of itself.
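The semantics of $\mathrm{weiAnc}$ can be pinned down by a naive upward walk (a Python sketch for exposition only; the structure of Lemma~\ref{WeightedAncestor} answers such queries in logarithmic time, and the parent/weight maps here are our own illustrative encoding of a trie):

```python
def wei_anc(v, i, parent, weight):
    """Return the ancestor of v (v counts as its own ancestor) with the
    minimal weight >= i, or None if no ancestor qualifies.  Weights are
    assumed non-decreasing from the root toward the leaves, as with
    string depths in a trie."""
    best = None
    while v is not None:
        if weight[v] >= i:
            best = v   # weights only shrink toward the root: the last hit wins
        v = parent.get(v)
    return best
```

For example, on a root-to-leaf path with weights $0, 2, 5, 7$, the query with $i = 3$ returns the vertex of weight $5$.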
Since $M$ has period $\tau^2$, one can compute, for any $t$, $\min\{i > t \colon i\in M\}$ in $O(1)$ time using, for example, an array of length $\tau^2$. The operations on $T$ in lines~\ref{lst:longestPrefixT},~\ref{lst:insertTrie} take, by Lemma~\ref{Trie}, $O(\tau^2/r + \log n)$ time. To perform the similar operations on $S$ in lines~\ref{lst:longestPrefixS},~\ref{lst:findPrefix},~\ref{lst:insertTrie}, we use other techniques (discussed below) working in the same time. The loop in line~\ref{lst:loopTraverse} executes exactly the procedure described in Lemma~\ref{Trie}. To compute $j$ in line~\ref{lst:binary}, we perform a binary search over at most $r$ ancestors of the vertex $v'$; thus, we invoke $\mathrm{treeRng}$ $O(\log r)$ times in line~\ref{lst:binary}. Let us prove the correctness. Suppose we have $\tau^2 \le z < |z_k|$ in some iteration. It suffices to show that the algorithm cannot terminate with this value of $z$. Let $z_k$ occur in a position $x \in [0..p)$. By Lemma~\ref{DiffCoverProperty}, there is a $d \in [0..\tau^2)$ such that $x + z - d \in M$ and $p + z - d \in M$. Thus, the string $s[p..p{+}z{-}d]$ is present in $S$ when $t = p + z - d$ and we find the corresponding vertex $v$ in line~\ref{lst:wancestor}. Moreover, the string $s[p{+}z{-}d{+}1..p{+}z]$ is present in $T$ and we find the vertex corresponding to this or a longer string in the loop~\ref{lst:loopTraverse}--\ref{lst:loopTraverseEnd}. Denote this vertex by $w$; $w$ is either $v'$ or $u$ in line~\ref{lst:binary}. Obviously, $\mathrm{treeRng}(v, w) \ne \mathbf{nil}$, so, we increase $z$ in line~\ref{lst:increaseZ}. Let us estimate the running time. The main loop performs $O(|z_k|/\tau)$ iterations. The operations in lines~\ref{lst:longestPrefixT}, \ref{lst:longestPrefixS}, \ref{lst:findPrefix}, \ref{lst:insertTrie} require, as mentioned above, $O(\tau^2/r + \log n)$ time (some of them will be discussed in the sequel).
One WA query and one modification of the tree range reporting structure take, by Lemmas~\ref{WeightedAncestor} and~\ref{OrthogonalTree}, $O(\log n)$ time. By Lemma~\ref{Trie}, the traversal of $T$ in line~\ref{lst:loopTraverse} requires $O(\tau^2/r + \log n)$ time. For each fixed $t$, every time we perform a $\mathrm{treeRng}$ query in line~\ref{lst:treeRng}, except possibly the first and last queries, we increase $z$ by $r$. Hence, the algorithm executes at most $O(|z_k|/\tau + |z_k|/r)$ such queries in total. Finally, in line~\ref{lst:binary} we invoke $\mathrm{treeRng}$ at most $O(\log r)$ times for every fixed $t$. Putting everything together, we obtain $O(\frac{|z_k|}{\tau}(\tau^2/r + \log n) + \frac{|z_k|}{r}\log n + \frac{|z_k|}{\tau}\log r \log n) = O(|z_k|\log\sigma + |z_k|\log r) = O(|z_k|(\log\sigma + \log\log n))$ overall time. One can find the position of an earlier occurrence of $z_k$ from the pairs of leaves reported in lines~\ref{lst:treeRng},~\ref{lst:binary}. Now let us discuss how to insert and search strings in $S$. \noindent\mycap{Operations on $S$.} The operations on $S$ are based on the fact that for any $i \in [\tau^2..n) \cap M$, $i - \tau^2 \in M$. Let $u$ and $v$ be leaves of $S$ corresponding to some $\lvec{s_j}$ and $\lvec{s_k}$. To compare $\lvec{s_j}$ and $\lvec{s_k}$ in $O(1)$ time via $u$ and $v$, we store all leaves of $S$ in a linked list $K$ of Lemma~\ref{OrderedList} in lexicographical order. To calculate $\mathit{lcp}(\lvec{s_j}, \lvec{s_k})$ in $O(\log n)$ time via $u$ and $v$, we put all leaves of $S$ in an augmented search tree $B$. Finally, we augment $S$ with the ordered tree structure of Lemma~\ref{OrderedTree}. Denote $s'_i = \lrange{s[i{-}\tau^2{+}1..i]}$. We add to $S$ a compact trie $S'$ containing $s'_i$ for all $i \in [0..t) \cap M$ (we assume $s[0]{=}s[-1]{=}\ldots$, so, $S'$ is well-defined). The vertices of $S'$ are linked to the respective vertices of $S$.
Let $w$ be a leaf of $S'$ corresponding to a string $s'_i$. We add to $w$ the set $H_w = \{(p^j_1, p^j_2) \colon j\in [0..t)\cap M\text{ and }s'_j = s'_i\}$, where $p^j_1$ and $p^j_2$ are the pointers to the leaves of $S$ corresponding to $\lvec{s}_{j{-}\tau^2}$ and $\lvec{s_j}$, respectively; $H_w$ is stored in a search tree in the lexicographical order of the strings $\lvec{s}_{j{-}\tau^2}$ referred to by $p^j_1$, so, one can find, for any $k \in [0..t{+}\tau^2) \cap M$, the predecessor or successor of the string $\lvec{s}_{k-\tau^2}$ in $H_w$ in $O(\log n)$ time. It is straightforward that all these structures occupy $O(\frac{n}{\tau}\log n) = O(\epsilon n)$ bits. Suppose $S$ contains $\lvec{s_i}$ for all $i \in [0..t)\cap M$ and we insert $\lvec{s_t}$. We first search $s'_t$ in $S'$. Suppose $S'$ does not contain $s'_t$. We insert $s'_t$ in $S'$ in $O(\tau^2/r + \log n)$ time, by Lemma~\ref{Trie}, then add to $S$ the vertices corresponding to the new vertices of $S'$ and link them to each other. Using the structure of Lemma~\ref{OrderedTree} on $S$, we find the position of $\lvec{s_t}$ in $K$ in $O(\log n)$ time. All other structures are easily modified in $O(\log n)$ time. Now suppose $S'$ has a vertex $w$ corresponding to $s'_t$. In $O(\log n)$ time we find in $H_w$ the pairs $(p^{j}_1, p^{j}_2)$ and $(p^{k}_1, p^{k}_2)$ such that $p^{j}_1$ points to the predecessor $\lvec{s}_{j{-}\tau^2}$ of $\lvec{s}_{t{-}\tau^2}$ in $H_w$ and $p^{k}_1$ points to the successor $\lvec{s}_{k{-}\tau^2}$. So, the leaf corresponding to $\lvec{s_t}$ must be between $\lvec{s}_{j}$ and $\lvec{s}_{k}$. Using $B$, we calculate $\mathit{lcp}(\lvec{s}_{j}, \lvec{s_t}) = \mathit{lcp}(\lvec{s}_{j{-}\tau^2}, \lvec{s}_{t{-}\tau^2}) + \tau^2$ and, similarly, $\mathit{lcp}(\lvec{s}_{k}, \lvec{s_t})$ in $O(\log n)$ time and then find the position at which to insert the new leaf by WA queries on $S$. All other structures are simply modified in $O(\log n)$ time.
Thus, the insertion takes $O(\tau^2/r + \log n)$ time. One can use a similar algorithm for searching $\lvec{s_t}$. \iflong \else \iffalse \section*{Appendix} \subsection*{To Section~\ref{SectMidFactors}} \textbf{The proof of Lemma~\ref{WeightedAncestorSpec}.} \WASpecProof \textbf{The proof of Lemma~\ref{MarkedDescendant}.} \MarkedDescProof \subsection*{To Section~\ref{SectLongFactors}} \textbf{The proof of Lemma~\ref{Trie}.} \TrieProof \textbf{The proof of Lemma~\ref{OrthogonalTree}.} \OrthTreeProof \fi \fi \end{document}
Membrane Topology of Helix 0 of the Epsin N-terminal Homology Domain
Kweon, Dae-Hyuk (Department of Genetic Engineering, Faculty of Life Science and Technology, Sungkyunkwan University); Shin, Yeon-Kyun (Department of Biochemistry, Biophysics and Molecular Biology, Iowa State University); Shin, Jae Yoon (Department of Genetic Engineering, Faculty of Life Science and Technology, Sungkyunkwan University); Lee, Jong-Hwa (School of Bioresource Sciences, Andong National University); Lee, Jung-Bok (School of Bioresource Sciences, Andong National University); Seo, Jin-Ho (Department of Agricultural Biotechnology, Seoul National University); Kim, Yong Sung (Department of Biotechnology, Ajou University)
Molecules and Cells, pages 428-435. Korean Society for Molecular and Cellular Biology (한국분자세포생물학회)
Received: 2006.04.02; Accepted: 2006.05.22; Published: 2006.06.30
Specific interaction of the epsin N-terminal homology (ENTH) domain with the plasma membrane appears to bridge other related proteins to the specific regions of the membrane that are invaginated to form endocytic vesicles. An additional $\alpha$-helix, referred to as helix 0 (H0), is formed in the presence of the soluble ligand inositol-1,4,5-trisphosphate [$Ins(1,4,5)P_3$] at the N terminus of the ENTH domain (amino acid residues 3-15). The ENTH domain alone and full-length epsin cause tubulation of liposomes made of brain lipids. Thus, it is believed that H0 is membrane-inserted when it is coordinated with the phospholipid phosphatidylinositol-4,5-bisphosphate [$PtdIns(4,5)P_2$], resulting in membrane deformation as well as recruitment of accessory factors to the membrane. However, formation of H0 in a real biological membrane has not been demonstrated. In the present study, the membrane structure of H0 was determined by measurement of electron paramagnetic resonance (EPR) nitroxide accessibility. H0 was located at the phosphate head-group region of the membrane.
Moreover, EPR line-shape analysis indicated that no pre-formed H0-like structures were present on normal acidic membranes. $PtdIns(4,5)P_2$ was necessary and sufficient for interaction of the H0 region with the membrane. H0 was stable only in the membrane. In conclusion, the H0 region of the ENTH domain has an intrinsic ability to form H0 in a $PtdIns(4,5)P_2$-containing membrane, perhaps functioning as a sensor of membrane patches enriched with $PtdIns(4,5)P_2$ that will initiate curvature to form endocytic vesicles.
Keywords: Endocytosis; ENTH; EPR; Epsin; Helix 0; Membrane Binding; Phosphatidylinositol-4,5-Bisphosphate
Supported by: Korea Research Foundation
Heinrich Guggenheimer

Heinrich Walter Guggenheimer (July 21, 1924 – March 4, 2021) was a German-born Swiss-American[1] mathematician who contributed to differential geometry, topology, algebraic geometry, and convexity. He also wrote several volumes on Jewish sacred literature. Guggenheimer was born in Nuremberg, Germany, the son of Marguerite Bloch and the physicist Dr. Siegfried Guggenheimer. He studied at the Eidgenössische Technische Hochschule in Zürich, Switzerland, receiving his diploma in 1947 and a D.Sc. in 1951. His dissertation, titled "On complex analytic manifolds with Kähler metric", was published (in German) in Commentarii Mathematici Helvetici.[2] Guggenheimer began his teaching career as a lecturer at the Hebrew University, 1954–56, and was a professor at Bar Ilan University, 1956–59. In 1959 he immigrated to the United States, becoming a naturalized citizen in 1965. Washington State University was his first American post, where he was an associate professor. After one year he moved to the University of Minnesota, where he was promoted to full professor in 1962. While in Minnesota, he wrote Differential Geometry (1963), a textbook treating "classical problems with modern methods". According to Robert Hermann in 1979, "Among today's treatises, the best one from the point of view of the Erlangen Program is Differential Geometry by H. Guggenheimer, Dover Publications, 1977."[3] In 1967 Guggenheimer published Plane Geometry and its Groups (Holden Day) and moved to New York City to teach at Polytechnic University, now called the New York University Tandon School of Engineering. In 1977 he published Applicable Geometry: Global and Local Convexity.[4] Until 1995 Guggenheimer produced a steady stream of papers in mathematical journals. As a supervisor of graduate study in Minnesota and New York, he had six students complete Ph.D.s under his supervision, two in Minnesota and four in New York.
See the link to the Mathematics Genealogy Project below. Guggenheimer also contributed to the literature on Judaism. In 1966 he wrote "Logical problems in Jewish tradition".[5] The next year he contributed "Magic and Dialect" to Diogenes,[6] in which he examines the supposition that "knowledge of the right name gives power over the bearer of that name". In 1995 Guggenheimer presented his A Scholar's Haggadah, a bilingual comparison of variances in the traditions of Passover observance, drawing on Ashkenazic, Sephardic, and Oriental sources. His study of the Jerusalem Talmud provided text and commentary.[7] He died in March 2021 at the age of 96.

Family

On June 6, 1947, Guggenheimer married Eva Auguste Horowitz. Together they wrote Jewish Family Names and their Origins: an Etymological Dictionary (1992).[8] They had two sons, Michael, a professor of Arabic,[9] and Tobias I. S., an architect,[10] and two daughters, Dr. Esther Furman, a biochemist,[11] and Hanna Y. Guggenheimer, an artist.

Notes

1. "Henry Walter Guggenheimer". GENi. Retrieved June 4, 2020.
2. Commentarii Mathematici Helvetici 25:257–97.
3. Robert Hermann (1979) "Conformal and Non-Euclidean Geometry in R3 from the Kleinian Viewpoint", Appendix A, page 367 of Development of Mathematics in the 19th Century by Felix Klein, Math Sci Press, Boston.
4. Robert E. Krieger Publishing Company, Huntington, N.Y.
5. Ph. Longworth, ed. (1966) Confrontations with Judaism.
6. Diogenes 15:80–86.
7. H. Guggenheimer (2000 to 2015) The Jerusalem Talmud, De Gruyter.
8. KTAV Publishing House, Inc. ISBN 978-0-88125-297-2, 882 pages.
9. Guggenheimer, Michael. "Faculty". NYU.edu. Retrieved June 3, 2020.
10. Guggenheimer, Tobias. "NYS Professions-Online Verifications". nysed.gov. Retrieved June 3, 2020.
11. Guggenheimer, Esther. "Esther Guggenheimer-Furman's research works". ResearchGate. Retrieved June 3, 2020.

References

• Allen G. Debus, "Heinrich Walter Guggenheimer", Who's Who in Science, 1968.
• Heinrich Guggenheimer at the Mathematics Genealogy Project.
• "Guggenheimer, Heinrich Walter" in American Men and Women of Science, 25th edition, Gale, 2008.
# Linear programming and its applications in optimization

Linear programming (LP) is a mathematical optimization technique that deals with optimization problems of the form:

$$\text{minimize } \quad \mathbf{c}^T \mathbf{x}$$
$$\text{subject to } \quad \mathbf{A} \mathbf{x} \leq \mathbf{b}$$
$$\quad \mathbf{x} \geq \mathbf{0}$$

where $\mathbf{c}$ is the objective function vector, $\mathbf{A}$ is the constraint matrix, $\mathbf{b}$ is the constraint vector, and $\mathbf{x}$ is the decision variable vector.

Linear programming has numerous applications in optimization, including:

- Production planning and inventory control
- Resource allocation and scheduling
- Network optimization
- Financial portfolio optimization

Consider a production planning problem where a company wants to maximize profit by producing and selling a certain number of products. The company has a limited amount of raw materials and production capacity. The objective is to determine the optimal production levels for each product while meeting the constraints on raw materials and production capacity. The LP formulation for this problem would look like:

$$\text{maximize } \quad \mathbf{c}^T \mathbf{x}$$
$$\text{subject to } \quad \mathbf{A} \mathbf{x} \leq \mathbf{b}$$
$$\quad \mathbf{x} \geq \mathbf{0}$$

where $\mathbf{c}$ is the profit vector, $\mathbf{A}$ is the constraint matrix, $\mathbf{b}$ is the constraint vector, and $\mathbf{x}$ is the production vector.

## Exercise

Solve the following LP problem:

$$\text{maximize } \quad 3x_1 + 4x_2$$
$$\text{subject to } \quad x_1 + x_2 \leq 10$$
$$\quad 2x_1 + x_2 \leq 8$$
$$\quad x_1, x_2 \geq 0$$

Answer:

# Model predictive control framework

Model predictive control (MPC) is an advanced control technique that uses optimization algorithms to make decisions based on a mathematical model of the system being controlled. The MPC framework consists of the following steps:

1. Define a mathematical model of the system being controlled.
2. Formulate an optimization problem that represents the control problem.
3. Solve the optimization problem to obtain the optimal control inputs.
4. Implement the optimal control inputs to control the system.

MPC is particularly useful in real-time control systems, such as robotics and automotive systems, where the system dynamics are complex and the control inputs need to be computed quickly.

Consider a simple mass-spring-damper system:

$$\frac{d^2 x}{dt^2} + 2\zeta \omega_n \frac{d x}{dt} + \omega_n^2 x = F(t)$$

where $x$ is the displacement, $\omega_n$ is the natural frequency, $\zeta$ is the damping ratio, and $F(t)$ is the external force. The MPC framework for this system would involve defining a mathematical model of the mass-spring-damper system, formulating an optimization problem that represents the control problem, solving the optimization problem to obtain the optimal control inputs, and implementing the optimal control inputs to control the system.

## Exercise

Formulate an optimization problem that represents the control problem for the mass-spring-damper system described in the example.

Answer:

# Nonlinear programming and its applications in optimization

Nonlinear programming (NLP) is a mathematical optimization technique that deals with optimization problems where the objective function and/or the constraints are nonlinear. NLP has numerous applications in optimization, including:

- Optimal control of nonlinear systems
- Design optimization of structures and materials
- Machine learning and artificial intelligence
- Economic and engineering optimization problems

Consider an optimal control problem for a nonlinear system:

$$\text{minimize } \quad J(x(t)) = \int_0^T \frac{1}{2} x(t)^2 dt + \frac{1}{2} x(T)^2$$
$$\text{subject to } \quad \dot{x}(t) = x(t)^2$$
$$\quad x(0) = 0$$

where $x(t)$ is the state variable, $t$ is time, and $\dot{x}(t)$ is the time derivative of $x(t)$.
The NLP formulation for this problem would involve defining a mathematical model of the nonlinear system, formulating an optimization problem that represents the control problem, and solving the optimization problem to obtain the optimal control inputs.

## Exercise

Solve the following NLP problem:

$$\text{minimize } \quad J(x) = \frac{1}{2} x^2$$
$$\text{subject to } \quad x^3 - 2x + 1 \leq 0$$
$$\quad x \geq 0$$

Answer:

# Optimization techniques for real-time systems

Optimization techniques for real-time systems involve solving optimization problems in real-time, often with constraints on computational time. Some commonly used optimization techniques for real-time systems include:

- Interior-point methods
- Sequential quadratic programming (SQP)
- Online convex optimization
- Stochastic optimization

Consider a real-time control problem for a robotic arm where the objective is to minimize the energy consumption while moving the arm from one position to another. The control problem can be formulated as an optimization problem, and various optimization techniques can be applied to solve it in real-time.

## Exercise

Discuss the advantages and disadvantages of using interior-point methods, sequential quadratic programming (SQP), online convex optimization, and stochastic optimization for real-time optimization problems.

Answer:

# State estimation and its importance in control systems

State estimation is the process of estimating the state of a dynamic system from its observed outputs. State estimation is important in control systems because it provides a way to make accurate control decisions based on the current state of the system. Some common state estimation techniques include:

- Kalman filtering
- Extended Kalman filtering
- Unscented Kalman filtering
- Recursive least squares (RLS)

Consider a control system for a robotic arm where the state variables are the position and velocity of the arm.
State estimation can be used to estimate the current position and velocity of the arm based on its observed outputs, such as the force sensor and encoder readings.

## Exercise

Discuss the advantages and disadvantages of using Kalman filtering, extended Kalman filtering, unscented Kalman filtering, and recursive least squares (RLS) for state estimation in control systems.

Answer:

# Model predictive control for state estimation

Model predictive control (MPC) can be used for state estimation by formulating an optimization problem that represents the state estimation problem. The MPC framework for state estimation involves defining a mathematical model of the system, formulating an optimization problem that represents the state estimation problem, solving the optimization problem to obtain the estimated state, and implementing the estimated state to control the system.

Consider the mass-spring-damper system described in the previous section. The MPC framework for state estimation would involve defining a mathematical model of the mass-spring-damper system, formulating an optimization problem that represents the state estimation problem, solving the optimization problem to obtain the estimated state, and implementing the estimated state to control the system.

## Exercise

Formulate an optimization problem that represents the state estimation problem for the mass-spring-damper system described in the example.

Answer:

# Linear model predictive control

Linear model predictive control (LMPC) is a special case of MPC where the mathematical model of the system and the constraints are linear. LMPC is used to make control decisions based on a linear model of the system, which simplifies the optimization problem and allows for faster computation.
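In practice, each LMPC step amounts to solving a small linear program of exactly this form online. The following minimal sketch uses `scipy.optimize.linprog`; the cost vector and constraint data below are illustrative stand-ins invented for the demonstration, not taken from the exercises:

```python
import numpy as np
from scipy.optimize import linprog

# Illustrative LMPC-style linear program (all numbers are made up):
#   minimize   2*u1 + 3*u2               (control-effort cost)
#   subject to u1 + u2 >= 4              (reach the setpoint)
#              0 <= u1 <= 3, 0 <= u2 <= 3  (actuator limits)
c = np.array([2.0, 3.0])
A_ub = np.array([[-1.0, -1.0]])  # -u1 - u2 <= -4 encodes u1 + u2 >= 4
b_ub = np.array([-4.0])
bounds = [(0.0, 3.0), (0.0, 3.0)]

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
assert res.success
print(res.x)    # optimal control move, here [3. 1.]
print(res.fun)  # optimal cost, here 9.0
```

In a receding-horizon controller, a problem like this would be re-solved at every sampling instant with the constraint data updated from the latest state estimate.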
Consider a linear model predictive control problem for a mass-spring-damper system:

$$\text{minimize } \quad \mathbf{c}^T \mathbf{x}$$
$$\text{subject to } \quad \mathbf{A} \mathbf{x} \leq \mathbf{b}$$
$$\quad \mathbf{x} \geq \mathbf{0}$$

where $\mathbf{c}$ is the objective function vector, $\mathbf{A}$ is the constraint matrix, $\mathbf{b}$ is the constraint vector, and $\mathbf{x}$ is the decision variable vector.

## Exercise

Solve the following LMPC problem:

$$\text{maximize } \quad 3x_1 + 4x_2$$
$$\text{subject to } \quad x_1 + x_2 \leq 10$$
$$\quad 2x_1 + x_2 \leq 8$$
$$\quad x_1, x_2 \geq 0$$

Answer:

# Nonlinear model predictive control

Nonlinear model predictive control (NMPC) is a special case of MPC where the mathematical model of the system and the constraints are nonlinear. NMPC is used to make control decisions based on a nonlinear model of the system, which allows for more accurate control decisions but requires more computation.

Consider a nonlinear model predictive control problem for a mass-spring-damper system:

$$\text{minimize } \quad J(x(t)) = \int_0^T \frac{1}{2} x(t)^2 dt + \frac{1}{2} x(T)^2$$
$$\text{subject to } \quad \dot{x}(t) = x(t)^2$$
$$\quad x(0) = 0$$

where $x(t)$ is the state variable, $t$ is time, and $\dot{x}(t)$ is the time derivative of $x(t)$.

## Exercise

Solve the following NMPC problem:

$$\text{minimize } \quad J(x) = \frac{1}{2} x^2$$
$$\text{subject to } \quad x^3 - 2x + 1 \leq 0$$
$$\quad x \geq 0$$

Answer:

# Advanced model predictive control techniques

Advanced model predictive control techniques involve using more sophisticated optimization algorithms and techniques to make control decisions. Some common advanced MPC techniques include:

- Convex optimization
- Second-order cone programming (SOCP)
- Mixed-integer programming (MIP)
- Global optimization

Consider an advanced MPC problem for a robotic arm where the objective is to minimize the energy consumption while moving the arm from one position to another.
The advanced MPC technique can involve using convex optimization, second-order cone programming (SOCP), mixed-integer programming (MIP), or global optimization to solve the optimization problem.

## Exercise

Discuss the advantages and disadvantages of using convex optimization, second-order cone programming (SOCP), mixed-integer programming (MIP), and global optimization for advanced model predictive control techniques.

Answer:

# Applications of real-time optimal control and state estimation

Real-time optimal control and state estimation have numerous applications in various fields, including:

- Robotics and automation
- Automotive systems
- Aerospace systems
- Power systems
- Manufacturing and production
- Biomedical systems
- Financial markets

Consider a real-time control problem for a robotic arm where the objective is to minimize the energy consumption while moving the arm from one position to another. The control problem can be formulated as an optimization problem, and various optimization techniques can be applied to solve it in real-time.

## Exercise

Discuss the applications of real-time optimal control and state estimation in robotics and automation, automotive systems, aerospace systems, power systems, manufacturing and production, biomedical systems, and financial markets.

Answer:

# Conclusion and future developments

Real-time optimal control and state estimation are powerful techniques that can be used to make accurate control decisions in various fields. The future development of these techniques will likely involve advancements in optimization algorithms, more sophisticated mathematical models, and the integration of artificial intelligence and machine learning.

## Exercise

Discuss the future developments of real-time optimal control and state estimation, including advancements in optimization algorithms, more sophisticated mathematical models, and the integration of artificial intelligence and machine learning.

Answer:
\begin{document} \title{Convergence of local supermartingales and Novikov-Kazamaki type conditions for processes with jumps\thanks{We thank Tilmann Bl\"ummel, Pavel Chigansky, Sam Cohen, Christoph Czichowsky, Freddy Delbaen, Moritz D\"umbgen, Hardy Hulley, Jan Kallsen, Ioannis Karatzas, Kostas Kardaras, Kasper Larsen, and Nicolas Perkowski for discussions on the subject matter of this paper. We are indebted to Dominique L\'epingle for sending us the paper \citet{Lepingle_Memin_integrabilite}. M.L.~acknowledges funding from the European Research Council under the European Union's Seventh Framework Programme (FP/2007-2013) / ERC Grant Agreement n.~307465-POLYTE. J.R.~acknowledges generous support from the Oxford-Man Institute of Quantitative Finance, University of Oxford, where a major part of this work was completed. }} \author{Martin Larsson\thanks{Department of Mathematics, ETH Zurich, R\"amistrasse 101, CH-8092, Zurich, Switzerland. E-mail: [email protected]} \and Johannes Ruf\thanks{Department of Mathematics, University College London, Gower Street, London WC1E 6BT, United Kingdom. E-mail: [email protected]}} \date{November 23, 2014} \maketitle \begin{abstract} We characterize the event of convergence of a local supermartingale. Conditions are given in terms of its predictable characteristics and quadratic variation. The notion of extended local integrability plays a key role. We then apply these characterizations to provide a novel proof for the sufficiency and necessity of Novikov-Kazamaki type conditions for the martingale property of nonnegative local martingales with jumps. {\bf Keywords:} Supermartingale convergence, extended localization, stochastic exponential, local martingale, uniform integrability, Novikov-Kazamaki conditions, F\"ollmer measure. {\bf MSC2010 subject classification:} Primary 60G07; secondary: 60G17, 60G30, 60G44. 
\end{abstract} \section{Introduction} Among the most fundamental results in the theory of martingales are the martingale and supermartingale convergence theorems of \citet{Doob:1953}. One of Doob's results states that if $X$ is a nonnegative supermartingale, then $\lim_{t\to\infty}X_t$ exists almost surely. If $X$ is not nonnegative, or more generally fails to satisfy suitable integrability conditions, then the limit need not exist, or may only exist with some probability. One is therefore naturally led to search for convenient characterizations of the event of convergence $D=\{\lim_{t\to\infty}X_t \text{ exists in }\mathbb{R}\}$. An archetypical example of such a characterization arises from the Dambis-Dubins-Schwarz theorem: if~$X$ is a continuous local martingale, then $D=\{[X,X]_{\infty-}<\infty\}$ almost surely. This equality fails in general, however, if $X$ is not continuous, in which case it is natural to ask for a description of how the two events differ. The first main goal of the present paper is to address questions of this type: how can one describe the event of convergence of a process $X$, as well as of various related processes of interest? We do this in the setting where $X$ is a {\em local supermartingale on a stochastic interval $\lc0,\tau[\![$}, where $\tau$ is a foretellable time. (Precise definitions are given below, but we remark already here that every predictable time is foretellable.) While the continuous case is relatively simple, the general case offers a much wider range of phenomena. For instance, there exist locally bounded martingales $X$ for which both $\lim_{t\to \infty}X_t$ exists in $\mathbb{R}$ and $[X,X]_{\infty-}=\infty$, or for which $\liminf_{t\to\infty}X_t=-\infty$, $\limsup_{t\to\infty}X_t=\infty$, and $[X,X]_{\infty-}<\infty$ hold simultaneously almost surely. We provide a large number of examples of this type. To tame this disparate behavior, some form of restriction on the jump sizes is needed. 
The correct additional property is that of {\em extended local integrability}, which is a modification of the usual notion of local integrability. Our original motivation for considering questions of convergence came from the study of Novikov-Kazamaki type conditions for a nonnegative local martingale $Z={\mathcal E}(M)$ to be a uniformly integrable martingale. Here ${\mathcal E}(\cdot)$ denotes the stochastic exponential and $M$ is a local martingale. This problem was originally posed by \citet{Girsanov_1960}, and is of great importance in a variety of applications, for example in mathematical finance, where $Z$ corresponds to the Radon-Nikodym density process of a so-called risk-neutral measure. Early sufficient conditions were obtained by \citet{Gikhman_Skorohod_1972} and \citet{Lip_Shir_1972}. An important milestone is due to \cite{Novikov} who proved that if $M$ is continuous, then $\mathbb{E}[e^{\frac{1}{2}[M,M]_\infty}]<\infty$ implies that $Z$ is a uniformly integrable martingale. \cite{Kazamaki_1977} and \citet{Kazamaki_1983} later proved that $\sup_\sigma \mathbb{E}[e^{\frac{1}{2}M_\sigma}]<\infty$ is in fact sufficient, where the supremum is taken over all bounded stopping times~$\sigma$. These results have been generalized in a number of ways. The general case where $M$ may exhibit jumps has been considered by \cite{Novikov:1975}, \citet{Lepingle_Memin_integrabilite,Lepingle_Memin_Sur}, \citet{Okada_1982}, \citet{Yan_1982}, \citet{Kallsen_Shir}, \citet{Protter_Shimbo}, \citet{Sokol2013_optimal}, and \citet{GlauGrbac_2014}, among others. Approaches related to the one we present here can be found in \cite{Kabanov/Liptser/Shiryaev:1979,Kabanov/Liptser/Shiryaev:1980}, \citet{CFY}, \citet{KMK2010}, \citet{Mayerhofer_2011}, \citet{MU_martingale}, \citet{Ruf_Novikov, Ruf_martingale}, \citet{Blanchet_Ruf_2012}, \citet{Klebaner:2014}, and \citet{Hulley_Ruf:2015}, among others. 
Let us indicate how questions of convergence arise naturally in this context, assuming for simplicity that $M$ is continuous and $Z$ strictly positive, which is the situation studied by~\cite{Ruf_Novikov}. For any bounded stopping time $\sigma$ we have \[ \mathbb{E}_\mathbb{P}\left[ e^{ \frac{1}{2} [M,M]_\sigma } \right] = \mathbb{E}_\mathbb{P}\left[Z_\sigma e^{ -M_\sigma + [M,M]_\sigma} \right]. \] While {\em a priori} $Z$ need not be a uniformly integrable martingale, one can still find a probability measure~$\mathbb{Q}$, sometimes called the {\em F\"ollmer measure}, under which $Z$ may explode, say at time $\tau_\infty$, and such that $\mathrm{d} \mathbb{Q} / \mathrm{d} \mathbb{P} |_{\mathcal F_\sigma} = Z_\sigma$ holds for any bounded stopping time $\sigma<\tau_\infty$. For such stopping times, \[ \mathbb{E}_\mathbb{P}\left[ e^{ \frac{1}{2} [M,M]_\sigma } \right] = \mathbb{E}_\mathbb{Q}\left[ e^{N_\sigma} \right], \] where $N=-M+[M,M]$ is a local $\mathbb{Q}$--martingale on $\lc0,\tau_\infty[\![$. The key point is that $Z$ is a uniformly integrable martingale under $\mathbb{P}$ if and only if $\mathbb{Q}(\lim_{t\to\tau_\infty}N_t\text{ exists in }\mathbb{R})=1$. The role of Novikov's condition is to guarantee that the latter holds. In the continuous case there is not much more to say; it is the extension of this methodology to the general jump case that requires more sophisticated convergence criteria for the process $X=N$, as well as for certain related processes. Moreover, the fact that $\tau_\infty$ may {\em a priori} be finite explains why we explicitly allow $X$ to be defined on a stochastic interval when we develop the theory. Our convergence results allow us to give simple and transparent proofs of most Novikov-Kazamaki type conditions that are available in the literature. We are also led to necessary and sufficient conditions of this type, yielding improvements of existing criteria, even in the continuous case. 
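Let us briefly record why $N$ is a local $\mathbb{Q}$--martingale in the continuous case just discussed; this is a standard consequence of the Girsanov--Meyer theorem. With $Z={\mathcal E}(M)$, so that $\mathrm{d} Z = Z_-\,\mathrm{d} M$, every local $\mathbb{P}$--martingale $L$ gives rise to a local $\mathbb{Q}$--martingale
\[
L - \frac{1}{Z_-}\cdot[L,Z] = L - [L,M]
\]
on $\lc0,\tau_\infty[\![$. Taking $L=M$ shows that $M-[M,M]$, and hence $N=-M+[M,M]$, is a local $\mathbb{Q}$--martingale on $\lc0,\tau_\infty[\![$.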
The rest of the paper is organized as follows. Section~\ref{S:prelim} contains notational conventions and mathematical preliminaries. Section~\ref{S:EL} introduces the notion of extended localization and establishes some general properties. Our main convergence theorems and a number of corollaries are given in Section~\ref{S:convergence}. Section~\ref{S:NK} is devoted to Novikov-Kazamaki type conditions. Section~\ref{S:examp} contains counterexamples illustrating the sharpness of the results obtained in Sections~\ref{S:convergence} and~\ref{S:NK}. Auxiliary material is developed in the appendices: Appendix~\ref{A:SE} reviews stochastic exponentials of semimartingales on stochastic intervals. Appendix~\ref{S:follmer} reviews the F\"ollmer measure associated with a nonnegative local martingale. Appendix~\ref{App:Z} characterizes extended local integrability under the F\"ollmer measure. Finally, Appendix~\ref{app:embed} discusses a path space embedding needed to justify our use of the F\"ollmer measure in full generality.
The stochastic integral of a predictable function $F$ with respect to a random measure $\mu$ is denoted $F*\mu$. For two stopping times $\sigma$ and $\tau$, the stochastic interval $[\![\sigma,\tau[\![$ is the set \[ [\![\sigma,\tau[\![ = \{ (\omega,t)\in\Omega\times\mathbb{R}_+ : \sigma(\omega)\le t<\tau(\omega)\}. \] Stochastic intervals such as $]\!]\sigma,\tau]\!]$ are defined analogously. Note that all stochastic intervals are disjoint from $\Omega\times\{\infty\}$. A process on a stochastic interval $\lc0,\tau[\![$, where $\tau$ is a stopping time, is a measurable map $X:\lc0,\tau[\![\to\overline\mathbb{R}$. If $\tau'\le\tau$ is another stopping time, we may view $X$ as a process on $\lc0,\tau'[\![$ by considering its restriction to that set; this is often done without explicit mentioning. We say that $X$ is optional (predictable, progressive) if it is the restriction to $\lc0,\tau[\![$ of an optional (predictable, progressive) process. In this paper, $\tau$ will be a foretellable time; that is, a $[0,\infty]$--valued stopping time that admits a nondecreasing sequence $(\tau_n)_{n \in \mathbb{N}}$ of stopping times with $\tau_n<\tau$ almost surely for all $n \in \mathbb{N}$ on the event $\{\tau>0\}$ and $\lim_{n \to \infty} \tau_n = \tau$ almost surely. Such a sequence is called an announcing sequence. We view the stopped process $X^{\tau_n}$ as a process on $\lc0,\infty[\![$ by setting $X_t=X_{\tau_n}\1{\tau_n<\tau}$ for all $t\ge\tau_n$. If $\tau$ is a foretellable time and $X$ is a process on $\lc0,\tau[\![$, we say that $X$ is a semimartingale on $\lc0,\tau[\![$ if there exists an announcing sequence $(\tau_n)_{n\in \mathbb{N}}$ for $\tau$ such that $X^{\tau_n}$ is a semimartingale for each $n \in \mathbb{N}$. Local martingales and local supermartingales on $\lc0,\tau[\![$ are defined analogously. Basic notions for semimartingales carry over by localization to semimartingales on stochastic intervals. 
For instance, if $X$ is a semimartingale on $\lc0,\tau[\![$, its quadratic variation process $[X,X]$ is defined as the process on $\lc0,\tau[\![$ that satisfies $[X,X]^{\tau_n} = [X^{\tau_n}, X^{\tau_n}]$ for each $n \in \mathbb{N}$. Its jump measure~$\mu^X$ and compensator~$\nu^X$ are defined analogously, as are stochastic integrals with respect to $X$ (or $\mu^X$, $\nu^X$, $\mu^X-\nu^X$). In particular, $H$ is called $X$--integrable if it is $X^{\tau_n}$--integrable for each $n\in\mathbb{N}$, and $H\cdot X$ is defined as the semimartingale on $[\![ 0, \tau[\![$ that satisfies $(H \cdot X)^{\tau_n} = H \cdot X^{\tau_n}$ for each $n \in \mathbb{N}$. Similarly, $G_{\rm loc}(\mu^X)$ denotes the set of predictable functions $F$ for which the compensated integral $F*(\mu^{X^{\tau_n}}-\nu^{X^{\tau_n}})$ is defined for each $n\in\mathbb{N}$ (see Definition~II.1.27 in~\cite{JacodS}), and $F*(\mu^X-\nu^X)$ is the semimartingale on $\lc0,\tau[\![$ that satisfies $(F*(\mu^X-\nu^X))^{\tau_n}=F*(\mu^{X^{\tau_n}}-\nu^{X^{\tau_n}})$ for all $n\in\mathbb{N}$. One easily verifies that all these notions are independent of the particular sequence $(\tau_n)_{n\in\mathbb{N}}$. We refer to \citet{Maisonneuve1977}, \citet{Jacod_book}, and Appendix~A in~\citet{CFR2011} for further details on local martingales on stochastic intervals. Since we do not require ${\mathcal F}$ to contain all $\mathbb{P}$--nullsets, we may run into measurability problems with quantities like $\sup_{t<\tau}X_t$ for an optional (predictable, progressive) process $X$ on $\lc0,\tau[\![$. However, the left-continuous process $\sup_{t<\cdot}X_t$ is adapted to the $\mathbb{P}$--augmentation $\overline\mathbb{F}$ of $\mathbb{F}$; see the proof of Theorem~IV.33 in~\cite{Dellacherie/Meyer:1978}. Hence it is $\overline \mathbb{F}$--predictable, so we can find an $\mathbb{F}$--predictable process $U$ that is indistinguishable from it; see Lemma~7 in Appendix~1 of~\cite{Dellacherie/Meyer:1982}. 
Thus the process $V=U\vee X$ is $\mathbb{F}$-optional (predictable, progressive) and indistinguishable from $\sup_{t\le\cdot}X_t$. When writing the latter, we always refer to the indistinguishable process $V$. We define the set \[ {\mathcal T} = \{ \tau : \text{ $\tau$ is a bounded stopping time} \}. \] Finally, we emphasize the convention $Y(\omega)\boldsymbol 1_A(\omega)=0$ for all (possibly infinite-valued) random variables~$Y$, events $A\in{\mathcal F}$, and $\omega\in\Omega\setminus A$. \section{The notion of extended localization} \label{S:EL} The following strengthening of the notion of local integrability and boundedness turns out to be very useful. It is a mild variation of the notion of $\gamma$-localization introduced by~\cite{CS:2005}, see also \cite{Stricker_1981}. \begin{definition}[Extended locally integrable / bounded] \label{D:extended} Let $\tau$ be a foretellable time and $X$ a progressive process on $\lc0,\tau[\![$. Let $D\in{\mathcal F}$. We call $X$ {\em extended locally integrable on $D$} if there exists a nondecreasing sequence $(\tau_n)_{n\in\mathbb{N}}$ of stopping times as well as a sequence $(\Theta_n)_{n \in \mathbb{N}}$ of integrable random variables such that the following two conditions hold almost surely:\begin{enumerate} \item $\sup_{t\ge0} |X^{\tau_n}_t|\le\Theta_n$ for each $n\in\mathbb{N}$. \item\label{D:extended:ii} $D\subset\bigcup_{n\in\mathbb{N}}\{\tau_n \geq \tau\}$. \end{enumerate} If $D=\Omega$, we simply say that $X$ is {\em extended locally integrable}. Similarly, we call $X$ {\em extended locally bounded (on $D$)} if $\Theta_n$ can be taken deterministic for each $n \in \mathbb{N}$. \qed \end{definition} Extended localization naturally suggests itself when one deals with questions of convergence. The reason is the simple inclusion $D\subset\bigcup_{n\in\mathbb{N}}\{X=X^{\tau_n}\}$, where $D$ and $(\tau_n)_{n \in \mathbb{N}}$ are as in Definition~\ref{D:extended}. 
This inclusion shows that to prove that $X$ converges on $D$, it suffices to prove that each $X^{\tau_n}$ converges on $D$. If $X$ is extended locally integrable on $D$, one may thus assume when proving such results that $X$ is in fact uniformly bounded by an integrable random variable. This extended localization procedure will be used repeatedly throughout the paper. It is clear that a process is extended locally integrable if it is extended locally bounded. We now provide some further observations on this strengthened notion of localization. \begin{lemma}[Properties of extended localization] \label{L:ELI} Let $\tau$ be a foretellable time, $D\in{\mathcal F}$, and $X$ a process on $\lc0,\tau[\![$. \begin{enumerate} \item\label{L:ELI:1} If $X=X'+X''$, where $X'$ and $X''$ are extended locally integrable (bounded) on $D$, then $X$ is extended locally integrable (bounded) on $D$. \item\label{L:ELI:2} If there exists a nondecreasing sequence $(\tau_n)_{n \in \mathbb{N}}$ of stopping times with $D\subset\bigcup_{n\in\mathbb{N}}\{\tau_n \geq \tau\}$ such that $X^{\tau_n}$ is extended locally integrable (bounded) on $D$ for each $n \in \mathbb{N}$, then $X$ is extended locally integrable (bounded) on $D$. In words, an extended locally extended locally integrable (bounded) process is extended locally integrable (bounded). \item\label{L:ELI:3} Suppose $X$ is c\`adl\`ag adapted. Then $\sup_{t<\tau}|X_t|<\infty$ on $D$ and $\Delta X$ is extended locally integrable (bounded) on $D$ if and only if $X$ is extended locally integrable (bounded) on $D$. \item\label{L:ELI:5} Suppose $X$ is c\`adl\`ag adapted. Then $x \boldsymbol 1_{x > 1} * \mu^X$ is extended locally integrable on $D$ if and only if $x \boldsymbol 1_{x > 1} * \nu^X$ is extended locally integrable on $D$. Either of these two conditions implies that $(\Delta X)^+$ is extended locally integrable on $D$. \item\label{L:ELI:6} Suppose $X$ is optional.
If $\sup_{\sigma\in{\mathcal T}}\mathbb{E}[|X_\sigma|\1{\sigma<\tau}]<\infty$ then $X$ is extended locally integrable. \item\label{L:ELI:4} Suppose $X$ is predictable. Then $\sup_{t<\tau}|X_t|<\infty$ on $D$ if and only if $X$ is extended locally bounded on $D$ if and only if $X$ is extended locally integrable on $D$. \end{enumerate} \end{lemma} \begin{proof} The statement in \ref{L:ELI:1} follows by defining a sequence $(\tau_n)_{n \in \mathbb{N}}$ of stopping times by $\tau_n = \tau_n' \wedge \tau_n''$, where $(\tau_n')_{n \in \mathbb{N}}$ and $(\tau_n'')_{n \in \mathbb{N}}$ localize $X'$ and $X''$ extendedly. For \ref{L:ELI:2}, suppose without loss of generality that $\tau_n\le\tau$ for all $n\in\mathbb{N}$, and let $(\tau_m^{(n)})_{m \in \mathbb{N}}$ localize $X^{\tau_n}$ extendedly, for each $n \in \mathbb{N}$. Let $m_n$ be the smallest index such that $\mathbb{P}(D \cap \{\tau_{m_n}^{(n)} < \tau_n\}) \leq 2^{-n}$ for each $n \in \mathbb{N}$. Next, define $\widehat\tau_0=0$ and then iteratively $\widehat\tau_n = \tau_n \wedge (\tau_{m_n}^{(n)} \vee \widehat\tau_{n-1})$ for each $n \in \mathbb{N}$, and check, by applying the Borel--Cantelli lemma, that the sequence $(\widehat \tau_n)_{n \in \mathbb{N}}$ satisfies the conditions of Definition~\ref{D:extended}; indeed, since $\sum_{n\in\mathbb{N}}\mathbb{P}(D \cap \{\tau_{m_n}^{(n)} < \tau_n\})<\infty$, almost surely on $D$ we have $\tau_{m_n}^{(n)} \geq \tau_n$ for all but finitely many $n \in \mathbb{N}$. For \ref{L:ELI:3} note that the sequence $(\tau_n)_{n \in \mathbb{N}}$ of crossing times, given by $\tau_n=\inf\{t \geq 0:|X_t|\ge n\}$, satisfies Definition~\ref{D:extended}\ref{D:extended:ii}. Thus, by~\ref{L:ELI:2}, it suffices to prove the statement with $X$ replaced by $X^{\tau_n}$. The equivalence then follows directly from the inequalities $|X^{\tau_n}|\le n+|\Delta X_{\tau_n}|\1{\tau_n<\tau}$ and $|\Delta X^{\tau_n}|\le 2 n+|X^{\tau_n}|$. To prove~\ref{L:ELI:5}, suppose first that $x\boldsymbol 1_{x>1}*\mu^X$ is extended locally integrable on~$D$.
In view of~\ref{L:ELI:2} we may assume by localization that it is dominated by some integrable random variable~$\Theta$, which then yields $\mathbb{E}[x\boldsymbol 1_{x>1}*\nu^X_{\tau-}]\le\mathbb{E}[\Theta]<\infty$. Thus $x\boldsymbol 1_{x>1}*\nu^X$ is dominated by the integrable random variable $x\boldsymbol 1_{x>1}*\nu^X_{\tau-}$, as required. For the converse direction simply interchange $\mu^X$ and $\nu^X$. The fact that $(\Delta X)^+ \leq 1 + x \boldsymbol 1_{x > 1} * \mu^X$ then allows us to conclude. We now prove \ref{L:ELI:6}, supposing without loss of generality that $X\ge0$. Let $\overline{\mathcal F}$ be the $\mathbb{P}$-completion of ${\mathcal F}$, and write $\mathbb{P}$ also for its extension to $\overline{\mathcal F}$. Define $C=\{\sup_{t<\tau}X_t=\infty\}\in\overline{\mathcal F}$. We first show that $\mathbb{P}(C)=0$, and assume for contradiction that $\mathbb{P}(C)>0$. For each $n\in\mathbb{N}$ define the optional set $O_n=\{t<\tau \text{ and } X_t\ge n\}\subset\Omega\times\mathbb{R}_+$. Then $C=\bigcap_{n\in\mathbb{N}}\pi(O_n)$, where $\pi(O_n)\in\overline {\mathcal F}$ is the projection of $O_n$ onto $\Omega$. The optional section theorem, see Theorem~IV.84 in~\cite{Dellacherie/Meyer:1978}, implies that for each $n\in\mathbb{N}$ there exists a stopping time $\sigma_n$ such that \begin{equation} \label{eq:L:ELI:section} [\![\sigma_n]\!]\subset O_n \qquad\text{and}\qquad \mathbb{P}\left( \{\sigma_n=\infty\}\cap\pi(O_n) \right) \le \frac{1}{2}\,\mathbb{P}(C). \end{equation} Note that the first condition means that $\sigma_n<\tau$ and $X_{\sigma_n}\ge n$ on $\{\sigma_n<\infty\}$ for each $n \in \mathbb{N}$. Thus, \[ \mathbb{E}[X_{m\wedge\sigma_n}\1{m\wedge\sigma_n<\tau}] \ge\mathbb{E}[X_{\sigma_n}\boldsymbol 1_{\{\sigma_n\le m\}\cap C}] \ge n\mathbb{P}(\{\sigma_n<m\}\cap C) \to n\mathbb{P}(\{\sigma_n<\infty\}\cap C) \] as $m\to\infty$ for each $n \in \mathbb{N}$. 
By hypothesis, the left-hand side is bounded by a constant $\kappa$ that does not depend on~$m \in \mathbb{N}$ or~$n \in \mathbb{N}$. Hence, using that $C\subset\pi(O_n)$ for each $n \in \mathbb{N}$ as well as \eqref{eq:L:ELI:section}, we get \[ \kappa \ge n\mathbb{P}(\{\sigma_n<\infty\}\cap C) \ge n\Big( \mathbb{P}(C) - \mathbb{P}(\{\sigma_n=\infty\}\cap \pi(O_n))\Big) \ge \frac{n}{2}\,\mathbb{P}(C). \] Letting $n$ tend to infinity, this yields a contradiction, proving $\mathbb{P}(C)=0$ as desired. Now define $\tau_n=\inf\{t \geq 0 :X_t\ge n\}$ for each $n\in\mathbb{N}$. By what we just proved, $\mathbb{P}(\bigcup_{n\in\mathbb{N}}\{\tau_n\ge\tau\})=1$. Furthermore, for each $n\in\mathbb{N}$ we have $0\le X^{\tau_n}\le n+X_{\tau_n}\1{\tau_n<\tau}$, which is integrable by assumption. Thus $X$ is extended locally integrable. For \ref{L:ELI:4}, let $U=\sup_{t<\cdot}|X_t|$. It is clear that extended local boundedness on $D$ implies extended local integrability on $D$ implies $U_{\tau-}<\infty$ on $D$. Hence it suffices to prove that $U_{\tau-}<\infty$ on $D$ implies extended local boundedness on $D$. To this end, we may assume that $\tau < \infty$, possibly after a change of time. We now define a process $U'$ on $[\![ 0, \infty [\![$ by $U' = U \boldsymbol 1_{[\![ 0, \tau[\![} + U_{\tau-} \boldsymbol 1_{[\![ \tau, \infty[\![} $, and follow the proof of Lemma~I.3.10 in \citet{JacodS} to conclude. \end{proof} We do not know whether the implication in Lemma~\ref{L:ELI}\ref{L:ELI:6} holds if $X$ is not assumed to be optional but only progressive. \begin{example} If $X$ is a uniformly integrable martingale then $X$ is extended locally integrable. This can be seen by considering first crossing times of $|X|$, as in the proof of Lemma~\ref{L:ELI}\ref{L:ELI:3}. 
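In more detail, this standard argument (sketched here under the stated assumptions) runs as follows. Set $\tau_n=\inf\{t \geq 0:|X_t|\ge n\}$ for each $n\in\mathbb{N}$. A uniformly integrable martingale converges almost surely, so $\sup_{t\ge0}|X_t|<\infty$ almost surely and hence $\bigcup_{n\in\mathbb{N}}\{\tau_n=\infty\}$ has full probability. Moreover, optional sampling yields $X_{\tau_n}=\mathbb{E}[X_\infty\,|\,{\mathcal F}_{\tau_n}]$, whence
\[
\sup_{t\ge0}\big|X^{\tau_n}_t\big| \le n+|X_{\tau_n}|\1{\tau_n<\infty} \le n+\mathbb{E}\big[|X_\infty| \,\big|\, {\mathcal F}_{\tau_n}\big],
\]
and the right-hand side is integrable for each $n\in\mathbb{N}$, so both conditions of Definition~\ref{D:extended} are satisfied.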
\qed \end{example} \section{Convergence of local supermartingales} \label{S:convergence} In this section we state and prove a number of theorems regarding the event of convergence of a local supermartingale on a stochastic interval. The results are stated in Subsections~\ref{S:conv statements} and~\ref{S:conv locmg}, while the remaining subsections contain the proofs. \subsection{Convergence results in the general case} \label{S:conv statements} Our general convergence results will be obtained under the following basic assumption. \begin{assumption} \label{A:1} Let $\tau>0$ be a foretellable time with announcing sequence $(\tau_n)_{n \in \mathbb{N}}$ and $X = M-A$ a local supermartingale on~$[\![ 0,\tau[\![$, where $M$ and $A$ are a local martingale and a nondecreasing predictable process on $[\![ 0,\tau[\![$, respectively, both starting at zero. \end{assumption} \begin{theorem}[Characterization of the event of convergence] \label{T:conv} Suppose Assumption~\ref{A:1} holds and fix $D\in{\mathcal F}$. The following conditions are equivalent: \begin{enumerate}[label={\rm(\alph{*})}, ref={\rm(\alph{*})}] \item\label{T:conv:a} $\lim_{t\to\tau}X_t$ exists in $\mathbb{R}$ on $D$ and $(\Delta X)^- \wedge X^-$ is extended locally integrable on $D$. \item\label{T:conv:a'} $\liminf_{t\to\tau}X_t>-\infty$ on $D$ and $(\Delta X)^- \wedge X^-$ is extended locally integrable on $D$. \item\label{T:conv:b'} $X^-$ is extended locally integrable on $D$. \item\label{T:conv:b''} $X^+$ is extended locally integrable on $D$ and $A_{\tau-} < \infty$ on $D$. \item\label{T:conv:c} $[X^c,X^c]_{\tau-} + (x^2\wedge|x|)*\nu^X_{\tau-} + A_{\tau-} <\infty$ on $D$. \item\label{T:conv:f} $[X,X]_{\tau-}< \infty$ on $D$, $\limsup_{t\to\tau}X_t>-\infty$ on $D$, and $(\Delta X)^- \wedge X^-$ is extended locally integrable on $D$. 
\end{enumerate} If additionally $X$ is constant after $\tau_J=\inf\{t \geq 0:\Delta X_t=-1\}$, the above conditions are equivalent to the following condition: \begin{enumerate}[resume, label={\rm(\alph{*})}, ref={\rm(\alph{*})}] \item\label{T:conv:g} Either $\lim_{t\to\tau}{\mathcal E}(X)_t \text{ exists in } \mathbb{R}\setminus\{0\}$ or $\tau_J < \tau$ on $D$, and $(\Delta X)^- \wedge X^-$ is extended locally integrable on $D$. \end{enumerate} \end{theorem} \begin{remark} \label{R:4.3} We make the following observations concerning Theorem~\ref{T:conv}. As in the theorem, we suppose Assumption~\ref{A:1} holds and fix $D\in{\mathcal F}$: \begin{itemize} \item For any local supermartingale $X$, the jump process $\Delta X$ is locally integrable. This, however, is not enough to obtain good convergence theorems, as the examples in Section~\ref{S:examp} show. The crucial additional assumption is that localization is in the extended sense. In Subsections~\ref{A:SS:lack} and \ref{A:SS:one}, several examples are collected that illustrate that the conditions of Theorem~\ref{T:conv} are non-redundant, in the sense that the implications fail for some local supermartingale $X$ if one of the conditions is omitted. \item In \ref{T:conv:c}, we may replace $x^2\boldsymbol 1_{|x|\le \kappa}*\nu^X_{\tau-}$ by $x^2\boldsymbol 1_{|x|\le \kappa}*\mu^X_{\tau-}$, where $\kappa$ is any predictable extended locally integrable process. This follows from a localization argument and Lemma~\ref{L:BJ} below. \item If any of the conditions \ref{T:conv:a}--\ref{T:conv:f} holds then $\Delta X$ is extended locally integrable on $D$. This is a by-product of the proof of the theorem. The extended local integrability of $\Delta X$ also follows, {\em a posteriori}, from Lemma~\ref{L:ELI}\ref{L:ELI:3} since \ref{T:conv:b'} \& \ref{T:conv:b''} imply that $X$ is extended locally integrable on~$D$.
\item If any of the conditions \ref{T:conv:a}--\ref{T:conv:f} holds and if $X = M^\prime - A^\prime$ for some local supermartingale $M^\prime$ and some nondecreasing (not necessarily predictable) process $A'$ with $A'_0 = 0$, then $\lim_{t\to\tau} M^\prime_t$ exists in~$\mathbb{R}$ on $D$ and $A^\prime_{\tau-} < \infty$ on $D$. Indeed, $M^\prime \geq X$ and thus the implication {\ref{T:conv:b'}} $\Longrightarrow$ {\ref{T:conv:a}} applied to $M^\prime$ yields that $\lim_{t \to \tau} M^\prime_t$ exists in $\mathbb{R}$, and therefore also $A_{\tau-}^\prime<\infty$. \item One might conjecture that Theorem~\ref{T:conv} can be generalized to special semimartingales $X=M+A$ on~$\lc0,\tau[\![$ by replacing $A$ with its total variation process ${\rm Var}(A)$ in {\ref{T:conv:b''}} and {\ref{T:conv:c}}. However, such a generalization is not possible in general. As an illustration of what can go wrong, consider the deterministic finite variation process $X_t=A_t=\sum_{n=1}^{[t]}(-1)^n n^{-1}$, where $[t]$ denotes the largest integer less than or equal to~$t$. Then $\lim_{t\to \infty}X_t$ exists in~$\mathbb{R}$, being an alternating series whose terms decrease to zero in absolute value. Thus {\ref{T:conv:a}}--{\ref{T:conv:b'}} \& \ref{T:conv:f} hold with $D=\Omega$. However, the total variation ${\rm Var}(A)_\infty=\sum_{n=1}^\infty n^{-1}$ is infinite, so {\ref{T:conv:b''}} \& {\ref{T:conv:c}} do not hold with $A$ replaced by ${\rm Var}(A)$. Related questions are addressed by~\cite{CS:2005}. \item One may similarly ask about convergence of local martingales of the form $X=x*(\mu-\nu)$ for some integer-valued random measure $\mu$ with compensator $\nu$. Here nothing can be said in general in terms of $\mu$ and $\nu$; for instance, if $\mu$ is already predictable then $X=0$. \qed \end{itemize} \end{remark} Theorem~\ref{T:conv} is stated in a general form and its power appears when one considers specific events $D \in \mathcal F$.
For example, we may let $D = \{\lim_{t\to\tau}X_t \text{ exists in }\mathbb{R}\}$ or $D = \{\liminf_{t\to\tau}X_t>-\infty\}$. Choices of this kind lead directly to the following corollary. \begin{corollary}[Extended local integrability from below] \label{C:conv2} Suppose Assumption~\ref{A:1} holds and $(\Delta X)^- \wedge X^-$ is extended locally integrable on $\{ \text{$\limsup_{t\to\tau}X_t>-\infty$} \}$. Then the following events are almost surely equal: \begin{align} \label{T:conv2:1} &\Big\{ \text{$\lim_{t\to\tau}X_t$ exists in $\mathbb{R}$} \Big\}; \\ &\Big\{ \text{$\liminf_{t\to\tau}X_t>-\infty$} \Big\};\label{T:conv2:6}\\ \label{T:conv2:2} &\Big\{ \text{$[X^c,X^c]_{\tau-} + (x^2 \wedge |x|) * \nu_{\tau-}^X + A_{\tau-} <\infty$} \Big\}; \\ &\Big\{ \text{$[X,X]_{\tau-} <\infty$} \Big\} \bigcap \Big\{\text{$\limsup_{t\to\tau}X_t > -\infty$} \Big\}.\label{T:conv2:5} \end{align} \end{corollary} \begin{proof} The statement follows directly from Theorem~\ref{T:conv}, applied for each inclusion with the appropriate choice of the event $D$. \end{proof} We remark that the identity \eqref{T:conv2:1} $=$ \eqref{T:conv2:6} appears already in Theorem~5.19 of \citet{Jacod_book} under slightly more restrictive assumptions, along with the equality \begin{equation} \label{eq:XMA} \Big\{ \text{$\lim_{t\to\tau}X_t$ exists in $\mathbb{R}$} \Big\} = \Big\{ \text{$\lim_{t\to\tau}M_t$ exists in $\mathbb{R}$} \Big\} \bigcap \Big\{ A_{\tau-} < \infty \Big\}. \end{equation} Corollary~\ref{C:conv2} yields that this equality in fact holds under assumptions strictly weaker than in \citet{Jacod_book}. Note, however, that some assumption is needed; see Example~\ref{ex:semimartingale}. Furthermore, a special case of the equivalence \ref{T:conv:f} $\Longleftrightarrow$ \ref{T:conv:g} in Theorem~\ref{T:conv} appears in Proposition~1.5 of \citet{Lepingle_Memin_Sur}.
Moreover, under additional integrability assumptions on the jumps, Section~4 in \cite{Kabanov/Liptser/Shiryaev:1979} provides related convergence conditions. In general, however, we could not find any of the implications in Theorem~\ref{T:conv}---except, of course, the trivial implication \ref{T:conv:a} $\Longrightarrow$ \ref{T:conv:a'}---in this generality in the literature. Some of the implications are easy to prove, while others are more involved. Some were expected, while others were surprising to us; for example, the limit superior in \ref{T:conv:f} is needed even if~$A=0$ so that~$X$ is a local martingale on $\lc0,\tau[\![$. Of course, whenever the extended local integrability condition appears, then, somewhere in the corresponding proof, so does a reference to the classical supermartingale convergence theorem, which relies on Doob's upcrossing inequality. \begin{corollary}[Extended local integrability] \label{C:convergence_QV} Under Assumption~\ref{A:1}, if $|X| \wedge \Delta X$ is extended locally integrable we have, almost surely, \begin{align*} \Big\{ \text{$\lim_{t\to\tau}X_t$ exists in $\mathbb{R}$} \Big\} = \Big\{ \text{$[X,X]_{\tau-} <\infty$} \Big\} \bigcap \Big\{ A_{\tau-}<\infty \Big\}. \end{align*} \end{corollary} \begin{proof} Note that $\{[X,X]_{\tau-}<\infty\}=\{[M,M]_{\tau-}<\infty\}$ on $\{A_{\tau-}<\infty\}$. Thus, in view of~\eqref{eq:XMA}, it suffices to show that $\{ \text{$\lim_{t\to\tau}M_t$ exists in $\mathbb{R}$} \} = \{ \text{$[M,M]_{\tau-} <\infty$}\}$. The inclusion ``$\subset$'' is immediate from \eqref{T:conv2:1} $\subset$ \eqref{T:conv2:5} in Corollary~\ref{C:conv2}.
The reverse inclusion follows noting that \begin{align*} \Big\{ \text{$[M,M]_{\tau-} <\infty$} \Big\} &= \left(\Big\{ \text{$[M,M]_{\tau-} <\infty$} \Big\} \cap \Big\{ \limsup_{t\to \tau } M_t > -\infty \Big\}\right)\\ &\qquad \cup \left(\Big\{ \text{$[M,M]_{\tau-} <\infty$} \Big\} \cap \Big\{ \limsup_{t\to \tau } M_t = -\infty \Big\} \cap \Big\{ \limsup_{t\to \tau } (-M_t) > -\infty \Big\}\right) \end{align*} and applying the inclusion \eqref{T:conv2:5} $\subset$ \eqref{T:conv2:1} once to $M$ and once to $-M$. \end{proof} \begin{corollary}[$L^1$--boundedness] \label{C:conv001} Suppose Assumption~\ref{A:1} holds, and let $f:\mathbb{R}\to\mathbb{R}_+$ be any nondecreasing function with $f(x)\ge x$ for all sufficiently large $x$. Then the following conditions are equivalent: \begin{enumerate}[label={\rm(\alph*)},ref={\rm(\alph*)}] \item\label{T:conv1:a} $\lim_{t\to\tau}X_t$ exists in $\mathbb{R}$ and $(\Delta X)^- \wedge X^-$ is extended locally integrable. \item\label{T:conv1:c} $A_{\tau-}<\infty$ and for some extended locally integrable optional process $U$, \begin{align} \label{eq:T:conv:exp} \sup_{\sigma \in \mathcal{T}} \mathbb{E}\left[f(X_\sigma - U_\sigma) \boldsymbol 1_{\{\sigma<\tau\}} \right] < \infty. \end{align} \item\label{T:conv1:d} For some extended locally integrable optional process $U$, \eqref{eq:T:conv:exp} holds with $x\mapsto f(x)$ replaced by $x\mapsto f(-x)$. \item\label{C:conv001:e} The process $\overline X=X\boldsymbol 1_{\lc0,\tau[\![}+(\limsup_{t\to\tau}X_t)\boldsymbol 1_{[\![\tau,\infty[\![}$, extended to $[0,\infty]$ by $\overline X_\infty = \limsup_{t\to\tau}X_t$, is a semimartingale on $[0,\infty]$ and $(\Delta X)^- \wedge X^-$ is extended locally integrable. \item\label{C:conv001:d} The process $\overline X=X\boldsymbol 1_{\lc0,\tau[\![}+(\limsup_{t\to\tau}X_t)\boldsymbol 1_{[\![\tau,\infty[\![}$, extended to $[0,\infty]$ by $\overline X_\infty = \limsup_{t\to\tau}X_t$, is a special semimartingale on $[0,\infty]$. 
\end{enumerate} \end{corollary} \begin{proof} {\ref{T:conv1:a}} $\Longrightarrow$ {\ref{T:conv1:c}} \& {\ref{T:conv1:d}}: The implication {\ref{T:conv:a}} $\Longrightarrow$ {\ref{T:conv:b'}} \& {\ref{T:conv:b''}} in Theorem~\ref{T:conv} yields that $X$ is extended locally integrable, so we may simply take $U=X$. {\ref{T:conv1:c}} $\Longrightarrow$ {\ref{T:conv1:a}}: We have $f(x)\ge \1{x\ge \kappa}x^+$ for some constant $\kappa \geq 0$ and all $x\in\mathbb{R}$. Hence~\eqref{eq:T:conv:exp} holds with $f(x)$ replaced by $x^+$. Lemma~\ref{L:ELI}\ref{L:ELI:6} then implies that $(X-U)^+$ is extended locally integrable. Since $X^+\le (X-U)^++U^+$, we have $X^+$ is extended locally integrable. The implication \ref{T:conv:b''} $\Longrightarrow$ \ref{T:conv:a} in Theorem~\ref{T:conv} now yields~\ref{T:conv1:a}. {\ref{T:conv1:d}} $\Longrightarrow$ {\ref{T:conv1:c}}: We now have $f(x)\ge \1{x\le -\kappa}x^-$ for some constant $\kappa \geq 0$ and all $x\in\mathbb{R}$, whence as above, $(X-U)^-$ is extended locally integrable. Since $M^-\le(M-U)^-+U^-\le(X-U)^-+U^-$, it follows that $M^-$ is extended locally integrable. The implication \ref{T:conv:b'} $\Longrightarrow$ \ref{T:conv:a}~\&~\ref{T:conv:b''} in Theorem~\ref{T:conv} yields that $\lim_{t\to\tau}M_t$ exists in $\mathbb{R}$ and that $M$ is extended locally integrable. Hence $A=(U-X+M-U)^+\le(X-U)^-+|M|+|U|$ is extended locally integrable, so Lemma~\ref{L:ELI}\ref{L:ELI:4} yields $A_{\tau-}<\infty$. Thus~{\ref{T:conv1:c}} holds. \ref{T:conv1:a} $\Longrightarrow$ \ref{C:conv001:e}: By \eqref{eq:XMA}, $A$ converges. Moreover, since $M \geq X$, we have $(\Delta M)^-$ is extended locally integrable by Remark~\ref{R:4.3}, say with localizing sequence $(\rho_n)_{n \in \mathbb{N}}$. Now, it is sufficient to prove that $M^{\rho_n}$ is a local martingale on $[0,\infty]$ for each $n \in \mathbb{N}$, which, however, follows from Lemma~\ref{L:SMC} below. \ref{C:conv001:e} $\Longrightarrow$ \ref{T:conv1:a}: Obvious. 
\ref{T:conv1:a} \& \ref{C:conv001:e} $\Longleftrightarrow$ \ref{C:conv001:d}: This equivalence follows from Proposition~II.2.29 in~\cite{JacodS}, in conjunction with the equivalence \ref{T:conv:a} $\Longleftrightarrow$ \ref{T:conv:c} in Theorem~\ref{T:conv}. \end{proof} Examples~\ref{E:P1} and \ref{ex:semimartingale} below illustrate that the integrability condition is needed in order for \ref{C:conv001}\ref{T:conv1:a} to imply the semimartingale property of $X$ on the extended axis. These examples also show that the integrability condition in Corollary~\ref{C:conv001}\ref{C:conv001:e} is not redundant. \begin{remark} \label{R:bed implied} In Corollary~\ref{C:conv001}, convergence implies not only $L^1$--boundedness but in fact uniform boundedness. Indeed, let $g:\mathbb{R}\to\mathbb{R}_+$ be either $x\mapsto f(x)$ or $x\mapsto f(-x)$. If any of the conditions \ref{T:conv1:a}--\ref{T:conv1:d} in Corollary~\ref{C:conv001} holds then there exists a monotone extended locally integrable optional process $U$ such that the family \begin{align*} \left(g(X_\sigma - U_\sigma) \boldsymbol 1_{\{\sigma<\tau\}} \right)_{\sigma \in \mathcal{T}} \qquad \text{is bounded.} \end{align*} To see this, note that if {\ref{T:conv1:a}} holds then $X$ is extended locally integrable. If $g$ is $x\mapsto f(x)$, let $U=\sup_{t \leq \cdot} X_t$, whereas if $g$ is $x\mapsto f(-x)$, let $U=\inf_{t \leq \cdot} X_t$. In either case, $U$ is monotone and extended locally integrable and $(g(X_\sigma-U_\sigma))_{\sigma\in{\mathcal T}}$ is bounded.\qed \end{remark} With a suitable choice of $f$ and additional requirements on $U$, condition~\eqref{eq:T:conv:exp} has stronger implications for the tail integrability of the compensator $\nu^X$ than can be deduced, for instance, from Theorem~\ref{T:conv} directly. The following result records the useful case where $f$ is an exponential. \begin{corollary}[Exponential integrability of~$\nu^X$] \label{C:conv1} Suppose Assumption~\ref{A:1} holds.
If $A_{\tau-}<\infty$ and~\eqref{eq:T:conv:exp} holds with some~$U$ that is extended locally bounded and with $f(x)=e^{cx}$ for some $c\ge1$, then \begin{equation}\label{T:conv:nu} (e^x-1-x) * \nu^X_{\tau-} < \infty. \end{equation} \end{corollary} \begin{proof} In view of Lemma~\ref{L:ELI}\ref{L:ELI:4} we may assume by localization that $A=U=0$ and by Jensen's inequality that $c = 1$. Lemma~\ref{L:ELI}\ref{L:ELI:6} then implies that $e^X$ and hence $X^+$ is extended locally integrable. Thus by Theorem~\ref{T:conv}, $\inf_{t<\tau} X_t >-\infty$. It\^o's formula yields \begin{align*} e^X &= 1 + e^{X_-} \cdot X + \frac{1}{2} e^{X_-}\cdot [X^c,X^c] + \left(e^{X_-}(e^x-1-x)\right)*\mu^X. \end{align*} The second term on the right-hand side is a local martingale on $\lc0,\tau[\![$, so we may find a localizing sequence $(\rho_n)_{n\in\mathbb{N}}$ with $\rho_n<\tau$. Taking expectations and using the defining property of the compensator~$\nu^X$ as well as the associativity of the stochastic integral yield \[ \mathbb{E}\left[ e^{X_{\rho_n}} \right] = 1+ \mathbb{E}\left[ e^{X_-}\cdot\left(\frac{1}{2} [X^c,X^c] + (e^x-1-x)*\nu^X\right)_{\rho_n} \right] \] for each $n \in \mathbb{N}$. Due to~\eqref{eq:T:conv:exp}, the left-hand side is bounded by a constant that does not depend on~$n \in \mathbb{N}$. We now let~$n$ tend to infinity and recall that $\inf_{t<\tau} X_t >-\infty$ to deduce by the monotone convergence theorem that~\eqref{T:conv:nu} holds. \end{proof} \begin{remark} \label{R:implication} Extended local integrability of $U$ is not enough in Corollary~\ref{C:conv1}. For example, consider an integrable random variable $\Theta$ with $\mathbb{E}[\Theta]= 0$ and $\mathbb{E}[e^\Theta] = \infty$ and the process $X = \Theta \boldsymbol 1_{[\![ 1, \infty[\![}$ under its natural filtration. Then $X$ is a martingale. Now, with $U = X$, \eqref{eq:T:conv:exp} holds with $f(x)=e^x$, but $(e^x - 1 - x) * \nu^X_{\infty-} = \infty$.
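To spell out the computation behind the last claim: under the natural filtration of $X$, the $\sigma$-field at any time $t<1$ is trivial, so the compensator of the single jump at time $1$ is deterministic, namely $\nu^X(\mathrm{d} t,\mathrm{d} x)=\delta_1(\mathrm{d} t)\,\mathbb{P}(\Theta\in\mathrm{d} x)$. Consequently,
\[
(e^x-1-x)*\nu^X_{\infty-} = \mathbb{E}\big[e^\Theta-1-\Theta\big] = \mathbb{E}\big[e^\Theta\big]-1 = \infty.
\]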
\qed \end{remark} \subsection{Convergence results with jumps bounded below} \label{S:conv locmg} We now specialize to the case where $X$ is a local martingale on a stochastic interval with jumps bounded from below. The aim is to study a related process $Y$, which appears naturally in connection with the nonnegative local martingale ${\mathcal E}(X)$. We comment on this connection below. \begin{assumption} \label{A:2} Let $\tau$ be a foretellable time, and $X$ a local martingale on~$[\![ 0,\tau[\![$ with $\Delta X>-1$. Suppose further that $(x-\log(1+x))*\nu^X$ is finite-valued, and define \begin{equation*} Y=X^c+\log(1+x)*(\mu^X-\nu^X) \end{equation*} and \begin{align*} \gamma_t= - \int \log(1+x) \nu^X(\{t\},\mathrm{d} x) \end{align*} for all $t<\tau$. \end{assumption} The significance of the process $Y$ originates with the identity \begin{align} \label{eq:VV} {\mathcal E}(X)= e^{Y-V} \quad\text{on $\lc0,\tau[\![$,} \quad \text{where} \quad V=\frac{1}{2}[X^c,X^c]+(x-\log(1+x))*\nu^X. \end{align} Thus $Y$ is the local martingale part and $-V$ is the predictable finite variation part of the local supermartingale $\log {\mathcal E}(X)$. The process $V$ is called the {\em exponential compensator} of $Y$, and $Y-V$ is called the {\em logarithmic transform} of $X$. These notions play a central role in~\cite{Kallsen_Shir}. Observe that the jumps of $Y$ can be expressed as \begin{equation} \label{eq:DY} \Delta Y_t = \log(1+\Delta X_t) + \gamma_t \end{equation} for all $t<\tau$. Jensen's inequality and the fact that $\nu^X(\{t\},\mathbb{R})\le 1$ imply that $\gamma\ge 0$. If $X$ is quasi-left continuous, then $\gamma\equiv 0$. In the spirit of our previous results, we now present a theorem that relates convergence of the processes $X$ and $Y$ to the finiteness of various derived quantities. \begin{theorem}[Joint convergence of a local martingale and its logarithmic transform] \label{T:convYX} Suppose Assumption~\ref{A:2} holds, and fix $\eta\in(0,1)$ and $\kappa>0$. 
Then the following events are almost surely equal: \begin{align} &\Big\{ \lim_{t\to\tau}X_t \text{ exists in }\mathbb{R}\Big\} \bigcap \Big\{ \lim_{t\to\tau}Y_t \text{ exists in }\mathbb{R}\Big\} \label{T:convYX:1};\\ &\Big\{\frac{1}{2} [X^c,X^c]_{\tau-}+ (x - \log(1+x)) * \nu^X_{\tau-} < \infty\Big\} ; \label{T:convYX:2} \\ &\Big\{ \lim_{t\to\tau}X_t \text{ exists in }\mathbb{R}\Big\} \bigcap \Big\{ -\log(1+x)\boldsymbol 1_{x< -\eta}*\nu^X_{\tau-} < \infty\Big\} \label{T:convYX:3};\\ &\Big\{ \lim_{t\to\tau}Y_t \text{ exists in }\mathbb{R}\Big\} \bigcap \Big\{x\boldsymbol 1_{x>\kappa} * \nu^X_{\tau-} < \infty\Big\} . \label{T:convYX:4} \end{align} \end{theorem} \begin{lemma} \label{L:convYX} Suppose Assumption~\ref{A:2} holds. For any event $D \in \mathcal F$ with $x\boldsymbol 1_{x>\kappa} * \nu^X_{\tau-} < \infty$ on $D$ for some $\kappa>0$, the following three statements are equivalent: \begin{enumerate}[label={\rm(\alph{*})}, ref={\rm(\alph{*})}] \item\label{T:convYX:a} $\lim_{t\to\tau}Y_t$ exists in $\mathbb{R}$ on $D$. \item\label{T:convYX:b'} $Y^-$ is extended locally integrable on $D$. \item\label{T:convYX:b''} $Y^+$ is extended locally integrable on $D$. \end{enumerate} \end{lemma} \begin{proof} The implications follow from Theorem~\ref{T:conv}. Only that \ref{T:convYX:a} implies \ref{T:convYX:b'} \& \ref{T:convYX:b''} needs an argument, and it suffices to show that $(\Delta (-Y))^-$ is extended locally integrable on $D$. By \eqref{eq:DY} we have $(\Delta(-Y))^-\le(\Delta X)^++\gamma$; Lemma~\ref{L:ELI}\ref{L:ELI:5} implies that $(\Delta X)^+$ is extended locally integrable; and Lemma~\ref{L:convYX:1} below and Lemma~\ref{L:ELI}\ref{L:ELI:6} imply that $\gamma$ is extended locally bounded on~$D$. \end{proof} \begin{corollary}[$L^1$--boundedness] \label{C:convYX2} Suppose Assumption~\ref{A:2} holds, and fix $c\ne0$, $\eta\in(0,1)$, and $\kappa>0$. 
The following conditions are equivalent: \begin{enumerate}[label={\rm(\alph*)},ref={\rm(\alph*)}] \item \label{T:convYX2:a} $\lim_{t\to\tau}X_t$ exists in $\mathbb{R}$ and $-\log(1+x)\boldsymbol 1_{x< -\eta}*\nu^X_{\tau-} < \infty$. \item\label{T:convYX2:b} $x\boldsymbol 1_{x>\kappa} * \nu^X_{\tau-}< \infty$ and for some extended locally integrable optional process $U$ on $\lc0,\tau[\![$ we have \begin{equation} \label{eq:T:convYX:eYp} \sup_{\sigma \in \mathcal{T}} \mathbb{E}\left[e^{cY_\sigma - U_\sigma} \1{\sigma<\tau} \right] < \infty. \end{equation} \end{enumerate} If $c\ge 1$, these conditions are implied by the following: \begin{enumerate}[label={\rm(\alph*)},ref={\rm(\alph*)},resume] \item\label{T:convYX2:c} \eqref{eq:T:convYX:eYp} holds for some extended locally bounded optional process $U$ on $\lc0,\tau[\![$. \end{enumerate} Finally, the conditions \ref{T:convYX2:a}--\ref{T:convYX2:b} imply that $(e^{cY_\sigma-U_\sigma})_{\sigma\in{\mathcal T}}$ is bounded for some extended locally integrable optional process $U$ on $\lc0,\tau[\![$. \end{corollary} \begin{proof} The equivalence of \ref{T:convYX2:a} and \ref{T:convYX2:b} is obtained from \eqref{T:convYX:3} = \eqref{T:convYX:4} in Theorem~\ref{T:convYX}. Indeed, Corollary~\ref{C:conv001} with $X$ replaced by $Y$ and $f(x)=e^{cx}$, together with Lemma~\ref{L:convYX}, yield that \ref{T:convYX2:b} holds if and only if \eqref{T:convYX:4} has full probability. In order to prove that \ref{T:convYX2:c} implies~\ref{T:convYX2:b} we assume that~\eqref{eq:T:convYX:eYp} holds with $c\geq 1$ and $U$ extended locally bounded. Corollary~\ref{C:conv1} yields \[ \left(1-\frac{1}{\kappa}\log (1+\kappa)\right)\, (e^y-1)\boldsymbol 1_{y>\log (1+\kappa)}*\nu^Y_{\tau-}\le(e^y-1-y)*\nu^Y_{\tau-}<\infty, \] so by a localization argument using Lemma~\ref{L:ELI}\ref{L:ELI:4} we may assume that $(e^y-1)\boldsymbol 1_{y>\log (1+\kappa)}*\nu^Y_{\tau-}\le \kappa_1$ for some constant $\kappa_1>0$. 
Now, \eqref{eq:DY} yields \[ \Delta X \1{\Delta X>\kappa} = \left(e^{\Delta Y - \gamma}-1\right)\1{e^{\Delta Y}>(1+\kappa)e^\gamma} \le (e^{\Delta Y}-1)\1{\Delta Y>\log (1+\kappa)}, \] whence $\mathbb{E}[x\boldsymbol 1_{x>\kappa}*\nu^X_{\tau-}]\le \mathbb{E}[(e^y-1)\boldsymbol 1_{y>\log (1+\kappa)}*\nu^Y_{\tau-}]\le\kappa_1$. Thus~\ref{T:convYX2:b} holds. The last statement of the corollary follows as in Remark~\ref{R:bed implied} after recalling Lemma~\ref{L:convYX}. \end{proof} \subsection{Some auxiliary results} In this subsection, we collect some observations that will be useful for the proofs of the convergence theorems of the previous subsection. \begin{lemma}[Supermartingale convergence] \label{L:SMC} Under Assumption~\ref{A:1}, suppose $\sup_{n \in \mathbb{N}} \mathbb{E}[X^-_{\tau_n}]<\infty$. Then the limit $G=\lim_{t\to\tau}X_t$ exists in~$\mathbb{R}$ and the process $\overline X=X\boldsymbol 1_{\lc0,\tau[\![} + G\boldsymbol 1_{[\![\tau,\infty[\![}$, extended to $[0,\infty]$ by $\overline X_\infty=G$, is a supermartingale on $[0,\infty]$ and extended locally integrable. If, in addition, $X$ is a local martingale on $[\![ 0,\tau[\![$ then $\overline X$ is a local martingale. \end{lemma} \begin{proof} Supermartingale convergence implies that $G$ exists; see the proof of Proposition~A.4 in~\citet{CFR2011} for a similar statement. Fatou's lemma, applied as in Theorem~1.3.15 in \citet{KS1}, yields the integrability of $\overline X_\rho$ for each $[0,\infty]$--valued stopping time $\rho$, as well as the supermartingale property of $\overline{X}$. Now, define a sequence of stopping times $(\rho_m)_{m \in \mathbb{N}}$ by \[ \rho_m = \inf\{ t\ge 0: |\overline{X}_t| > m \} \] and note that $\bigcup_{m\in\mathbb{N}}\{\rho_m=\infty\}=\Omega$. Thus, $\overline{X}$ is extended locally integrable, with the corresponding sequence $(|\overline{X}_{\rho_m}| + m)_{m \in \mathbb{N}}$ of integrable random variables. 
Assume now that $X$ is a local martingale and, without loss of generality, that $\overline{X}^{\tau_n}$ is a uniformly integrable martingale for each $n \in \mathbb{N}$. Fix $m \in \mathbb{N}$ and note that $\lim_{n\to\infty} \overline{X}_{\rho_m\wedge\tau_n}=\overline{X}_{\rho_m}$. Next, the inequality $|\overline{X}_{\rho_m \wedge \tau_n}| \leq |\overline{X}_{\rho_m}| + m$ for each $n \in \mathbb{N}$ justifies an application of dominated convergence as follows: \[ \mathbb{E}\left[\overline{X}_{\rho_m}\right] = \mathbb{E}\left[\lim_{n \to \infty} \overline{X}_{\rho_m \wedge \tau_n}\right] = \lim_{n \to \infty} \mathbb{E}\left[\overline{X}_{\rho_m \wedge \tau_n}\right] = 0. \] Hence, $\overline{X}$ is a local martingale, with localizing sequence $(\rho_m)_{m \in \mathbb{N}}$. \end{proof} For the proof of the next lemma, we are not allowed to use Corollary~\ref{C:convergence_QV}, as it relies on Theorem~\ref{T:conv}, which we have not yet proved. \begin{lemma}[Continuous case] \label{L:CC} Let $X$ be a continuous local martingale on $\lc0,\tau[\![$. If $[X,X]_{\tau-}<\infty$ then the limit $\lim_{t\to\tau}X_t$ exists in $\mathbb{R}$. \end{lemma} \begin{proof} See Exercise~IV.1.48 in \citet{RY}. \end{proof} The next lemma will serve as a tool to handle truncated jump measures. \begin{lemma}[Bounded jumps] \label{L:BJ} Let $\mu$ be an integer-valued random measure such that $\mu(\mathbb{R}_+ \times [-1,1]^c) = 0$, and let $\nu$ be its compensator. Assume either $x^2*\mu_{\infty-}$ or $x^2*\nu_{\infty-}$ is finite. Then so is the other one; moreover, $x\in G_{\rm loc}(\mu)$ and the limit $\lim_{t\to\infty}x*(\mu-\nu)_t$ exists in $\mathbb{R}$. \end{lemma} \begin{proof} First, the condition on the support of $\mu$ implies that both $x^2*\mu$ and $x^2*\nu$ have jumps bounded by one. Now, let $\rho_n$ be the first time $x^2*\nu$ crosses some fixed level $n \in \mathbb{N}$, and consider the local martingale $F=x^2*\mu-x^2*\nu$.
Since $F^{\rho_n}\ge-n-1$, the supermartingale convergence theorem implies that $F^{\rho_n}_{\infty-}$ exists in $\mathbb{R}$, whence $x^2*\mu_{\infty-}=F_{\infty-}+x^2*\nu_{\infty-}$ exists and is finite on $\{\rho_n=\infty\}$. This yields \begin{equation*} \left\{ x^2*\nu_{\infty-}<\infty\right\} \subset\left\{ x^2*\mu_{\infty-}<\infty\right\}. \end{equation*} The reverse inclusion is proved by interchanging $\mu$ and $\nu$ in the above argument. Next, the local boundedness of $x^2*\nu$ implies that $x*(\mu-\nu)$ is well-defined and a local martingale with $\langle x*(\mu-\nu),x*(\mu-\nu)\rangle\le x^2*\nu$; see Theorem~II.1.33 in \citet{JacodS}. Hence, for each $n \in \mathbb{N}$, with $\rho_n$ as above, $x*(\mu-\nu)^{\rho_n}$ is a uniformly integrable martingale and thus convergent. Therefore $x*(\mu-\nu)$ is convergent on the set $\{\rho_n=\infty\}$, which completes the argument. \end{proof} \subsection{Proof of Theorem~\ref{T:conv}} We start by proving that {\ref{T:conv:a}} yields that $\Delta X$ is extended locally integrable on $D$. By localization, in conjunction with Lemma~\ref{L:ELI}\ref{L:ELI:2}, we may assume that $(\Delta X)^-\wedge X^-\le\Theta$ for some integrable random variable $\Theta$ and that $\sup_{t<\tau} |X_t| < \infty$. With $\rho_n=\inf\{t \geq 0: X_t\le-n\}$ we have $X^{\rho_n}\ge -n-(\Delta X_{\rho_n})^-\1{\rho_n<\tau}$ and $X^{\rho_n}\ge -n-X_{\rho_n}^-\1{\rho_n<\tau}$. Hence $X^{\rho_n} \geq -n-\Theta$ and thus, by Lemma~\ref{L:SMC}, $X^{\rho_n}$ is extended locally integrable and Lemma~\ref{L:ELI}\ref{L:ELI:3} yields that $\Delta X^{\rho_n}$ is as well, for each $n \in \mathbb{N}$. We have $\bigcup_{n\in\mathbb{N}}\{\rho_n=\tau\}=\Omega$, and another application of Lemma~\ref{L:ELI}\ref{L:ELI:2} yields the implication. We now verify the claimed implications. {\ref{T:conv:a}} $\Longrightarrow$ {\ref{T:conv:a'}}: Obvious. 
{\ref{T:conv:a'}} $\Longrightarrow$ {\ref{T:conv:a}}: By localization we may assume that $(\Delta X)^-\wedge X^-\le\Theta$ for some integrable random variable $\Theta$ and that $\sup_{t<\tau} X_t^- < \infty$ on $\Omega$. With $\rho_n=\inf\{t \geq 0: X_t\le-n\}$ we have $X^{\rho_n} \geq -n-\Theta$, for each $n \in \mathbb{N}$. The supermartingale convergence theorem (Lemma~\ref{L:SMC}) now implies that $X$ converges. {\ref{T:conv:a}} $\Longrightarrow$ {\ref{T:conv:b'}}: This is an application of Lemma~\ref{L:ELI}\ref{L:ELI:3}, after recalling that {\ref{T:conv:a}} implies that $\Delta X$ is extended locally integrable on $D$. {\ref{T:conv:b'}} $\Longrightarrow$ {\ref{T:conv:a}}: This is an application of a localization argument and the supermartingale convergence theorem stated in Lemma~\ref{L:SMC}. {\ref{T:conv:a}} $\Longrightarrow$ {\ref{T:conv:b''}} \& {\ref{T:conv:c}} \& {\ref{T:conv:f}}: By localization, we may assume that $|\Delta X|\le \Theta$ for some integrable random variable $\Theta$ and that $X = X^{\rho}$ with $\rho=\inf\{t \geq 0:|X_t|\ge \kappa\}$ for some fixed $\kappa \geq 0$. Next, observe that $X\ge -\kappa-\Theta$. Lemma~\ref{L:SMC} yields that $G=\lim_{t\to \tau}X_t$ exists in $\mathbb{R}$ and that the process $\overline X=X\boldsymbol 1_{\lc0,\tau[\![}+G\boldsymbol 1_{[\![\tau,\infty[\![}$, extended to $[0,\infty]$ by $\overline X_\infty=G$, is a supermartingale on $[0,\infty]$. Let $\overline X=\overline M-\overline A$ denote its canonical decomposition. Then $A_{\tau-}=\overline A_\infty<\infty$ and $[X,X]_{\tau-}= [\overline X,\overline X]_\infty < \infty$. Moreover, since $\overline X$ is a special semimartingale on $[0,\infty]$, Proposition~II.2.29 in~\cite{JacodS} yields $(x^2\wedge|x|)*\nu^X_{\tau-}=(x^2\wedge|x|)*\nu^{\overline X}_\infty<\infty$. Thus~{\ref{T:conv:c}} and~{\ref{T:conv:f}} hold. Now, {\ref{T:conv:b''}} follows again by an application of Lemma~\ref{L:ELI}\ref{L:ELI:3}. 
{\ref{T:conv:b''}} $\Longrightarrow$ {\ref{T:conv:a}}: By Lemma~\ref{L:ELI}\ref{L:ELI:4} we may assume that $A = 0$, so that $-X$ is a local supermartingale. The result then follows again from Lemma~\ref{L:SMC} and Lemma~\ref{L:ELI}\ref{L:ELI:3}. {\ref{T:conv:c}} $\Longrightarrow$ {\ref{T:conv:a}}: The process $B=[X^c,X^c]+(x^2\wedge|x|)*\nu^X+A$ is predictable and converges on~$D$. Hence, by Lemma~\ref{L:ELI}\ref{L:ELI:4}, $B$ is extended locally bounded on $D$. By localization we may thus assume that $B\le\kappa$ for some constant $\kappa>0$. Lemma~\ref{L:CC} implies that $X^c$ converges and Lemma~\ref{L:BJ} implies that $x\boldsymbol 1_{|x|\le 1}*(\mu^X-\nu^X)$ converges. Furthermore, \[ \mathbb{E}\left[\, |x|\boldsymbol 1_{|x|>1}*\mu^X_{\tau-}\, \right] = \mathbb{E}\left[\, |x|\boldsymbol 1_{|x|>1}*\nu^X_{\tau-}\, \right] \le \kappa, \] whence $|x|\boldsymbol 1_{|x|>1}*(\mu^X-\nu^X)=|x|\boldsymbol 1_{|x|>1}*\mu^X-|x|\boldsymbol 1_{|x|>1}*\nu^X$ converges. We deduce that~$X$ converges. It now suffices to show that $\Delta X$ is extended locally integrable. Since \begin{align*} \sup_{t<\tau} |\Delta X_t| \leq 1 + |x|\boldsymbol 1_{|x|\ge 1}*\mu^X_{\tau-}, \end{align*} we have $\mathbb{E}[\sup_{t<\tau} |\Delta X_t|] \le 1+\kappa$. {\ref{T:conv:f}} $\Longrightarrow$ {\ref{T:conv:a}}: By a localization argument we may assume that $(\Delta X)^- \wedge X^-\leq \Theta$ for some integrable random variable~$\Theta$. Moreover, since $[X,X]_{\tau-}<\infty$ on $D$, $X$ can only have finitely many large jumps on $D$. Thus after further localization we may assume that $X = X^{\rho}$, where $\rho=\inf\{t \geq 0:|\Delta X_t|\ge \kappa_1\}$ for some large $\kappa_1>0$. Now, Lemmas~\ref{L:CC} and~\ref{L:BJ} imply that $X' = X^c + x \boldsymbol 1_{|x| \leq \kappa_1} * (\mu^X-\nu^X)$ converges on $D$. Hence Lemma~\ref{L:ELI}\ref{L:ELI:3} and a further localization argument let us assume that $|X'|\le\kappa_2$ for some constant $\kappa_2>0$. 
Define $\widehat X = x \boldsymbol 1_{x<-\kappa_1} * (\mu^X-\nu^X)$ and suppose for the moment we know that~$\widehat X$ converges on~$D$. Consider the decomposition \begin{equation} \label{eq:T:conv:ftob} X = X' + \widehat X + x\boldsymbol 1_{x>\kappa_1}*\mu^X-x\boldsymbol 1_{x>\kappa_1}*\nu^X - A. \end{equation} The first two terms on the right-hand side converge on $D$, as does the third term since $X=X^\rho$. However, since $\limsup_{t\to\tau}X_t>-\infty$ on $D$ by hypothesis, this also forces the last two terms to converge on~$D$, and we deduce~\ref{T:conv:a} as desired. It remains to prove that $\widehat X$ converges on~$D$, and for this we will rely repeatedly on the equality $X=X^\rho$ without explicit mention. In view of~\eqref{eq:T:conv:ftob} and the bound $|X'|\le\kappa_2$, we have \[ \widehat X \geq X - \kappa_2 - x \boldsymbol 1_{x>\kappa_1} * \mu^X = X - \kappa_2 -(\Delta X_\rho)^+\boldsymbol 1_{[\![\rho,\tau[\![}. \] Moreover, by definition of $\widehat X$ and $\rho$ we have $\widehat X\ge 0$ on $\lc0,\rho[\![$; hence $\widehat X\ge \Delta X_\rho\boldsymbol 1_{[\![\rho,\tau[\![}$. We deduce that on $\{\rho<\tau\text{ and }\Delta X_\rho<0\}$ we have $\widehat X^-\le X^-+\kappa_2$ and $\widehat X^-\le(\Delta X)^-$. On the complement of this event, it follows directly from the definition of $\widehat X$ that $\widehat X\ge0$. To summarize, we have $\widehat X^- \leq (\Delta X)^- \wedge X^- + \kappa_2 \leq \Theta + \kappa_2$. Lemma~\ref{L:SMC} now implies that $\widehat X$ converges, which proves the stated implication. {\ref{T:conv:a}} \& {\ref{T:conv:f}} $\Longrightarrow$ {\ref{T:conv:g}}: We now additionally assume that $X$ is constant on $[\![\tau_J,\tau[\![$. First, note that ${\mathcal E}(X)$ changes sign finitely many times on $D$ since $ \boldsymbol 1_{x < -1} * \mu_{\tau-}^X \leq x^2 *\mu_{\tau-}^X < \infty$ on $D$.
Therefore, it is sufficient to check that $\lim_{t\to\tau}|{\mathcal E}(X)_t|$ exists in $(0,\infty)$ on $D \cap \{\tau_J = \infty\}$. However, this follows from the fact that $\log|{\mathcal E}(X)|=X - [X^c,X^c] /2 - (x - \log |1+x|) * \mu^X$ on $[\![ 0, \tau_J[\![$ and the inequality $x-\log(1+x)\le x^2$ for all $x\ge-1/2$. {\ref{T:conv:g}} $\Longrightarrow$ {\ref{T:conv:a'}}: Note that the limit $\lim_{t \to \tau} \big( X_t - [X^c,X^c]_t/2 - (x - \log (1+x))\boldsymbol 1_{x>-1} * \mu_t^X \big)$ exists in $\mathbb{R}$ on $D$, which then yields the implication. \qed \subsection{Proof of Theorem~\ref{T:convYX}} The proof relies on a number of intermediate lemmas. We start with a special case of Markov's inequality that is useful for estimating conditional probabilities in terms of unconditional probabilities. This inequality is then applied in a general setting to control conditional probabilities of excursions of convergent processes. \begin{lemma}[A Markov type inequality] \label{L:cprob} Let ${\mathcal G}\subset{\mathcal F}$ be a sub-$\sigma$-field, and let $G\in{\mathcal G}$, $F\in{\mathcal F}$, and $\delta>0$. Then \[ \mathbb{P}\left(\boldsymbol 1_G\,\mathbb{P}( F\mid{\mathcal G} ) \geq \delta \right)\ \le\ \frac{1}{\delta}\mathbb{P}(G \cap F). \] \end{lemma} \begin{proof} We have $\mathbb{P}(G\cap F) = \mathbb{E}\left[ \boldsymbol 1_G\,\mathbb{P}(F\mid{\mathcal G}) \right] \ge \delta\, \mathbb{P}\left(\boldsymbol 1_G\,\mathbb{P}(F\mid{\mathcal G})\geq \delta\right)$. \end{proof} \begin{lemma} \label{L:cprob2} Let $\tau$ be a foretellable time, let $W$ be a measurable process on $\lc0,\tau[\![$, and let $(\rho_n)_{n\in\mathbb{N}}$ be a nondecreasing sequence of stopping times with $\lim_{n\to\infty}\rho_n\ge\tau$. Suppose the event \[ C=\Big\{\lim_{t\to\tau} W_t=0 \text{ and } \rho_n<\tau \text{ for all }n\in\mathbb{N}\Big\} \] lies in ${\mathcal F}_{\tau-}$.
Then for each $\varepsilon>0$, \[ \mathbb{P}\left( W_{\rho_n} \le \varepsilon \mid {\mathcal F}_{\rho_n-}\right) \ge \frac{1}{2} \quad\text{for infinitely many }n\in\mathbb{N} \] holds almost surely on $C$. \end{lemma} \begin{proof} By Theorem~IV.71 in~\cite{Dellacherie/Meyer:1978}, $\tau$ is almost surely equal to some predictable time $\tau'$. We may thus assume without loss of generality that $\tau$ is already predictable. Define events $F_n = \{W_{\rho_n} > \varepsilon \text{ and } \rho_n<\tau\}$ and $G_n = \{\mathbb{P}(C\mid{\mathcal F}_{\rho_n-}) > 1/2\}$ for each $n \in \mathbb{N}$ and some fixed $\varepsilon > 0$. By Lemma~\ref{L:cprob}, we have \begin{equation}\label{eq:L:cprob2} \mathbb{P}\left( \boldsymbol 1_{G_n} \mathbb{P}( F_n \mid {\mathcal F}_{\rho_n-}) > \frac{1}{2} \right) \le 2\,\mathbb{P}(G_n\cap F_n) \le 2\,\mathbb{P}(F_n\cap C) + 2\,\mathbb{P}(G_n\cap C^c). \end{equation} Clearly, we have $\lim_{n\to \infty}\mathbb{P}(F_n\cap C)= 0$. Also, since $\rho_\infty=\lim_{n\to\infty}\rho_n\ge\tau$, we have $\lim_{n\to\infty}\mathbb{P}(C\mid{\mathcal F}_{\rho_n-})=\mathbb{P}(C\mid{\mathcal F}_{\rho_\infty-})=\boldsymbol 1_C$. Thus $\boldsymbol 1_{G_n}=\boldsymbol 1_C$ for all sufficiently large $n \in \mathbb{N}$, and hence $\lim_{n\to\infty}\mathbb{P}(G_n\cap C^c)=0$ by bounded convergence. The left-hand side of~\eqref{eq:L:cprob2} thus tends to zero as $n$ tends to infinity, so that, passing to a subsequence if necessary, the Borel-Cantelli lemma yields $\boldsymbol 1_{G_n} \mathbb{P}( F_n \mid {\mathcal F}_{\rho_n-})\le1/2$ for all but finitely many $n \in \mathbb{N}$. Thus, since $\boldsymbol 1_{G_n}=\boldsymbol 1_C$ eventually, we have $\mathbb{P}( F_n \mid {\mathcal F}_{\rho_n-})\le1/2$ for infinitely many $n \in \mathbb{N}$ on $C$. Since $\tau$ is predictable we have $\{\rho_n<\tau\}\in{\mathcal F}_{\rho_n-}$ by Theorem~IV.73(b) in \cite{Dellacherie/Meyer:1978}. 
Thus $\mathbb{P}( F_n \mid {\mathcal F}_{\rho_n-})=\mathbb{P}( W_{\rho_n}>\varepsilon \mid {\mathcal F}_{\rho_n-})$ on $C$, which yields the desired conclusion. \end{proof} Returning to the setting of Theorem~\ref{T:convYX}, we now show that $\gamma$ vanishes asymptotically on the event~\eqref{T:convYX:4}. \begin{lemma} \label{L:convYX:1} Under Assumption~\ref{A:2}, we have $\lim_{t\to\tau}\gamma_t=0$ on~\eqref{T:convYX:4}. \end{lemma} \begin{proof} As in the proof of Lemma~\ref{L:cprob2} we may assume that $\tau$ is predictable. We now argue by contradiction. To this end, assume there exists $\varepsilon>0$ such that $\mathbb{P}(C)>0$ where $C=\{\gamma_t\ge 2 \varepsilon\text{ for infinitely many $t$}\} \cap \eqref{T:convYX:4}$. Let $(\rho_n)_{n\in\mathbb{N}}$ be a sequence of predictable times covering the predictable set $\{\gamma\ge2\varepsilon\}$. By \eqref{eq:DY} and since $X$ and $Y$ are c\`adl\`ag, any compact subset of $[0,\tau)$ can only contain finitely many time points~$t$ for which $\gamma_t\ge 2\varepsilon$. We may thus take the $\rho_n$ to satisfy $\rho_n<\rho_{n+1}<\tau$ on $C$ for all $n\in\mathbb{N}$, as well as $\lim_{n\to\infty}\rho_n\ge\tau$. We now have, for each $n\in\mathbb{N}$ on $\{\rho_n<\tau\}$, \begin{align*} 0 &= \int x\,\nu^X(\{\rho_n\}, \mathrm{d} x) \le -(1-e^{-\varepsilon}) \mathbb{P}\left(\Delta X_{\rho_n} \le e^{-\varepsilon}-1\mid{\mathcal F}_{\rho_n-}\right) + \int x\boldsymbol 1_{x>0}\, \nu^X(\{\rho_n\},\mathrm{d} x) \\ &\le -(1-e^{-\varepsilon}) \mathbb{P}\left(\Delta Y_{\rho_n} \le \varepsilon\mid{\mathcal F}_{\rho_n-}\right) + \int x\boldsymbol 1_{x>0}\, \nu^X(\{\rho_n\},\mathrm{d} x), \end{align*} where the equality uses the local martingale property of $X$, the first inequality is an elementary bound involving Equation~II.1.26 in~\citet{JacodS}, and the second inequality follows from~\eqref{eq:DY}.
Thus on $C$, \begin{equation} \label{eq:convYX:1:1} x \boldsymbol 1_{x \geq 0 \vee (e^{\varepsilon-\gamma}-1)}*\nu^X_{\tau-} \ge \sum_{n\in\mathbb{N}}\int x\boldsymbol 1_{x>0}\, \nu^X(\{\rho_n\},\mathrm{d} x) \ge (1-e^{-\varepsilon})\sum_{n\in\mathbb{N}}\mathbb{P}\left(\Delta Y_{\rho_n} \le \varepsilon\mid{\mathcal F}_{\rho_n-}\right). \end{equation} With $W = \Delta Y$, Lemma~\ref{L:cprob2} implies that the right-hand side of~\eqref{eq:convYX:1:1} is infinite almost surely on $C$. We now argue that the left-hand side is finite almost surely on \eqref{T:convYX:4} $\supset C$, yielding the contradiction. To this end, since $\lim_{t\to\tau}\Delta Y_t=0$ on~\eqref{T:convYX:4}, we have $\boldsymbol 1_{x > e^{\varepsilon - \gamma} - 1} * \mu^X_{\tau-} < \infty$ on~\eqref{T:convYX:4}. Lemma~\ref{L:BJ} and an appropriate localization argument applied to the random measure \[ \mu=\boldsymbol 1_{0 \vee (e^{\varepsilon-\gamma}-1)\le x\le \kappa} \boldsymbol 1_{[\![ 0, \tau[\![}\,\mu^X \] yield $x \boldsymbol 1_{0 \vee (e^{\varepsilon-\gamma}-1)\le x\le \kappa}*\nu^X_{\tau-}<\infty$; here $\kappa$ is as in Theorem~\ref{T:convYX}. Since also $x\boldsymbol 1_{x>\kappa}*\nu^X_{\tau-}<\infty$ on~\eqref{T:convYX:4} by definition, the left-hand side of~\eqref{eq:convYX:1:1} is finite.
Using the equivalence with Theorem~\ref{T:conv}\ref{T:conv:c}, we obtain that $[X^c, X^c]_{\tau-} = [Y^c, Y^c]_{\tau-} < \infty$ and \begin{align}\label{eq: L:convYX:0 new proof} \Big( (\log(1+x)+\gamma)^2 \wedge |\log(1+x)+\gamma|\Big) * \nu^X_{\tau-} + \sum_{s<\tau} (\gamma_s^2\wedge\gamma_s)\1{\Delta X_s=0} = (y^2\wedge|y|)*\nu^Y_{\tau-} < \infty \end{align} on \eqref{T:convYX:4}, where the equality in~\eqref{eq: L:convYX:0 new proof} follows from \eqref{eq:DY}. Now, by localization, Lemma~\ref{L:convYX:1}, and Lemma~\ref{L:ELI}\ref{L:ELI:4}, we may assume that $\sup_{t < \tau} \gamma_t$ is bounded. We then obtain from~\eqref{eq: L:convYX:0 new proof} that $(\log(1+x)+\gamma)^2 \boldsymbol 1_{|x| \leq \varepsilon} * \nu^X_{\tau-} < \infty$ on \eqref{T:convYX:4}. Next, note that \begin{align*} -\log(1+x)\boldsymbol 1_{x\le -\varepsilon} * \nu^X_{\tau-} &= -\log(1+x)\boldsymbol 1_{x\le-\varepsilon} \boldsymbol 1_{\{\gamma<-\log(1-\varepsilon)/2\}} * \nu^X_{\tau-} \\ &\qquad + \sum_{t < \tau} \int -\log(1+x)\boldsymbol 1_{x\le-\varepsilon} \boldsymbol 1_{\{\gamma\geq -\log(1-\varepsilon)/2\}}\, \nu^X(\{t\},\mathrm{d} x) < \infty \end{align*} on \eqref{T:convYX:4}. Indeed, an argument based on \eqref{eq: L:convYX:0 new proof} shows that the first summand is finite. The second summand is also finite since, by Lemma~\ref{L:convYX:1}, it consists of finitely many terms, each of which is finite. The latter follows since $(x-\log(1+x))*\nu^X$ is a finite-valued process by assumption and $\int|x|\,\nu^X(\{t\},\mathrm{d} x)<\infty$ for all $t<\tau$ due to the local martingale property of $X$. Finally, a calculation based on \eqref{eq: L:convYX:0 new proof} yields $x\boldsymbol 1_{\varepsilon\le x\le \kappa}*\nu^X_{\tau-}<\infty$ on the event \eqref{T:convYX:4}, where $\kappa$ is as in Theorem~\ref{T:convYX}. This, together with the definition of \eqref{T:convYX:4}, implies that $x \boldsymbol 1_{x \geq \varepsilon} * \nu^X_{\tau-} < \infty$ there, completing the proof.
\end{proof} We are now ready to verify the claimed inclusions of Theorem~\ref{T:convYX}. \eqref{T:convYX:1} $\subset$ \eqref{T:convYX:2}: The implication \ref{T:conv:a} $\Longrightarrow$ \ref{T:conv:g} of Theorem~\ref{T:conv} shows that ${\mathcal E}(X)_{\tau-}>0$ on~\eqref{T:convYX:1}. The desired inclusion now follows from~\eqref{eq:VV}. \eqref{T:convYX:2} $\subset$ \eqref{T:convYX:1}: By the inclusion \eqref{T:conv2:2} $\subset$ \eqref{T:conv2:1} of Corollary~\ref{C:conv2} and the implication \ref{T:conv:a} $\Longrightarrow$ \ref{T:conv:g} of Theorem~\ref{T:conv}, $X$ converges and ${\mathcal E}(X)_{\tau-}>0$ on \eqref{T:convYX:2}. Hence by \eqref{eq:VV}, $Y$ also converges on \eqref{T:convYX:2}. \eqref{T:convYX:1} $\cap$ \eqref{T:convYX:2} $\subset$ \eqref{T:convYX:3}: Obvious. \eqref{T:convYX:3} $\subset$ \eqref{T:convYX:2}: The inclusion \eqref{T:conv2:1} $\subset$ \eqref{T:conv2:2} of Corollary~\ref{C:conv2} implies $[X^c,X^c]_{\tau-}+(x^2\wedge|x|)*\nu^X_{\tau-}<\infty$ on~\eqref{T:convYX:3}. Since also $-\log(1+x)\boldsymbol 1_{x\le -\eta}*\nu^X_{\tau-} < \infty$ on~\eqref{T:convYX:3} by definition, the desired inclusion follows. \eqref{T:convYX:1} $\cap$ \eqref{T:convYX:2} $\subset$ \eqref{T:convYX:4}: Obvious. \eqref{T:convYX:4} $\subset$ \eqref{T:convYX:1}: We need to show that $X$ converges on \eqref{T:convYX:4}. By Theorem~\ref{T:conv} it is sufficient to argue that $[X^c,X^c]_{\tau-}+(x^2\wedge|x|)*\nu^X_{\tau-}<\infty$ on \eqref{T:convYX:4}. Lemma~\ref{L:convYX:0 new} yields directly that $[X^c,X^c]_{\tau-}< \infty$, so we focus on the jump component. 
To this end, using that $\int x\,\nu^X(\{t\},\mathrm{d} x)=0$ for all $t<\tau$, we first observe that, for fixed $\varepsilon\in (0,1)$, \begin{align*} \gamma_t &= \int (x-\log(1+x)) \,\nu^X(\{t\},\mathrm{d} x) \\ &\le \frac{1}{1-\varepsilon} \int x^2\boldsymbol 1_{|x|\le \varepsilon}\,\nu^X(\{t\},\mathrm{d} x) + \int x\boldsymbol 1_{x>\varepsilon}\,\nu^X(\{t\},\mathrm{d} x) + \int -\log(1+x)\boldsymbol 1_{x<-\varepsilon}\,\nu^X(\{t\},\mathrm{d} x) \end{align*} for all $t<\tau$. Letting $\Theta_t$ denote the sum of the last two terms for each $t < \tau$, Lemma~\ref{L:convYX:0 new} implies that $\sum_{t<\tau}\Theta_t<\infty$, and hence also $\sum_{t<\tau}\Theta_t^2<\infty$, hold on~\eqref{T:convYX:4}. Furthermore, the inequality $(a+b)^2\le 2a^2 + 2b^2$ yields that \begin{align} \label{eq: sum gamma2} \sum_{t<\tau_n} \gamma_t^2 &\le \frac{2}{(1-\varepsilon)^2} \sum_{t<\tau_n} \left( \int x^2\boldsymbol 1_{ |x| \le \varepsilon}\,\nu^X(\{t\},\mathrm{d} x) \right)^2 + 2 \sum_{t<\tau_n} \Theta_t^2 \le \frac{2 \varepsilon^2}{(1-\varepsilon)^2} x^2\boldsymbol 1_{|x|\le \varepsilon} * \nu^X_{\tau_n} + 2 \sum_{t<\tau} \Theta_t^2 \end{align} for all $n \in \mathbb{N}$, where $(\tau_n)_{n\in\mathbb{N}}$ denotes an announcing sequence for~$\tau$. Also observe that, for all $n \in \mathbb{N}$, \begin{align*} \frac{1}{16} x^2 \boldsymbol 1_{|x|\le \varepsilon} * \nu^X_{\tau_n} &\leq (\log(1+x))^2 \boldsymbol 1_{|x|\leq \varepsilon} * \nu^X_{\tau_n} \leq 2 (\log(1+x) + \gamma)^2 \boldsymbol 1_{|x| \leq \varepsilon} * \nu^X_{\tau_n} + 2 \sum_{t \leq \tau_n} \gamma_t^2, \end{align*} which yields, thanks to \eqref{eq: sum gamma2}, \begin{align*} \left(\frac{1}{16} - \frac{4 \varepsilon^2}{(1-\varepsilon)^2} \right) x^2 \boldsymbol 1_{|x|\le \varepsilon} * \nu^X_{\tau_n} \leq 2 (\log(1+x) + \gamma)^2 \boldsymbol 1_{|x| \le \varepsilon} * \nu^X_{\tau_n} + 4 \sum_{t<\tau} \Theta_t^2.
\end{align*} Choosing $\varepsilon$ small enough and letting $n$ tend to infinity, we obtain that $x^2 \boldsymbol 1_{|x|\le \varepsilon} * \nu^X_{\tau-} < \infty$ on \eqref{T:convYX:4} thanks to Lemma~\ref{L:convYX:0 new}. The same lemma also yields $|x| \boldsymbol 1_{|x|\geq \varepsilon} * \nu^X_{\tau-} < \infty$, which concludes the proof. \qed \section{Novikov-Kazamaki conditions} \label{S:NK} We now apply our convergence results to prove general Novikov-Kazamaki type conditions. Throughout this section we fix a nonnegative local martingale $Z$ with $Z_0=1$, and define $\tau_0=\inf\{t \geq 0:Z_t=0\}$. We assume that $Z$ does not jump to zero, meaning that $Z_{\tau_0-}=0$ on $\{\tau_0<\infty\}$. The stochastic logarithm $M={\mathcal L}(Z)$ is then a local martingale on $\lc0,\tau_0[\![$ with $\Delta M>-1$; see Appendix~\ref{A:SE}. We let $\tau_\infty=\lim_{n\to \infty}\inf\{t \geq 0:Z_t\ge n\}$ denote the explosion time of $Z$; clearly, $\mathbb{P}(\tau_\infty<\infty)=0$. To distinguish between different probability measures we now write $\mathbb{E}_R[\,\cdot\,]$ for the expectation operator under a probability measure $R$. \subsection{General methodology} \label{S:method} The idea of our approach is to use $Z$ as the density process of a measure change, without knowing {\em a priori} whether~$Z$ is a uniformly integrable martingale. This can be done whenever the filtration $\mathbb{F}$ is the right-continuous modification of a standard system, for instance if $\mathbb{F}$ is the right-continuous canonical filtration on the space of (possibly explosive) paths; see \citet{F1972} and \citet{Perkowski_Ruf_2014}. This assumption rules out augmenting $\mathbb{F}$ with the $\mathbb{P}$--nullsets, which is one reason for avoiding the ``usual conditions'' in the preceding theory. We assume $\mathbb{F}$ has this property, and emphasize that no generality is lost; see Appendix~\ref{app:embed}.
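Before proceeding, it may help to recall two standard facts about a nonnegative local martingale $Z$ with $Z_0=1$; both are classical consequences of the supermartingale property and are not specific to the present setting:

```latex
% Z is a nonnegative local martingale with Z_0 = 1, hence a supermartingale.
\[
  \mathbb{E}_\mathbb{P}[Z_\sigma] \le 1 \quad\text{for every stopping time }\sigma,
  \qquad\text{and}\qquad
  Z \text{ is a uniformly integrable martingale}
  \;\iff\; \mathbb{E}_\mathbb{P}[Z_\infty] = 1,
\]
```

where $Z_\infty=\lim_{t\to\infty}Z_t$ exists almost surely by supermartingale convergence. The Novikov-Kazamaki type conditions developed in this section can thus be viewed as criteria ensuring that no mass is lost in this limit.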
The resulting measure, denoted $\mathbb{Q}$ throughout this section, is sometimes called the {\em F\"ollmer measure}; Appendix~\ref{S:follmer} reviews some relevant facts. Its crucial property is that $Z$ explodes with positive $\mathbb{Q}$--probability if and only if $Z$ is not a uniformly integrable martingale under~$\mathbb{P}$. This is where our convergence results enter the picture: under Novikov-Kazamaki type conditions, they are used to exclude explosions under~$\mathbb{Q}$. The following ``code book'' contains the basic definitions and facts that are used to translate previous sections into the current setting. It will be used extensively throughout this section. \begin{enumerate} \item Consider the process \begin{equation} \label{eq:N} N = -M+\langle M^c,M^c\rangle+\frac{x^2}{1+x} * \mu^M \end{equation} on $\lc0,\tau_0[\![$. Theorem~\ref{T:conv} in conjunction with Theorems~\ref{T:SE}, \ref{T:reciprocal}, and \ref{T numeraire} readily implies: \begin{itemize} \item $1/Z={\mathcal E}(N)$ and $N$ is a local martingale on $\lc0,\tau_\infty[\![$ under~$\mathbb{Q}$. \item $Z$ is a uniformly integrable martingale under~$\mathbb{P}$ if and only if $\mathbb{Q}(\lim_{t\to\tau_\infty}N_t\text{ exists in }\mathbb{R})=1$. \end{itemize} \item Consider the process \begin{equation} \label{eq:L} L = -M+\langle M^c,M^c\rangle+(x-\log(1+x)) * \mu^M + ((1+x)\log(1+x)-x)*\nu^M \end{equation} on $\lc0,\tau_0[\![$. This is always well-defined, but the last term may be infinite-valued. By Theorem~\ref{T:reciprocal} the last term equals $(y-\log(1+y))*\widehat\nu^N$, where $\widehat\nu^N=\nu^N/(1+y)$. If it is finite-valued, we have: \begin{itemize} \item $\widehat\nu^N$ is the predictable compensator of~$\mu^N$ under~$\mathbb{Q}$; see Lemma~\ref{L:predc}. \item $L=N^c+\log(1+y)*(\mu^N-\widehat\nu^N)$. Note that $\log(1+y)$ lies in $G_{\rm loc}(\mu^N)$ under~$\mathbb{Q}$ since both $y$ and $y-\log(1+y)$ do.
\item We are in the setting of Assumption~\ref{A:2} under~$\mathbb{Q}$, with $\tau=\tau_\infty$, $X=N$ (hence $\nu^X=\widehat\nu^N$), and $Y=L$. \end{itemize} \end{enumerate} With this ``code book'' at our disposal we may now give a quick proof of the following classical result due to \citet{Lepingle_Memin_Sur}. The proof serves as an illustration of the general technique and as a template for proving the more sophisticated results presented later on. \begin{theorem}[The \citet{Lepingle_Memin_Sur} conditions] \label{T:LepMem} On $\lc0,\tau_0[\![$, define the nondecreasing processes \begin{align} A &= \frac{1}{2}\langle M^c,M^c\rangle + \Big(\log(1+x) - \frac{x}{1+x}\Big)*\mu^M; \label{eq:LM_A} \\ B &= \frac{1}{2}\langle M^c,M^c\rangle + ((1+x)\log(1+x) - x)*\nu^M. \label{eq:LM_B} \end{align} If either $\mathbb{E}_\mathbb{P}[e^{A_{\tau_0-}}]<\infty$ or $\mathbb{E}_\mathbb{P}[e^{B_{\tau_0-}}]<\infty$, then $Z$ is a uniformly integrable martingale. \end{theorem} \begin{proof} We start with the criterion using $A$. A brief calculation gives the identity \[ A = \left( M - \frac{1}{2}\langle M^c,M^c\rangle - (x-\log(1+x))*\mu^M \right) + \left( -M + \langle M^c,M^c\rangle + \frac{x^2}{1+x}*\mu^M \right) =\log Z + N. \] Thus, using also that $A$ is nondecreasing, we obtain \[ \infty > \sup_{\sigma\in{\mathcal T}} \mathbb{E}_\mathbb{P}\left[e^{A_\sigma}\1{\sigma<\tau_0}\right] = \sup_{\sigma\in{\mathcal T}} \mathbb{E}_\mathbb{Q}\left[e^{N_\sigma}\1{\sigma<\tau_\infty}\right]. \] The implication \ref{T:conv1:c} $\Longrightarrow$ \ref{T:conv1:a} in Corollary~\ref{C:conv001} now shows that $\lim_{t\to\tau_\infty}N_t$ exists in~$\mathbb{R}$, $\mathbb{Q}$--almost surely. Thus $Z$ is a uniformly integrable martingale under~$\mathbb{P}$. The criterion using $B$ is proved similarly. First, the assumption implies that~$B$ and hence~$L$ are finite-valued. 
Theorem~\ref{T:reciprocal} and a calculation yield \[ B = \frac{1}{2}\langle N^c,N^c\rangle + ((1+\phi(y))\log(1+\phi(y)) - \phi(y))*\nu^N =\frac{1}{2}\langle N^c,N^c\rangle + (y-\log(1+y))*\widehat\nu^N, \] where $\phi$ is the involution in~\eqref{eq:phi}. Observing that $L = N - (y - \log(1+y))* (\mu^N-\widehat\nu^N)$, we obtain $-\log Z = \log {\mathcal E}(N) = L - B$, and hence \[ \infty > \sup_{\sigma\in{\mathcal T}} \mathbb{E}_\mathbb{P}\left[e^{B_\sigma}\1{\sigma<\tau_0}\right] = \sup_{\sigma\in{\mathcal T}} \mathbb{E}_\mathbb{Q}\left[e^{L_\sigma}\1{\sigma<\tau_\infty}\right]. \] The implication \ref{T:convYX2:c} $\Longrightarrow$ \ref{T:convYX2:a} in Corollary~\ref{C:convYX2} now shows that $N$ converges $\mathbb{Q}$--almost surely. \end{proof} \begin{remark}We make the following observations concerning Theorem~\ref{T:LepMem}: \begin{itemize} \item By Theorem~\ref{T:reciprocal} we have the following representations, where $\widehat{\nu}^N = \nu^N/(1+y)$: \begin{align*} A &= \frac{1}{2}\langle N^c,N^c\rangle + (y- \log(1+y))*\mu^N;\\ B &= \frac{1}{2}\langle N^c,N^c\rangle + (y-\log(1+y) )*\widehat\nu^N. \end{align*} Thus, by Lemma~\ref{L:predc}, $B$ is the predictable compensator (if it exists) of $A$ under the F\"ollmer measure $\mathbb{Q}$; see also Remarque~III.12 in \citet{Lepingle_Memin_Sur}. \item Either of the two conditions in Theorem~\ref{T:LepMem} implies that $Z_\infty>0$ and thus $\tau_0 = \infty$, thanks to Theorem~\ref{T:conv}. This has already been observed in Lemmes~III.2 and III.8 in \citet{Lepingle_Memin_Sur}. \item For the condition involving $B$, \citet{Lepingle_Memin_Sur} allow $Z$ to jump to zero. This case can be treated using our approach as well, albeit with more intricate arguments. For reasons of space, we focus on the case where $Z$ does not jump to zero.
\qed \end{itemize} \end{remark} \citet{Protter_Shimbo} and \citet{Sokol2013_optimal} observe that if $\Delta M \geq -1+\delta$ for some $\delta > 0$ then the expressions $\log(1+x) - x/(1+x)$ and $(1+x) \log(1+x) - x$ appearing in \eqref{eq:LM_A} and \eqref{eq:LM_B}, respectively, can be bounded by simplified (and more restrictive) expressions. \subsection{An abstract characterization and its consequences} In a related paper, \citet{Lepingle_Memin_integrabilite} embed the processes $A$ and $B$ from Theorem~\ref{T:LepMem} into a parameterized family of processes $A^a$ and $B^a$, which can be defined on $\lc0,\tau_0[\![$ for each $a\in\mathbb{R}$ by \begin{align}\label{eq:A} A^{a} &= aM + \left( \frac{1}{2}-a\right) [M^c,M^c] + \left( \log(1+x) - \frac{ax^2+x}{1+x}\right) * \mu^M; \\ \nonumber B^{a} &= aM + \left( \frac{1}{2}-a\right) [M^c,M^c] - a\left( x-\log(1+x) \right) * \mu^M +(1-a)\left((1+x)\log(1+x) - x\right) * \nu^M. \end{align} Note that $A^0=A$ and $B^0=B$. They then prove that uniform integrability of $(e^{A^a_\sigma})_{\sigma\in{\mathcal T}}$ or $(e^{B^a_\sigma})_{\sigma\in{\mathcal T}}$ for some $a\in[0,1)$ implies that $Z$ is a uniformly integrable martingale. Our present approach sheds new light on this result and enables us to strengthen it. A key observation is that $A^a$ and $B^a$ satisfy the following identities, which extend those for $A$ and $B$ appearing in the proof of Theorem~\ref{T:LepMem}. Recall that $N$ and $L$ are given by~\eqref{eq:N} and~\eqref{eq:L}. \begin{lemma} \label{L:ABdecomp} The processes $A^a$ and $B^a$ satisfy, $\mathbb{Q}$--almost surely on $\lc0,\tau_\infty[\![$, \begin{align*} A^a &= \log Z + (1-a)N;\\ B^{a} &= \log Z + (1-a)L. \end{align*} \end{lemma} \begin{proof} The identities follow from Theorem~\ref{T:reciprocal} and basic computations that we omit here. \end{proof} We now state a general result giving necessary and sufficient conditions for $Z$ to be a uniformly integrable martingale.
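To make the computation omitted in the proof of Lemma~\ref{L:ABdecomp} concrete, here is a sketch of the first identity; it uses only $\log Z = \log{\mathcal E}(M) = M - \frac{1}{2}\langle M^c,M^c\rangle - (x-\log(1+x))*\mu^M$, as in the proof of Theorem~\ref{T:LepMem}, together with the definitions~\eqref{eq:N} and~\eqref{eq:A}:

```latex
\begin{align*}
\log Z + (1-a)N
&= M - \frac{1}{2}\langle M^c,M^c\rangle - (x-\log(1+x))*\mu^M \\
&\qquad + (1-a)\left(-M + \langle M^c,M^c\rangle + \frac{x^2}{1+x}*\mu^M\right) \\
&= aM + \left(\frac{1}{2}-a\right)\langle M^c,M^c\rangle
   + \left(\log(1+x) - x + (1-a)\frac{x^2}{1+x}\right)*\mu^M.
\end{align*}
```

Since $-x+\frac{x^2}{1+x}=-\frac{x}{1+x}$ and $\langle M^c,M^c\rangle=[M^c,M^c]$, the right-hand side is precisely $A^a$ as defined in~\eqref{eq:A}. The identity for $B^a$ is obtained in the same way from~\eqref{eq:L} and Theorem~\ref{T:reciprocal}.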
It is a direct consequence of the convergence results in Section~\ref{S:convergence}. In combination with Lemma~\ref{L:ABdecomp} this will yield improvements of the result by \citet{Lepingle_Memin_integrabilite} and give insight into how far such results can be generalized. \begin{theorem}[Abstract \citet{Lepingle_Memin_integrabilite} conditions] \label{T:NK} Let $f:\mathbb{R}\to\mathbb{R}_+$ be any nondecreasing function with $f(x)\ge x$ for all sufficiently large~$x$, and let $\epsilon \in \{-1,1\}$, $\kappa > 0$, and $\eta \in (0,1)$. Then the following conditions are equivalent: \begin{enumerate}[label={\rm(\alph*)},ref={\rm(\alph*)}] \item\label{T:NK:1} $Z$ is a uniformly integrable martingale. \item\label{T:NK:2} There exists an optional process $U$, extended locally integrable on $\lc0,\tau_\infty[\![$ under $\mathbb{Q}$, such that \begin{equation} \label{eq:4.7} \sup_{\sigma\in{\mathcal T}} \mathbb{E}_\mathbb{P}\left[ Z_\sigma f(\epsilon N_\sigma-U_\sigma)\1{\sigma < \tau_0}\right] < \infty. \end{equation} \end{enumerate} Moreover, the following conditions are equivalent: \begin{enumerate}[label={\rm(\alph*)},ref={\rm(\alph*)},resume] \item\label{T:NK:3} $Z$ is a uniformly integrable martingale and $(1+x)\log(1+x) \boldsymbol 1_{x>\kappa} * \nu^M$ is extended locally integrable on $\lc0,\tau_\infty[\![$ under $\mathbb{Q}$. \item\label{T:NK:4} $(1+x)\log(1+x) \boldsymbol 1_{x > \kappa} * \nu^M$ is finite-valued, $-x \boldsymbol 1_{x<-\eta} * \nu^M$ is extended locally integrable on $\lc0,\tau_\infty[\![$ under $\mathbb{Q}$, and \begin{align*} \sup_{\sigma \in \mathcal{T}} \mathbb{E}_\mathbb{P}\left[Z_\sigma f(\epsilon L_\sigma-U_\sigma)\right] < \infty \end{align*} for some optional process $U$, extended locally integrable on $\lc0,\tau_\infty[\![$ under~$\mathbb{Q}$. 
\end{enumerate} \end{theorem} \begin{important remark}[Characterization of extended local integrability] \label{R:ELI Q P} The extended local integrability under $\mathbb{Q}$ in Theorem~\ref{T:NK}\ref{T:NK:2}--\ref{T:NK:4} can also be phrased in terms of the ``model primitives'', that is, under $\mathbb{P}$. As this reformulation is somewhat subtle---in particular, extended local integrability under $\mathbb{P}$ is {\em not} equivalent---we have opted for the current formulation in terms of~$\mathbb{Q}$. A characterization under~$\mathbb{P}$ is provided in Appendix~\ref{App:Z}, which should be consulted by any reader who prefers to work exclusively under~$\mathbb{P}$. A crude, but simple, sufficient condition for $U$ to be extended locally integrable under~$\mathbb{Q}$ is that $U$ is bounded. \qed \end{important remark} \begin{proof}[Proof of Theorem~\ref{T:NK}] We use the ``code book'' from Subsection~\ref{S:method} freely. Throughout the proof, suppose $\epsilon=1$; the case $\epsilon=-1$ is similar. Observe that \eqref{eq:4.7} holds if and only if \eqref{eq:T:conv:exp} holds under~$\mathbb{Q}$ with $X=N$ and $\tau=\tau_\infty$. Since $\Delta N>-1$, Corollary~\ref{C:conv001} thus shows that \ref{T:NK:2} is equivalent to the convergence of $N$ under $\mathbb{Q}$, which is equivalent to~\ref{T:NK:1}. To prove the equivalence of \ref{T:NK:3} and \ref{T:NK:4}, first note that $((1+x)\log(1+x) - x) * \nu^M$ is finite-valued under either condition, and hence so is~$L$. Note also the equalities \[ (1+x)\log(1+x)\boldsymbol 1_{x\ge \kappa}*\nu^M=-\log(1+y)\boldsymbol 1_{y\le-\kappa/(1+\kappa)}*\widehat\nu^N \] and \[ -x\boldsymbol 1_{x<-\eta}*\nu^M = y\boldsymbol 1_{y>\eta/(1-\eta)}*\widehat\nu^N. \] With the identifications in the ``code book'', \ref{T:NK:3} now states that the event~\eqref{T:convYX:3} has full probability under~$\mathbb{Q}$. By Theorem~\ref{T:convYX} this is equivalent to the event~\eqref{T:convYX:4} having full probability under $\mathbb{Q}$. 
Due to Corollary~\ref{C:conv001} this is equivalent to \ref{T:NK:4}. \end{proof} \begin{remark} \label{R:Uspecs} We observe that, given condition~\ref{T:NK:1} or~\ref{T:NK:3} in Theorem~\ref{T:NK}, we may always choose $U$ to equal $(1-a)N$ or $(1-a)L$, respectively. \qed \end{remark} \begin{corollary}[Generalized \citet{Lepingle_Memin_integrabilite} conditions] \label{C:NK1} Fix $a\ne1$ and $\eta \in (0,1)$. The following condition is equivalent to Theorem~\ref{T:NK}\ref{T:NK:1}: \begin{enumerate}[label={\rm(b$'\;\!$)},ref={\rm(b$'\;\!$)}] \item\label{C:NK1:2} There exists an optional process $U$, extended locally integrable on $\lc0,\tau_\infty[\![$ under~$\mathbb{Q}$, such that \begin{equation} \label{eq:NPN} \sup_{\sigma\in{\mathcal T}} \mathbb{E}_\mathbb{P}\left[ e^{A^a_\sigma - U_\sigma} \1{\sigma<\tau_0}\right] < \infty. \end{equation} \end{enumerate} Moreover, the following conditions are equivalent to Theorem~\ref{T:NK}\ref{T:NK:3}: \begin{enumerate}[label={\rm(d$'\;\!$)},ref={\rm(d$'\;\!$)}] \item\label{C:NK1:4} $(1+x)\log(1+x) \boldsymbol 1_{x > \kappa} * \nu^M$ is finite-valued, $-x \boldsymbol 1_{x<-\eta} * \nu^M$ is extended locally integrable on $\lc0,\tau_\infty[\![$ under $\mathbb{Q}$, and \begin{align} \label{T:NK:eq1} \sup_{\sigma \in \mathcal{T}} \mathbb{E}_\mathbb{P}\left[e^{B^a_\sigma - U_\sigma}\1{\sigma<\tau_0}\right] < \infty \end{align} for some optional process $U$, extended locally integrable on $\lc0,\tau_\infty[\![$ under~$\mathbb{Q}$. \end{enumerate} \begin{enumerate}[label={\rm(d$''\;\!$)},ref={\rm(d$''\;\!$)}] \item\label{C:NK1:5} $(1+x)\log(1+x) \boldsymbol 1_{x > \kappa} * \nu^M$ is finite-valued and there exists an optional process $U$, extended locally integrable on~$\lc0,\tau_\infty[\![$ under~$\mathbb{Q}$, such that the family $(e^{B_\sigma^{a} - U_\sigma} \1{\sigma<\tau_0})_{\sigma\in{\mathcal T}}$ is uniformly integrable. 
\end{enumerate} If $a\leq 0$, these conditions are implied by the following: \begin{enumerate}[label={\rm(d$'''\;\!$)},ref={\rm(d$'''\;\!$)}] \item\label{C:NK1:6} \eqref{T:NK:eq1} holds for some optional process $U$ that is extended locally bounded on~$\lc0,\tau_\infty[\![$ under~$\mathbb{Q}$. \end{enumerate} \end{corollary} \begin{proof} In view of Lemma~\ref{L:ABdecomp}, the equivalences \ref{C:NK1:2} $\Longleftrightarrow$ Theorem~\ref{T:NK}\ref{T:NK:1} and \ref{C:NK1:4} $\Longleftrightarrow$ Theorem~\ref{T:NK}\ref{T:NK:3} follow by choosing $f(x)=e^{(1-a)x}$ for all $x \in \mathbb{R}$ and $\epsilon={\rm sign}(1-a)$ in Theorem~\ref{T:NK}. The condition \ref{C:NK1:5} is implied by Theorem~\ref{T:NK}\ref{T:NK:3} thanks to the last statement in Corollary~\ref{C:convYX2}. Now assume that \ref{C:NK1:5} holds and assume for contradiction that $N$ does not converge under $\mathbb{Q}$. The assumed uniform integrability implies that for any $\varepsilon > 0$ there exists $\kappa_1 > 0$ such that \begin{equation} \label{eq:C:NK1:001} \sup_{\sigma \in \mathcal{T}} \mathbb{E}_\mathbb{Q}\left[e^{(1-a) L_\sigma - U_\sigma}\boldsymbol 1_{\{\sigma<\tau_\infty\} \cap \{1/Z_\sigma \le 1/\kappa_1\}}\right] = \sup_{\sigma \in \mathcal{T}} \mathbb{E}_\mathbb{P}\left[e^{B^a_\sigma - U_\sigma}\boldsymbol 1_{\{\sigma<\tau_0\} \cap \{Z_\sigma\ge \kappa_1\}}\right] < \varepsilon, \end{equation} using again Lemma~\ref{L:ABdecomp}. Now, it follows from Corollary~\ref{C:conv001} with $X=L$ and $f(x) = e^{(1-a)x}$ for all $x\in\mathbb{R}$ that~$L$ converges under~$\mathbb{Q}$. Hence $\inf_{t<\tau_\infty} ((1-a)L_t-U_t) = -\Theta$ for some finite nonnegative random variable $\Theta$. Furthermore, since by assumption $N$ does not converge under $\mathbb{Q}$, there is an event $C$ with $\mathbb{Q}(C)>0$ such that $\sigma<\tau_\infty$ on $C$, where $\sigma=\inf\{t\geq 0:1/Z_t\le1/\kappa_1\}$.
Consequently, the left-hand side of \eqref{eq:C:NK1:001} is bounded below by $\mathbb{E}_\mathbb{Q}[e^{-\Theta}\boldsymbol 1_C]>0$, independently of~$\varepsilon$. This gives the desired contradiction. Finally, \ref{C:NK1:6} implies that $(1+x)\log(1+x) \boldsymbol 1_{x > \kappa} * \nu^M$ is finite-valued. Hence by Corollary~\ref{C:convYX2} it also implies the remaining conditions, provided $a\le0$. \end{proof} \begin{remark} Without the assumption that $(1+x)\log(1+x) \boldsymbol 1_{x > \kappa} * \nu^M$ is finite-valued for some $\kappa>0$, the conditions \ref{C:NK1:4} and \ref{C:NK1:5} in Corollary~\ref{C:NK1} can be satisfied for all $a>1$ even if $Z$ is not a uniformly integrable martingale. On the other hand, there exist uniformly integrable martingales $Z$ for which \eqref{T:NK:eq1} does not hold for all $a<1$. These points are illustrated in Example~\ref{ex:5 16} below. \qed \end{remark} The implications Theorem~\ref{T:NK}\ref{T:NK:1} $\Longrightarrow$ Corollary~\ref{C:NK1}\ref{C:NK1:2} and Theorem~\ref{T:NK}\ref{T:NK:3} $\Longrightarrow$ Corollary~\ref{C:NK1}\ref{C:NK1:4}--\ref{C:NK1:6} seem to be new, even in the continuous case. The reverse directions imply several well-known criteria in the literature. Setting $a=0$ and $U=0$ in~\eqref{eq:NPN}, and using that $A^0$ is nondecreasing, we recover the first condition in Theorem~\ref{T:LepMem}. More generally, taking $a\in[0,1)$ and $U=0$ yields a strengthening of Theorem~I.1(5-$\alpha$) in \citet{Lepingle_Memin_integrabilite}. Indeed, the $L^1$-boundedness in~\eqref{eq:NPN}, rather than uniform integrability as assumed by L\'epingle and M\'emin (with $U=0$), suffices to conclude that $Z$ is a uniformly integrable martingale. This is however not the case when $A^a$ is replaced by $B^a$; the uniform integrability assumption in \ref{C:NK1:5} cannot be weakened to $L^1$-boundedness in general. Counterexamples to this effect are constructed in Subsection~\ref{SS:counter NK}.
However, the implication Corollary~\ref{C:NK1}\ref{C:NK1:4} $\Longrightarrow$ Theorem~\ref{T:NK}\ref{T:NK:3}, which also seems to be new, shows that if the jumps of $M$ are bounded away from zero then uniform integrability can be replaced by $L^1$--boundedness. In a certain sense our results quantify how far the L\'epingle and M\'emin conditions are from being necessary: the gap is precisely captured by the extended locally integrable (under $\mathbb{Q}$) process~$U$. In practice it is not clear how to find a suitable process $U$. A natural question is therefore how well one can do by restricting to the case $U=0$. Theorem~\ref{T:NK} suggests that one should look for other functions $f$ than the exponentials chosen in Corollary~\ref{C:NK1}. The best possible choice is $f(x)=x\boldsymbol 1_{x>\kappa}$ for some $\kappa>0$, which, focusing on $\epsilon=1$ and $A=A^0$ for concreteness, leads to the criterion \[ \sup_{\sigma\in{\mathcal T}} \mathbb{E}_\mathbb{P}\left[ e^{A_\sigma} N_\sigma e^{-N_\sigma}\1{N_\sigma>\kappa}\right] <\infty. \] Here one only needs to control $A$ on the set where $N$ takes large positive values. Moreover, on this set one is helped by the presence of the small term $N e^{-N}$. We now state a number of further consequences of the above results, which complement and improve various criteria that have already appeared in the literature. Again we refer the reader to Remark~\ref{R:ELI Q P} for an important comment on the extended local integrability assumptions appearing below. \begin{corollary}[Kazamaki type conditions] \label{C:nonpred1} Each of the following conditions implies that $Z$ is a uniformly integrable martingale: \begin{enumerate} \item\label{C:nonpred1:1} The running supremum of ${A}^a$ is extended locally integrable on~$\lc0,\tau_\infty[\![$ under~$\mathbb{Q}$ for some $a\ne1$. \item\label{C:nonpred1:1'} The running supremum of ${B}^a$ is extended locally integrable on~$\lc0,\tau_\infty[\![$ under~$\mathbb{Q}$ for some $a\ne1$.
\item\label{C:nonpred1:2} ${\displaystyle \sup_{\sigma \in \mathcal{T}} \mathbb{E}_\mathbb{P}\left[\exp\left( \frac{1}{2} M_\sigma + \left( \log(1+x) - \frac{x^2 +2 x}{2(1+x)}\right) \boldsymbol 1_{x<0} * \mu^M_\sigma \right) \1{\sigma<\tau_0} \right] < \infty.}$ \item\label{C:nonpred1:2'} ${\displaystyle \left(\exp\left( \frac{1}{2} M_\sigma + \frac{1}{2}\left( (1+x) \log(1+x) - x\right) * \nu^M_\sigma \right) \1{\sigma<\tau_0} \right)_{\sigma \in \mathcal{T}}}$ is uniformly integrable. \item\label{C:nonpred1:3} $M$ is a uniformly integrable martingale and \begin{align} \mathbb{E}_\mathbb{P}\left[\exp\left( \frac{1}{2} M_{\tau_0-} + \left( \log(1+x) - \frac{x^2 +2 x}{2(1+x)}\right) \boldsymbol 1_{x<0} * \mu^M_{\tau_0-} \right) \right] &< \infty. \label{eq:NPN3b} \end{align} \item\label{C:nonpred1:3'} $M$ is a uniformly integrable martingale and \begin{align*} \mathbb{E}_\mathbb{P}\left[\exp\left( \frac{1}{2} M_{\tau_0-} + \frac{1}{2}\left( (1+x) \log(1+x) - x\right) * \nu^M_{\tau_0-} \right) \right] &< \infty. \end{align*} \item\label{C:nonpred1:4} $M$ satisfies $\Delta M \geq -1+\delta$ for some $\delta > 0$ and \begin{align} \label{eq:NPN4} \sup_{\sigma \in \mathcal{T}} \mathbb{E}_\mathbb{P}\left[ \exp\left(\frac{M_\sigma}{1+\delta} - \frac{1-\delta}{2 + 2 \delta} [M^c,M^c]\right) \1{\sigma<\tau_0} \right] < \infty. \end{align} \end{enumerate} \end{corollary} \begin{proof} For~\ref{C:nonpred1:1} and~\ref{C:nonpred1:1'}, take $U=A^a$ and $U=B^a$ in Corollary~\ref{C:NK1}\ref{C:NK1:2} and~\ref{C:NK1:5}, respectively. For~\ref{C:nonpred1:2} and~\ref{C:nonpred1:2'}, take $U=0$ and $a=1/2$ in Corollary~\ref{C:NK1}\ref{C:NK1:2} and~\ref{C:NK1:5}, and use the inequalities $\log(1+x) \leq (x^2 + 2x)/(2 +2x)$ for all $x \geq 0$. 
For~\ref{C:nonpred1:3} and~\ref{C:nonpred1:3'}, note that if $M$ is a uniformly integrable martingale then the exponential processes in~\ref{C:nonpred1:2} and~\ref{C:nonpred1:2'} are submartingales, thanks to the inequality $(1+x)\log(1+x)-x\ge0$ for all $x > -1$. Thus~\ref{C:nonpred1:3} implies that~\ref{C:nonpred1:2} holds, and~\ref{C:nonpred1:3'} implies that~\ref{C:nonpred1:2'} holds. Finally, due to the inequality \[ \log(1+x) - \frac{x^2/(1+\delta) + x}{1+x} = \frac{1}{1+\delta} \int_0^x \frac{-y}{(1+y)^2} (1-\delta+y) \mathrm{d} y \le 0 \] for all $x \ge -1+\delta$, \ref{C:nonpred1:4} implies that Corollary~\ref{C:NK1}\ref{C:NK1:2} holds with $a=1/(1+\delta)$ and $U=0$. \end{proof} \begin{remark} We make the following observations concerning Corollary~\ref{C:nonpred1}: \begin{itemize} \item The condition in Corollary~\ref{C:nonpred1}\ref{C:nonpred1:1} is sufficient but not necessary for the conclusion, as Example~\ref{ex:5 16} below illustrates. Similarly, it can be shown that the condition in Corollary~\ref{C:nonpred1}\ref{C:nonpred1:1'} is not necessary for $Z$ to be a uniformly integrable martingale. However, the condition in Theorem~\ref{T:NK}\ref{T:NK:3} implies that $B^a$ is extended locally integrable on~$\lc0,\tau_\infty[\![$ under~$\mathbb{Q}$. In view of the ``code book'', this can be seen by piecing together Lemmas~\ref{L:convYX} and~\ref{L:ABdecomp}, Theorem~\ref{T:convYX}, and~\eqref{eq:VV}. \item The uniform integrability of $M$ is needed to argue that \eqref{eq:NPN3b} implies~\ref{C:nonpred1:2} in Corollary~\ref{C:nonpred1}. Even if $M$ is continuous this implication is false in general; see \citet{Ruf_Novikov} for examples. \qed \end{itemize} \end{remark} Corollary~\ref{C:nonpred1}\ref{C:nonpred1:3} appears already in Proposition~I.3 in \citet{Lepingle_Memin_integrabilite}. Corollary~\ref{C:nonpred1}\ref{C:nonpred1:4} with the additional assumption that $\delta \leq 1$ implies Proposition~I.6 in \citet{Lepingle_Memin_integrabilite}.
Also, conditions \ref{C:nonpred1:3}, \ref{C:nonpred1:3'} and (a somewhat weaker version of) \ref{C:nonpred1:4} have appeared in \cite{Yan_1982}. In particular, if $\Delta M \geq 0$ and $\delta = 1$, \eqref{eq:NPN4} yields Kazamaki's condition verbatim. \begin{example} \label{ex:5 16} Let $Y$ be a nonnegative random variable such that $\mathbb{E}_\mathbb{P}[Y]<\infty$ and $\mathbb{E}_\mathbb{P}[(1+Y)\log(1+Y)]=\infty$, let $\Theta$ be a $\{0,1\}$--valued random variable with $\mathbb{P}(\Theta=1)=1/(1+2\mathbb{E}_\mathbb{P}[Y])$, and let $W$ be a standard Brownian motion. Suppose $Y$, $\Theta$, $W$ are pairwise independent. Now define \[ M = \left(Y\Theta - \frac{1}{2}(1-\Theta) + W - W_1\right)\boldsymbol 1_{[\![ 1, \infty[\![}. \] Then $M$ is a martingale under its natural filtration with $\Delta M\ge-1/2$, and the process $Z={\mathcal E}(M)$ is not a uniformly integrable martingale, as it tends to zero as $t$ tends to infinity. However, \[ ((1+x)\log(1+x) - x) * \nu^M_1=\mathbb{E}_\mathbb{P}[(1+Y)\log(1+Y)-Y]=\infty, \] which implies that conditions \ref{C:NK1:4}--\ref{C:NK1:5} in Corollary~\ref{C:NK1} are satisfied for any $a>1$, apart from the finiteness of $(1+x)\log(1+x) \boldsymbol 1_{x\geq \kappa}* \nu^M$ for some $\kappa>0$. Consider now the process $\widetilde Z=(Z_{t\wedge1})_{t\ge0} = \boldsymbol 1_{[\![ 0, 1[\![} + \left(Y\Theta + \frac{1}{2}(1+\Theta)\right) \boldsymbol 1_{[\![ 1, \infty[\![}$. This is a uniformly integrable martingale. Nonetheless, \eqref{T:NK:eq1} fails for any $a<1$. We now consider the process $$\widetilde A^a = \left(\log(1+\Delta M_1) - (1-a) \frac{\Delta M_1}{1+\Delta M_1}\right) \boldsymbol 1_{[\![ 1, \infty[\![}$$ for each $a \in \mathbb{R}$ as in \eqref{eq:A}. Then $\mathbb{E}_\mathbb{P}[\widetilde A_1^a \widetilde Z_1] = \infty$, which implies that $\widetilde A^a$ is not extended locally integrable under~$\mathbb{Q}$ for each $a \in \mathbb{R}$, as can be deduced from Lemma~\ref{L:extended locally}.
In particular, the condition in Corollary~\ref{C:nonpred1}\ref{C:nonpred1:1} is not satisfied. \qed \end{example} \subsection{Further characterizations} We now present a number of other criteria that result from our previous analysis, most of which seem to be new. Again, the reader should keep Remark~\ref{R:ELI Q P} in mind. \begin{theorem}[Necessary and sufficient conditions based on extended localization]\label{T:further} Let $\epsilon \in \{-1,1\}$, $\eta \in (0,1)$, and $\kappa>0$. Then the following conditions are equivalent: \begin{enumerate}[label={\rm(\alph*)},ref={\rm(\alph*)}] \item\label{T:further:1} $Z$ is a uniformly integrable martingale. \item\label{T:further:2} $(\epsilon N)^+$ is extended locally integrable on~$\lc0,\tau_\infty[\![$ under $\mathbb{Q}$. \item\label{T:further:3} $[M^c, M^c] + (x^2 \wedge |x|) * \nu^M$ is extended locally integrable on~$\lc0,\tau_\infty[\![$ under $\mathbb{Q}$. \end{enumerate} Moreover, the following two conditions are equivalent: \begin{enumerate}[label={\rm(\alph*)},ref={\rm(\alph*)}, resume] \item\label{T:further:4} $Z$ is a uniformly integrable martingale and $((\Delta M)^- / (1+\Delta M))^2$ is extended locally integrable on~$\lc0,\tau_\infty[\![$ under $\mathbb{Q}$. \item\label{T:further:5} ${\displaystyle [M^c, M^c] + {(x/(1+x))^2} * \mu^M}$ is extended locally integrable on~$\lc0,\tau_\infty[\![$ under $\mathbb{Q}$. \end{enumerate} Furthermore, the following conditions are equivalent: \begin{enumerate}[label={\rm(\alph*)},ref={\rm(\alph*)},resume] \item\label{T:further:6} $Z$ is a uniformly integrable martingale and $(1+x)\log(1+x) \boldsymbol 1_{x>\kappa} * \nu^M$ is extended locally integrable on~$\lc0,\tau_\infty[\![$ under $\mathbb{Q}$. \item\label{T:further:7} ${\displaystyle [M^c,M^c] + ((1+x)\log(1+x)-x) * \nu^M}$ is extended locally integrable on~$\lc0,\tau_\infty[\![$ under $\mathbb{Q}$. 
\item\label{T:further:8} $(1+x)\log(1+x) \boldsymbol 1_{x>\kappa} * \nu^M$ is finite-valued, $-x \boldsymbol 1_{x<-\eta} * \nu^M$ is extended locally integrable on~$\lc0,\tau_\infty[\![$ under $\mathbb{Q}$, and $(\epsilon L)^+$ is extended locally integrable under $\mathbb{Q}$. \end{enumerate} \end{theorem} \begin{proof}[Proof of Theorem~\ref{T:further}] Once again we use the ``code book'' from Subsection~\ref{S:method} freely. A calculation using Theorem~\ref{T:reciprocal} yields \[ [M^c, M^c] + (x^2 \wedge |x|) * \nu^M = \langle N^c,N^c\rangle + \left(\frac{y^2}{1+y}\wedge|y|\right)*\widehat\nu^N. \] The equivalence of \ref{T:further:1}--\ref{T:further:3} now follows from Theorem~\ref{T:conv} using the inequalities $(y^2\wedge|y|)/2 \le (y^2/(1+y))\wedge|y| \le 2(y^2\wedge|y|)$. Since $(\Delta N)^+=(\Delta M)^-/(1+\Delta M)$ and $\Delta N>-1$, \ref{T:further:4} holds if and only if $N$ converges and $(\Delta N)^2$ is extended locally integrable under~$\mathbb{Q}$. By Corollary~\ref{C:convergence_QV} and Lemma~\ref{L:ELI}\ref{L:ELI:3}, this holds if and only if $[N,N]$ is extended locally integrable under~$\mathbb{Q}$. Since $[N,N]=\langle M^c,M^c\rangle+(x/(1+x))^2*\mu^M$, this is equivalent to \ref{T:further:5}. To prove the equivalence of \ref{T:further:6}--\ref{T:further:8}, first note that $((1+x)\log(1+x) - x) * \nu^M$ is finite-valued under either condition, and hence so is~$L$. Note also the equalities \begin{align*} (1+x)\log(1+x)\boldsymbol 1_{x\ge \kappa}*\nu^M &=-\log(1+y)\boldsymbol 1_{y\le-\kappa/(1+\kappa)}*\widehat\nu^N;\\ [M^c,M^c] + ((1+x)\log(1+x)-x) * \nu^M &= [N^c,N^c] + (y-\log(1+y)) * \widehat\nu^N;\\ -x\boldsymbol 1_{x<-\eta}*\nu^M &= y\boldsymbol 1_{y>\eta/(1-\eta)}*\widehat\nu^N. \end{align*} With the identifications in the ``code book'', \ref{T:further:6} now states that the event~\eqref{T:convYX:3} has full probability under~$\mathbb{Q}$. 
Moreover, \ref{T:further:7} states that the event~\eqref{T:convYX:2} has full probability under $\mathbb{Q}$. Thanks to Lemma~\ref{L:convYX}, \ref{T:further:8} states that \eqref{T:convYX:4} has full probability under $\mathbb{Q}$. Thus all three conditions are equivalent by Theorem~\ref{T:convYX}. \end{proof} \begin{remark} We make the following observations concerning Theorem~\ref{T:further}: \begin{itemize} \item If the jumps of $M$ are bounded away from $-1$, that is, $\Delta M \geq -1+\delta$ for some $\delta > 0$, then the second condition in Theorem~\ref{T:further}\ref{T:further:4} is automatically satisfied. \item If $[M^c, M^c]_{\tau_0-} + (x^2 \wedge |x|) * \nu^M_{\tau_0-} $ is extended locally integrable then $\tau_0 = \infty$ and $Z_\infty>0$ by Theorem~\ref{T:conv}. Contrast this with the condition in Theorem~\ref{T:further}\ref{T:further:3}. \qed \end{itemize} \end{remark} The implication \ref{T:further:3} $\Longrightarrow$ \ref{T:further:1} of Theorem~\ref{T:further} is proven in Theorem~12 in \cite{Kabanov/Liptser/Shiryaev:1979} if the process in Theorem~\ref{T:further}\ref{T:further:3} is not only extended locally integrable, but bounded. \section{Counterexamples} \label{S:examp} In this section we collect several examples of local martingales that illustrate the wide range of asymptotic behavior that can occur. This showcases the sharpness of the results in Section~\ref{S:convergence}. In particular, we focus on the role of the extended local integrability of the jumps. \subsection{Random walk with large jumps} \label{A:SS:lack} Choose a sequence $(p_n)_{n \in \mathbb{N}}$ of real numbers such that $p_n\in(0,1)$ and $\sum_{n =1}^\infty p_n < \infty$. Moreover, choose a sequence $(x_n)_{n \in \mathbb{N}}$ of real numbers. Then let $(\Theta_n)_{n \in \mathbb{N}}$ be a sequence of independent random variables with $\mathbb{P}(\Theta_n = 1) = p_n$ and $\mathbb{P}(\Theta_n = 0) = 1-p_n$ for all $n \in \mathbb{N}$.
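Since $\sum_{n=1}^\infty p_n<\infty$, the Borel--Cantelli lemma (used repeatedly in this subsection) guarantees that only finitely many of the $\Theta_n$ are nonzero almost surely. The following seeded simulation sketch checks this numerically; the particular sequence $p_n=2^{-n}$, for which $\sum_n p_n = 1$, is an illustrative assumption and not taken from the text:

```python
import random

# Seeded sanity check of the Borel-Cantelli argument.  The sequence
# p_n = 2^{-n} is an illustrative assumption (not from the text); it
# satisfies p_n in (0,1) and sum_n p_n = 1 < infinity.
def count_nonzero_thetas(p, horizon, trials, seed=0):
    rng = random.Random(seed)
    return [sum(1 for n in range(1, horizon + 1) if rng.random() < p(n))
            for _ in range(trials)]

p = lambda n: 0.5 ** n
counts = count_nonzero_thetas(p, horizon=200, trials=10_000)
avg = sum(counts) / len(counts)
# The expected number of nonzero Theta_n equals sum_n p_n = 1 (up to
# truncation at the horizon), and the count is almost surely finite.
print(max(counts), round(avg, 1))
```

In every trial only a handful of the $\Theta_n$ are nonzero, matching the almost-sure statement.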
Now define a process $X$ by \begin{align*} X_t = \sum_{n =1}^{[t]} x_n \left(1 - \frac{\Theta_n}{p_n}\right), \end{align*} where $[t]$ is the largest integer less than or equal to $t$, and let $\mathbb{F}$ be its natural filtration. Clearly $X$ is a locally bounded martingale. The Borel-Cantelli lemma implies that $\Theta_n$ is nonzero for only finitely many $n \in \mathbb{N}$, almost surely, whence for all sufficiently large $n \in \mathbb{N}$ we have $\Delta X_n = x_n$. By choosing a suitable sequence $(x_n)_{n \in \mathbb{N}}$ one may therefore achieve essentially arbitrary asymptotic behavior. This construction was inspired by an example due to George Lowther that appeared on his blog Almost Sure on December 20, 2009. \begin{lemma} \label{L:ex1} With the notation of this subsection, $X$ satisfies the following properties: \begin{enumerate} \item\label{L:ex1:1} $\lim_{t \to \infty} X_t$ exists in $\mathbb{R}$ if and only if $\lim_{m\to \infty} \sum_{n = 1}^m x_n$ exists in $\mathbb{R}$. \item\label{L:ex1:3} $(1 \wedge x^2) * \mu^X_{\infty-} < \infty$ if and only if $[X,X]_{\infty-} < \infty$ if and only if $\sum_{n = 1}^\infty x_n^2 < \infty$. \item\label{L:ex1:2} $X$ is a semimartingale on $[0,\infty]$ if and only if $(x^2\wedge|x|)*\nu^X_{\infty-}<\infty$ if and only if $X$ is a uniformly integrable martingale if and only if $\sum_{n = 1}^\infty |x_n| < \infty$. \end{enumerate} \end{lemma} \begin{proof} The statements in \ref{L:ex1:1} and \ref{L:ex1:3} follow from the Borel-Cantelli lemma. For \ref{L:ex1:2}, note that $|X_t| \le \sum_{n\in\mathbb{N}} |x_n| (1 + \Theta_n/p_n)$ for all $t\ge0$. 
Since \[ \mathbb{E}\left[\sum_{n = 1}^\infty |x_n| \left(1 + \frac{\Theta_n}{p_n}\right)\right]=2\sum_{n = 1}^\infty |x_n|, \] the condition $\sum_{n = 1}^\infty |x_n| < \infty$ implies that $X$ is a uniformly integrable martingale, which implies that~$X$ is a special semimartingale on $[0,\infty]$, or equivalently that $(x^2\wedge|x|)*\nu^X_{\infty-}<\infty$ (see Proposition~II.2.29 in~\cite{JacodS}), which implies that $X$ is a semimartingale on $[0,\infty]$. It remains to show that this implies $\sum_{n = 1}^\infty |x_n| < \infty$. We prove the contrapositive, and assume $\sum_{n = 1}^\infty |x_n| = \infty$. Consider the bounded predictable process $H = \sum_{n=1}^\infty (\boldsymbol 1_{x_n>0} - \boldsymbol 1_{x_n<0}) \boldsymbol 1_{[\![ n]\!]}$. If $X$ were a semimartingale on $[0,\infty]$, then $(H\cdot X)_{\infty-}$ would be well-defined and finite. However, by Borel-Cantelli, $H\cdot X$ has the same asymptotic behavior as $\sum_{n=1}^\infty |x_n|$ and thus diverges. Hence $X$ is not a semimartingale on~$[0,\infty]$. \end{proof} Martingales of the above type can be used to illustrate that much of Theorem~\ref{T:conv} and its corollaries fails if one drops extended local integrability of $(\Delta X)^-\wedge X^-$. We now list several such counterexamples. \begin{example} \label{E:P1} We use the notation of this subsection. \begin{enumerate} \item \label{E:P1:1} Let $x_n = (-1)^n/\sqrt{n}$ for all $n \in \mathbb{N}$. Then \[ \mathbb{P}\Big(\lim_{t \to \infty} X_t \text{ exists in $\mathbb{R}$}\Big) = \mathbb{P}\left([X,X]_{\infty-} = \infty\right) = \mathbb{P}\left(x^2 \boldsymbol 1_{|x|<1}* \nu^X_{\infty-} = \infty\right) = 1. \] Thus the implications \ref{T:conv:a} $\Longrightarrow$ \ref{T:conv:c} and \ref{T:conv:a} $\Longrightarrow$ \ref{T:conv:f} in Theorem~\ref{T:conv} fail without the integrability condition on $(\Delta X)^-\wedge X^-$. 
Furthermore, by setting $x_1=0$ but leaving $x_n$ for all $n \geq 2$ unchanged, and ensuring that $p_n\ne x_n/(1+x_n)$ for all $n \in \mathbb{N}$, we have $\Delta X\ne-1$. Thus, ${\mathcal E}(X)_t=\prod_{n=1}^{[t]} (1+\Delta X_n)$ is nonzero for all $t$. Since $\Delta X_n=x_n$ for all sufficiently large $n \in \mathbb{N}$, ${\mathcal E}(X)$ will eventually be of constant sign. Moreover, for any $n_0\in\mathbb{N}$ we have \[ \lim_{m\to\infty}\sum_{n=n_0}^m \log(1+x_n)\le \lim_{m\to\infty}\sum_{n=n_0}^m \left(x_n - \frac{x_n^2}{4}\right)=-\infty. \] It follows that $\mathbb{P}( \lim_{t\to\infty}{\mathcal E}(X)_t=0) = 1$, showing that the implication \ref{T:conv:a} $\Longrightarrow$ \ref{T:conv:g} in Theorem~\ref{T:conv} fails without the integrability condition on $(\Delta X)^-\wedge X^-$. \item Part~\ref{E:P1:1} illustrates that the implications \ref{T:conv:a'} $\Longrightarrow$ \ref{T:conv:c}, \ref{T:conv:a'} $\Longrightarrow$ \ref{T:conv:f}, and \ref{T:conv:a'} $\Longrightarrow$ \ref{T:conv:g} in Theorem~\ref{T:conv} fail without the integrability condition on $(\Delta X)^-\wedge X^-$. We now let $x_n = 1$ for all $n \in \mathbb{N}$. Then $\mathbb{P}(\lim_{t \to \infty} X_t = \infty) = 1$, which illustrates that also \ref{T:conv:a'} $\Longrightarrow$ \ref{T:conv:a} in that theorem fails without the integrability condition. \item We now fix a sequence $(x_n)_{n \in \mathbb{N}}$ such that $|x_n| = 1/n$ but $g: m \mapsto \sum_{n = 1}^m x_n$ oscillates with $\liminf_{m \to \infty} g(m) = -\infty$ and $\limsup_{m \to \infty} g(m) = \infty$. This setup illustrates that \ref{T:conv:f} $\Longrightarrow$ \ref{T:conv:a} and \ref{T:conv:f} $\Longrightarrow$ \ref{T:conv:a'} in Theorem~\ref{T:conv} fail without the integrability condition on $(\Delta X)^- \wedge X^-$. Moreover, by Lemma~\ref{L:ex1}\ref{L:ex1:2} the implication \ref{T:conv:f} $\Longrightarrow$ \ref{T:conv:c} fails without the additional integrability condition.
The same is true for the implication \ref{T:conv:f} $\Longrightarrow$ \ref{T:conv:g}, since $\log \mathcal{E}(X) \leq X$. \item Let $x_n=e^{(-1)^n/\sqrt{n}}-1$ and suppose $p_n\ne x_n/(1+x_n)$ for all $n \in \mathbb{N}$ to ensure $\Delta X\ne-1$. Then \[ \mathbb{P}\Big( \lim_{t\to\infty}{\mathcal E}(X)_t \text{ exists in }\mathbb{R}\setminus\{0\}\Big) = \mathbb{P}\Big(\lim_{t \to \infty} X_t = \infty\Big) = \mathbb{P}\Big([X,X]_{\infty-}= \infty\Big) = 1. \] Indeed, $\lim_{m\to\infty}\sum_{n=1}^m \log(1+x_n)=\lim_{m\to\infty}\sum_{n=1}^m (-1)^n/\sqrt{n}$ exists in $\mathbb{R}$, implying that ${\mathcal E}(X)$ converges to a nonzero limit. Moreover, \[ \lim_{m\to\infty}\sum_{n=1}^m x_n \ge \lim_{m\to\infty}\sum_{n=1}^m \left(\frac{(-1)^n}{\sqrt{n}}+\frac{1}{4n}\right)=\infty, \] whence $X$ diverges. Since $\sum_{n=1}^\infty x_n^2 \ge \sum_{n=1}^\infty 1/(4n)=\infty$, we obtain that $[X,X]$ also diverges. Thus the implications \ref{T:conv:g} $\Longrightarrow$ \ref{T:conv:a} and \ref{T:conv:g} $\Longrightarrow$ \ref{T:conv:f} in Theorem~\ref{T:conv} fail without the integrability condition on $(\Delta X)^-\wedge X^-$. So does the implication \ref{T:conv:g} $\Longrightarrow$ \ref{T:conv:c} due to Lemma~\ref{L:ex1}\ref{L:ex1:2}. Finally, note that the implication \ref{T:conv:g} $\Longrightarrow$ \ref{T:conv:a'} holds independently of any integrability conditions since $\log \mathcal{E}(X) \leq X$. \item Let $x_n=-1/n$ for all $n\in\mathbb{N}$. Then $[X,X]_{\infty-}<\infty$ and $(\Delta X)^-$ is extended locally integrable, but $\lim_{t\to\infty}X_t=-\infty$. This shows that the condition involving limit superior is needed in Theorem~\ref{T:conv}\ref{T:conv:f}, even if $X$ is a martingale. We further note that if $X$ is Brownian motion, then $\limsup_{t\to\infty}X_t>-\infty$ and $(\Delta X)^-=0$, but $[X,X]_{\infty-}=\infty$. Thus some condition involving the quadratic variation is also needed in Theorem~\ref{T:conv}\ref{T:conv:f}. 
\item Note that choosing $x_n= (-1)^n/n$ for each $n\in \mathbb{N}$ yields a locally bounded martingale~$X$ such that $[X,X]_{\infty-}<\infty$ and $X_\infty = \lim_{t \to \infty} X_t$ exists, but $X$ is not a semimartingale on $[0,\infty]$. This contradicts statements in the literature which assert that a semimartingale that has a limit is a semimartingale on the extended interval. This example also illustrates that the implications \ref{T:conv1:a} $\Longrightarrow$ \ref{C:conv001:d} and \ref{T:conv1:a} $\Longrightarrow$ \ref{C:conv001:e} in Corollary~\ref{C:conv001} fail without an additional integrability condition. For the sake of completeness, Example~\ref{ex:semimartingale} illustrates that the integrability condition in Corollary~\ref{C:conv001}\ref{C:conv001:e} is not redundant either. \qed \end{enumerate} \end{example} \begin{remark} Many other types of behavior can be generated within the setup of this subsection. For example, by choosing the sequence $(x_n)_{n \in \mathbb{N}}$ appropriately we can obtain a martingale $X$ that converges nowhere, but satisfies $\mathbb{P}(\sup_{t\geq 0} |X_t| < \infty)=1$. We can also choose $(x_n)_{n \in \mathbb{N}}$ so that, additionally, either $\mathbb{P}([X,X]_{\infty-} = \infty)=1$ or $\mathbb{P}([X,X]_{\infty-} <\infty)=1$. \qed \end{remark} \begin{example} \label{ex:ui} The uniform integrability assumption in Corollary~\ref{C:convYX2} cannot be weakened to $L^1$--boundedness. To see this, within the setup of this subsection, let $x_n=-1/2$ for all $n\in\mathbb{N}$. Then $\Delta X\ge-1/2$. Moreover, the sequence $(p_n)_{n\in\mathbb{N}}$ can be chosen so that \begin{align} \label{ex:ui:eq1} \sup_{\sigma\in{\mathcal T}} \mathbb{E}\left[ e^{c\log(1+x) * (\mu^X-\nu^X)_\sigma} \right] < \infty \end{align} for each $c<1$, while, clearly, $\mathbb{P}(\lim_{t \to \infty} X_t = -\infty)=1$. This shows that the implication \ref{T:convYX2:b} $\Longrightarrow$ \ref{T:convYX2:a} in Corollary~\ref{C:convYX2}, with $c<1$, fails without the tail condition on $\nu^X$.
To obtain~\eqref{ex:ui:eq1}, note that $Y=\log(1+x) * (\mu^X-\nu^X)$ is a martingale, so that $e^{cY}$ is a submartingale, whence $\mathbb{E}[e^{cY_\sigma}]$ is nondecreasing in~$\sigma$. Since the jumps of $X$ are independent, this yields \begin{align*} \sup_{\sigma\in{\mathcal T}} \mathbb{E}\left[ e^{c \log(1+x) * (\mu^X-\nu^X)_\sigma} \right] \le \prod_{n=1}^\infty \mathbb{E}\left[ (1+\Delta X_n)^c \right] e^{-c\,\mathbb{E}[\log(1+\Delta X_n)]} =: \prod_{n=1}^\infty e^{\kappa_n}. \end{align*} We have $\kappa_n\ge0$ by Jensen's inequality, and a direct calculation yields \begin{align*} \kappa_n =& \log\mathbb{E}\left[ (1+\Delta X_n)^{c} \right] - c\,\mathbb{E}[\log(1+\Delta X_n)] \le \log\left( 2 p_n^{1-c} +1\right) - c\,p_n \log(1+p_n^{-1}) \end{align*} for all $c <1$. Let us now fix a sequence $(p_n)_{n \in \mathbb{N}}$ such that the following inequalities hold for all $n \in \mathbb{N}$: \begin{align*} p_n \log(1+p_n^{-1}) \leq \frac{1}{n^3} \qquad \text{and} \qquad p_n \leq \frac{1}{2^n}\left( e^{n^{-2}} -1\right)^n. \end{align*} This is always possible. Such a sequence satisfies $\sum_{n\in\mathbb{N}}p_n<\infty$ and results in $\kappa_n \leq 2/n^2$ for all $n \geq -c \vee (1/(1-c))$, whence $\sum_{n\in\mathbb{N}}\kappa_n<\infty$. This yields the assertion. \qed \end{example} \subsection{Quasi-left continuous one-jump martingales} \label{A:SS:one} We now present examples based on a martingale $X$ which, unlike in Subsection~\ref{A:SS:lack}, has a single jump that occurs at a totally inaccessible stopping time. In particular, the findings of Subsection~\ref{A:SS:lack} do not rely on the fact that the jump times there are predictable. Let $\lambda, \gamma:\mathbb R_+\to\mathbb R_+$ be two continuous nonnegative functions. Let $\Theta$ be a standard exponential random variable and define $\rho = \inf\{t\ge 0: \int_0^t\lambda(s) ds \ge \Theta\}$.
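As a sanity check on this construction of $\rho$, the following sketch takes the intensity $\lambda(s)=1/(1+s)^2$ (an illustrative choice that also appears later in this subsection), for which $\int_0^t\lambda(s)\mathrm{d} s=t/(1+t)$ increases to $1$; hence $\rho<\infty$ exactly when $\Theta<1$, and $\mathbb{P}(\rho=\infty)=e^{-\int_0^\infty\lambda(s)\mathrm{d} s}=e^{-1}$:

```python
import math
import random

# Sketch of the construction of rho for the illustrative choice
# lambda(s) = 1/(1+s)^2 (this parameter choice is for illustration).
# Here Lambda(t) = int_0^t lambda(s) ds = t/(1+t) increases to 1, so
# rho = inf{t >= 0 : Lambda(t) >= Theta} is finite iff Theta < 1,
# in which case Lambda(rho) = Theta gives rho = Theta / (1 - Theta).
def sample_rho(rng):
    theta = rng.expovariate(1.0)  # standard exponential random variable
    return theta / (1.0 - theta) if theta < 1.0 else math.inf

rng = random.Random(1)
samples = [sample_rho(rng) for _ in range(100_000)]
frac_infinite = sum(1 for r in samples if r == math.inf) / len(samples)
# P(rho = infinity) = exp(-int_0^infty lambda(s) ds) = exp(-1) ~ 0.3679
print(round(frac_infinite, 2))
```

The empirical frequency of $\{\rho=\infty\}$ is close to $e^{-1}\approx0.368$, consistent with the survival probability of the first jump time.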
Let $\mathbb{F}$ be the filtration generated by the indicator process $\boldsymbol 1_{[\![\rho,\infty[\![}$, and define a process $X$ by \[ X_t = \gamma(\rho)\1{\rho\le t} - \int_0^t \gamma(s)\lambda(s)\1{s<\rho}\mathrm{d} s. \] Note that $X$ is the integral of $\gamma$ with respect to $\1{\rho\le t}-\int_0^{t\wedge\rho}\lambda(s)\mathrm{d} s$ and is a martingale. Furthermore, $\rho$ is totally inaccessible. This construction is sometimes called the {\em Cox construction}. Moreover, the jump measure $\mu^X$ and corresponding compensator $\nu^X$ satisfy \[ F*\mu^X = F(\rho,\gamma(\rho))\boldsymbol 1_{[\![ \rho, \infty[\![}, \qquad F*\nu^X_t = \int_0^{t\wedge\rho}F(s,\gamma(s))\lambda(s) \mathrm{d} s \] for all $t \geq 0$, where $F$ is any nonnegative predictable function. We will study such martingales when $\lambda$ and $\gamma$ possess certain integrability properties, such as the following: \begin{align} &\int_0^\infty\lambda(s)\mathrm{d} s <\infty; \label{A:eq:int1}\\ &\int_0^\infty\gamma(s)\lambda(s)\mathrm{d} s =\infty; \label{A:eq:int2}\\ &\int_0^\infty (1+\gamma(s))^c\lambda(s) \mathrm{d} s <\infty \quad \text{ for all }c<1. \label{A:eq:int3} \end{align} For instance, $\lambda(s) = 1/(1+s)^2$ and $\gamma(s)=s$ satisfy all three properties. \begin{example} \label{E:2} The limit superior condition in Theorem~\ref{T:conv} is essential, even if $X$ is a local martingale. Indeed, with the notation of this subsection, let $\lambda$ and $\gamma$ satisfy \eqref{A:eq:int1} and \eqref{A:eq:int2}. Then \begin{align*} &\mathbb{P}\left([X,X]_{\infty-} + (x^2\wedge 1) * \nu^X_{\infty-} < \infty\right) = \mathbb{P}\Big(\sup_{t\ge 0} X_t < \infty\Big)=1; \\ &\mathbb{P} \Big(\limsup_{t\to \infty} X_t = - \infty\Big) > 0. \end{align*} This shows that finite quadratic variation does not prevent a martingale from diverging; in fact, $X$ satisfies $\{[X,X]_{\infty-} =0\} = \{\limsup_{t\to \infty}X_t = - \infty\}$.
The example also shows that one cannot replace $(x^2\wedge |x|)*\nu^X_{\infty-}$ by $(x^2\wedge 1) * \nu^X_{\infty-}$ in \eqref{T:conv2:2}. Finally, it illustrates, in the quasi-left continuous case, that diverging local martingales need not oscillate, in contrast to continuous local martingales. To prove the above claims, first observe that $[X,X]_{\infty-} = \gamma(\rho)^2 \1{\rho<\infty} < \infty$ and $\sup_{t\ge 0} X_t\le\gamma(\rho)\1{\rho<\infty}<\infty$ almost surely. Next, we get $\mathbb{P}(\rho=\infty) = \exp({-\int_0^\infty\lambda(s) \mathrm{d} s})>0$ in view of~\eqref{A:eq:int1}. We conclude by observing that $\lim_{t \to \infty} X_t = - \lim_{t \to \infty} \int_0^t \gamma(s)\lambda(s) \mathrm{d} s = -\infty$ on the event $\{\rho=\infty\}$ due to~\eqref{A:eq:int2}. \qed \end{example} \begin{example} Example~\ref{E:2} can be refined to yield a martingale that has a single positive jump, diverges without oscillating, and yet has infinite quadratic variation. To this end, extend the probability space to include a Brownian motion $B$ that is independent of~$\Theta$, and suppose $\mathbb{F}$ is generated by $(\boldsymbol 1_{[\![\rho,\infty[\![},B)$. The construction of $X$ is unaffected by this. In addition to \eqref{A:eq:int1} and \eqref{A:eq:int2}, let $\lambda$ and $\gamma$ satisfy \begin{equation} \label{eq:E:3:1} \lim_{t\to\infty} \frac{\int_0^t \gamma(s)\lambda(s) \mathrm{d} s }{\sqrt{2 t \log \log t}} = \infty. \end{equation} For instance, take $\lambda(s)=1/(1+s)^2$ and $\gamma(s)=1/\lambda(s)$. Then the martingale $X' = B + X$ satisfies \begin{equation} \label{eq:E:3:2} \mathbb{P}\left([X',X']_{\infty-} = \infty\right)= 1 \qquad\text{and}\qquad \mathbb{P}\Big(\sup_{t\ge 0} X'_t < \infty\Big) > 0, \end{equation} so that, in particular, the inclusion $\{ [X',X']_\infty=\infty\} \subset \{\sup_{t\ge 0} X'_t=\infty\}$ does not hold in general. To prove~\eqref{eq:E:3:2}, first note that $[X',X']_{\infty-} \geq [B,B]_{\infty-} = \infty$.
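For the instance suggested above, condition~\eqref{eq:E:3:1} is immediate: there $\gamma(s)\lambda(s) \equiv 1$, so that
\[
\frac{\int_0^t \gamma(s)\lambda(s)\,\mathrm{d} s}{\sqrt{2t\log\log t}} = \frac{t}{\sqrt{2t\log\log t}} \longrightarrow \infty \qquad\text{as } t \to \infty.
\]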
Next, \eqref{eq:E:3:1} and the law of the iterated logarithm yield, on the event $\{\rho=\infty\}$, \[ \limsup_{t \to \infty} X'_t = \limsup_{t \to \infty} \Big(B_t - \int_0^t \gamma(s)\lambda(s) \mathrm{d} s\Big) \leq \limsup_{t \to \infty} \Big(2 \sqrt{2 t \log \log t} - \int_0^t \gamma(s)\lambda(s) \mathrm{d} s\Big) = -\infty. \] Since $\mathbb{P}(\rho=\infty)>0$, this implies $\mathbb{P}(\sup_{t\geq 0} X'_t < \infty)>0$. \qed \end{example} \begin{example} \label{ex:semimartingale} The semimartingale property does not imply that $X^- \wedge (\Delta X)^-$ is extended locally integrable. With the notation of this subsection, consider the process $\widehat{X} = -\gamma(\rho) \boldsymbol 1_{[\![ \rho, \infty[\![}$, which is clearly a semimartingale on $[0,\infty]$. On $[0,\infty)$, it has the special decomposition $\widehat{X} = \widehat{M} - \widehat{A}$, where $\widehat{M} = -X$ and $\widehat{A} = \int_0^\cdot \gamma(s) \lambda(s) \boldsymbol 1_{\{s < \rho\}} \mathrm{d} s$. We have $\mathbb{P}(\widehat{A}_\infty = \infty) > 0$, and thus, by Corollary~\ref{C:conv2}, we see that the integrability condition in Corollary~\ref{C:conv001}\ref{C:conv001:e} is non-redundant. This example also illustrates that~\eqref{eq:XMA} does not hold in general. \qed \end{example} \begin{example}\label{ex:6.8} Also in the case where $X$ is quasi-left continuous, the uniform integrability assumption in Corollary~\ref{C:convYX2} cannot be weakened to $L^1$--boundedness. We again put ourselves in the setup of this subsection and suppose $\lambda$ and $\gamma$ satisfy \eqref{A:eq:int1}--\eqref{A:eq:int3}. Then, while $X$ diverges with positive probability, it nonetheless satisfies \begin{align} \label{A:eq:prop6.1} \sup_{\sigma\in{\mathcal T}} \mathbb{E}\left[ e^{c\log(1+x) * (\mu^X-\nu^X)_\sigma }\right] < \infty \end{align} for all $c<1$.
Indeed, if $c \leq 0$, then $$e^{c\log(1+x) * (\mu^X-\nu^X)_\sigma} \leq e^{|c| \log(1+x) * \nu^X_\rho} \leq e^{|c| \int_0^\infty \log(1+\gamma(s)) \lambda(s) \mathrm{d} s} < \infty $$ for all $\sigma \in {\mathcal T}$. If $c \in (0,1)$, the left-hand side of~\eqref{A:eq:prop6.1} is bounded above by \begin{align*} \sup_{\sigma \in \mathcal T} \mathbb{E}\left[e^{c\log(1+x)*\mu^X_\sigma}\right] &\le 1 + \sup_{\sigma \in \mathcal T} \mathbb{E}\left[(1+\gamma(\rho))^c\,\1{\rho\le\sigma}\right] \le 1 + \mathbb{E}\left[(1+x)^c*\mu^X_{\infty}\right] \\ &=1 + \mathbb{E}\left[(1+x)^c*\nu^X_{\infty}\right] \le 1 + \int_0^\infty(1+\gamma(s))^c\lambda(s)\,\mathrm{d} s < \infty, \end{align*} due to \eqref{A:eq:int3}. \qed \end{example} \subsection{Counterexamples for Novikov-Kazamaki conditions} \label{SS:counter NK} We now use the constructions of the previous subsections to give two examples illustrating that the uniform integrability assumption in Corollary~\ref{C:NK1}\ref{C:NK1:5} cannot be weakened to $L^1$--boundedness. In the first example we consider predictable---in fact, deterministic---jump times, while in the second example there is a single jump that occurs at a totally inaccessible stopping time. \begin{example} Let $(\xi_n)_{n\in\mathbb{N}}$ be a sequence of independent random variables, defined on some probability space $(\Omega,{\mathcal F},\mathbb{P})$, such that \begin{align*} \mathbb{P}\left( \xi_n = 1\right) &= \frac{1-p_n}{2}; \\ \mathbb{P}\left( \xi_n = -\frac{1-p_n}{1+p_n}\right) &= \frac{1+p_n}{2}, \end{align*} where $(p_n)_{n\in\mathbb{N}}$ is the sequence from Example~\ref{ex:ui}. Let $M$ be given by $M_t=\sum_{n =1}^{[t]} \xi_n$ for all $t \geq 0$, which is a martingale with respect to its natural filtration. We claim the following: The local martingale $Z={\mathcal E}(M)$ satisfies $\sup_{\sigma\in{\mathcal T}}\mathbb{E}_\mathbb{P}[e^{B_\sigma^a}]<\infty$ for all $a >0$, but nonetheless fails to be a uniformly integrable martingale.
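As a quick check that $M$ is indeed a martingale, note that each increment $\xi_n$ is centered:
\[
\mathbb{E}[\xi_n] = 1 \cdot \frac{1-p_n}{2} - \frac{1-p_n}{1+p_n} \cdot \frac{1+p_n}{2} = \frac{1-p_n}{2} - \frac{1-p_n}{2} = 0.
\]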
Let $\mathbb{Q}$ be the F\"ollmer associated with $Z$; see Theorem~\ref{T numeraire}. The process $N$ in~\eqref{eq:N} is then a pure jump martingale under~$\mathbb{Q}$, constant between integer times, with $\Delta N_n=-1/2$ if $\xi_n=1$, and $\Delta N_n=(1-p_n)/(2p_n)$ otherwise for each $n \in \mathbb{N}$. In view of Example~\ref{ex:ui}, the process $N$ explodes under~$\mathbb{Q}$. Hence $Z$ is not a uniformly integrable martingale under~$\mathbb{P}$. However, Lemma~\ref{L:ABdecomp} and~\eqref{ex:ui:eq1} in Example~\ref{ex:ui} yield \[ \sup_{\sigma\in{\mathcal T}}\mathbb{E}_\mathbb{P}\left[e^{B_\sigma^a}\right] = \sup_{\sigma\in{\mathcal T}}\mathbb{E}_\mathbb{Q}\left[e^{(1-a) \log(1+x) * (\mu^N - \widehat\nu^N)_\sigma}\1{\sigma<\tau_\infty}\right] < \infty, \] where $\widehat\nu^N=\nu^N/(1+y)$ is the compensator of $\mu^N$ under~$\mathbb{Q}$. \qed \end{example} \begin{example} Let $N=X$ be the martingale constructed in Example~\ref{ex:6.8} but now on a probability space $(\Omega,{\mathcal F},\mathbb{Q})$. Next, define the process $M$ in accordance with Theorem~\ref{T:reciprocal} as \[ M = -N + \frac{y^2}{1+y}*\mu^N = -\frac{\gamma(\rho)}{1+\gamma(\rho)}\1{\rho\le t} + \int_0^{t\wedge\rho}\gamma(s)\lambda(s)ds. \] Then $M$ is a local martingale under the F\"ollmer measure $\mathbb{P}$ associated with~${\mathcal E}(N)$. Further, $Z={\mathcal E}(M)$ cannot be a uniformly integrable martingale under~$\mathbb{P}$, since~$N$ explodes with positive probability under~$\mathbb{Q}$. Nonetheless, thanks to~\eqref{A:eq:prop6.1} we have \begin{align*} \sup_{\sigma \in \mathcal T} \mathbb{E}_\mathbb{P}\left[e^{B_\sigma^a}\1{\sigma<\tau_0}\right] &=\sup_{\sigma \in \mathcal T}\mathbb{E}_\mathbb{Q}\left[e^{(1-a)\log(1+y)*(\mu^N-\widehat\nu^N)_\sigma}\right]< \infty, \end{align*} where again $\widehat\nu^N=\nu^N/(1+y)$ is the compensator of $\mu^N$ under~$\mathbb{Q}$. 
We conclude that \eqref{eq:4.7} is not enough in general to guarantee that ${\mathcal E}(M)$ be a uniformly integrable martingale. \qed \end{example} \appendix \section{Stochastic exponentials and logarithms} \label{A:SE} In this appendix we discuss stochastic exponentials of semimartingales on stochastic intervals. \begin{definition}[Maximality]\label{D:maximal} Let $\tau$ be a foretellable time, and let $X$ be a semimartingale on $\lc0,\tau[\![$. We say that $\tau$ is {\em $X$--maximal} if the inclusion $\{\lim_{t\to\tau}X_t \text{ exists in }\mathbb{R}\}\subset\{\tau=\infty\}$ holds almost surely. \qed \end{definition} \begin{definition}[Stochastic exponential]\label{D:stochExp} Let $\tau$ be a foretellable time, and let $X$ be a semimartingale on $[\![ 0,\tau[\![$ such that $\tau$ is $X$--maximal. The \emph{stochastic exponential of $X$} is the process $\mathcal E(X)$ defined by \[ \mathcal E(X )_t = \exp\left(X_t - \frac{1}{2}[X^c,X^c]_t \right) \prod_{0<s\le t} (1+\Delta X_s)e^{-\Delta X_s} \] for all $t< \tau$, and by $\mathcal E(X)_t=0$ for all $t\ge\tau$. \qed \end{definition} If $(\tau_n)_{n\in\mathbb{N}}$ is an announcing sequence for $\tau$, then ${\mathcal E}(X)$ of Definition~\ref{D:stochExp} coincides on $\lc0,\tau_n[\![$ with the usual (Dol\'eans-Dade) stochastic exponential of $X^{\tau_n}$. In particular, the two notions coincide when $\tau=\infty$. Many properties of stochastic exponentials thus remain valid. For instance, if $\Delta X_t=-1$ for some $t\in[0,\tau)$ then ${\mathcal E}(X)$ jumps to zero at time $t$ and stays there. If $\Delta X>-1$ then ${\mathcal E}(X)$ is strictly positive on~$\lc0,\tau[\![$. Also, on $\lc0,\tau[\![$, $\mathcal E(X)$ is the unique solution to the equation \begin{equation*} Z = e^{X_0} + Z_- \cdot X \quad \text{on} \quad [\![ 0,\tau[\![; \end{equation*} see \citet{Doleans_1976}. It follows that $\mathcal E(X)$ is a local martingale on $[\![ 0,\tau[\![$ if and only if $X$ is. 
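To illustrate Definition~\ref{D:stochExp} in a classical special case, take $\tau=\infty$ and $X = \sigma B$ for a standard Brownian motion $B$ and a constant $\sigma \in \mathbb{R}$. Then the product over the jumps is empty, $[X^c,X^c]_t=\sigma^2 t$, and
\[
\mathcal E(\sigma B)_t = \exp\Big(\sigma B_t - \frac{\sigma^2}{2}\, t\Big),
\]
the familiar geometric Brownian motion, which is the unique solution of $Z = 1 + Z_- \cdot (\sigma B)$.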
We record the more succinct expression \begin{align*} \mathcal E( X ) = \boldsymbol 1_{\lc0,\tau_0[\![} \exp\left(X - \frac{1}{2}[X^c,X^c] - (x - \log(1+x)) * \mu^X\right), \end{align*} where $\tau_0=\tau\wedge\inf\{t \geq 0:\Delta X_t=-1\}$. If $X$ is a local supermartingale on $\lc0,\tau[\![$ with $\Delta X \geq -1$, Theorem~\ref{T:conv} in conjunction with the $X$--maximality of $\tau$ shows that $\lim_{t\to\tau}{\mathcal E}(X)_t=0$ almost surely on $\{\tau<\infty\}$. We now consider the stochastic logarithm of a nonnegative semimartingale $Z$ that stays at zero after reaching it. In preparation for this, recall that for a stopping time $\rho$ and a set $A\in\mathcal F$, the \emph{restriction of $\rho$ to $A$} is given by \[ \rho(A) = \rho \boldsymbol 1_A + \infty \boldsymbol 1_{A^c}. \] Here $\rho(A)$ is a stopping time if and only if $A\in\mathcal F_\rho$. Define the following stopping times associated to a nonnegative semimartingale $Z$ (recall our convention that $Z_{0-}=Z_0$): \begin{align} \nonumber \tau_0 &= \inf\{ t\ge 0: Z_t = 0\};\\ \label{eq:tauC} \tau_c &= \tau_0(A_C), \quad\quad A_C =\{Z_{\tau_0-}=0\};\\ \tau_J &= \tau_0(A_J), \quad\quad A_J = \{Z_{\tau_0-}>0\}. \nonumber \end{align} These stopping times correspond to the two ways in which $Z$ can reach zero: either continuously or by a jump. We have the following property of $\tau_c$. \begin{lemma}\label{L:tauCpred} Fix some nonnegative semimartingale $Z$. The stopping time $\tau_c$ of \eqref{eq:tauC} is foretellable. \end{lemma} \begin{proof} We must exhibit an announcing sequence for $\tau_c$, and claim that $(\sigma_n)_{n\in\mathbb N}$ is such a sequence, where \begin{align*} \sigma_n = n\wedge\sigma'_n(A_n), \qquad \sigma'_n = \inf\left\{t\ge 0 : Z_t < \frac{1}{n} \right\}\wedge n, \qquad A_n = \{Z_{\sigma_n'}>0\}. \end{align*} To prove this, we first observe that $\sigma_n = n < \infty = \tau_c$ on $A_n^c$ for all $n \in \mathbb{N}$. 
Moreover, we have $\sigma_n \leq \sigma_n'< \tau_c$ on $A_n$ for all $n \in \mathbb{N}$, where we used that $Z_{\tau_c} = 0$ on the event $\{\tau_c < \infty\}$. We need to show that $\lim_{n \to \infty} \sigma_n = \tau_c$. On the event $A_C$, see \eqref{eq:tauC}, we have $\tau_c = \tau_0 = \lim_{n \to \infty} \sigma_n' = \lim_{n \to \infty} \sigma_n$ since $A_C \subset A_n$ for all $n \in \mathbb{N}$. On the event $A_C^c = \bigcup_{n =1}^\infty A_n^c$, we have $\tau_c = \infty = \lim_{n \to \infty} n = \lim_{n \to \infty} \sigma_n$. Hence $(\sigma_n)_{n \in \mathbb{N}}$ is an announcing sequence for $\tau_c$, as claimed. \end{proof} If a nonnegative semimartingale $Z$ reaches zero continuously at $\tau_0$, the process $H = \frac{1}{Z_-}\1{Z_->0}$ explodes as $t\uparrow\tau_0$ while $H_{\tau_0}=0$, so it is not left-continuous. In fact, it is not $Z$--integrable. However, if we view $Z$ as a semimartingale on the stochastic interval $[\![ 0,\tau_c[\![$, then $H$ is $Z$--integrable in the sense of stochastic integration on stochastic intervals. Thus $H \cdot Z$ exists as a semimartingale on~$\lc0,\tau_c[\![$, which we will call the stochastic logarithm of $Z$. \begin{definition}[Stochastic logarithm]\label{D:stochLog} Let $Z$ be a nonnegative semimartingale with $Z=Z^{\tau_0}$. The semimartingale $\mathcal L(Z)$ on $\lc0,\tau_c[\![$ defined by \begin{align*} \mathcal L( Z ) = \frac{1}{Z_-}\1{Z_->0} \cdot Z \quad \text{on} \quad \lc0,\tau_c[\![ \end{align*} is called the \emph{stochastic logarithm of $Z$}.\qed \end{definition} We now clarify the relationship between stochastic exponentials and logarithms in the local martingale case. To this end, let $\mathfrak{Z}$ be the set of all nonnegative local martingales $Z$ with $Z_0 = 1$. Any such process~$Z$ automatically satisfies $Z = Z^{\tau_0}$, since a nonnegative local martingale is a supermartingale and hence remains at zero once it hits it.
Furthermore, let $\mathfrak L$ denote the set of all stochastic processes~$X$ satisfying the following conditions: \begin{enumerate} \item $X$ is a local martingale on $\lc0,\tau[\![$ for some foretellable, $X$--maximal time $\tau$. \item $X_0=0$, $\Delta X\ge -1$ on $\lc0,\tau[\![$, and $X$ is constant after the first time $\Delta X=-1$. \end{enumerate} The next theorem extends the classical correspondence between strictly positive local martingales and local martingales with jumps strictly greater than~$-1$. The reader is referred to Proposition~I.5 in \citet{Lepingle_Memin_Sur} and Appendix~A of \citet{K_balance} for related results. \begin{theorem}[Relationship of stochastic exponential and logarithm] \label{T:SE} The stochastic exponential $\mathcal{E}$ is a bijection from $\mathfrak{L}$ to $\mathfrak{Z}$, and its inverse is the stochastic logarithm $\mathcal L$. Moreover, if $Z = \mathcal{E}(X)$ for some $X \in \mathfrak{L}$, then the identity $\tau = \tau_c$ holds almost surely, where $\tau$ is the foretellable $X$--maximal time corresponding to $X$, and $\tau_c$ is given by~\eqref{eq:tauC}. \end{theorem} \begin{proof} First, ${\mathcal E}$ maps each $X\in\mathfrak L$ to some $Z\in\mathfrak Z$, and the corresponding $X$--maximal foretellable time~$\tau$ equals $\tau_c$. To see this, note that the restriction of $Z = \mathcal{E}(X)$ to $\lc0,\tau[\![$ is a nonnegative local martingale on $\lc0,\tau[\![$. By Lemma~\ref{L:SMC}, it can be extended to a local martingale~$\overline{Z}$. The implication \ref{T:conv:a} $\Longrightarrow$ \ref{T:conv:g} in Theorem~\ref{T:conv} yields $Z = \overline{Z}$, whence $Z \in \mathfrak{Z}$. We also get $\tau_c=\tau$. Next, $Z=\mathcal{E}(\mathcal{L}(Z))$ for each $Z\in\mathfrak Z$. This follows from $Z = 1 + Z_-(Z_-)^{-1} \1{Z_->0} \cdot Z = 1 + Z_- \cdot \mathcal{L}(Z)$ in conjunction with uniqueness of solutions to this equation.
Finally, ${\mathcal L}$ maps each $Z\in\mathfrak Z$ to some $X\in\mathfrak L$, and $\tau_c$ is equal to the corresponding $X$--maximal foretellable time~$\tau$. Indeed, $X = \mathcal{L}(Z)$ is a local martingale on $\lc0, \tau_c[\![$ with jumps $\Delta X=\Delta Z / Z_-\ge-1$, and is constant after the first time $\Delta X=-1$. Moreover, since $Z={\mathcal E}({\mathcal L}(Z))={\mathcal E}(X)$, the implication \ref{T:conv:g} $\Longrightarrow$ \ref{T:conv:a} in Theorem~\ref{T:conv} yields that $\tau_c$ is $X$--maximal. Thus $X\in\mathfrak L$, with $\tau_c$ being the corresponding $X$--maximal foretellable time. \end{proof} Reciprocals of stochastic exponentials appear naturally in connection with changes of probability measure. We now develop some identities relating to such reciprocals. The following function plays an important role: \begin{equation} \label{eq:phi} \phi: (-1,\infty) \to (-1,\infty), \qquad \phi(x) = -1 + \frac{1}{1+x}. \end{equation} Note that $\phi$ is an involution, that is, $\phi(\phi(x))=x$. The following notation is convenient: Given functions $F:\Omega\times\mathbb{R}_+\times\mathbb{R}\to\mathbb{R}$ and $f:\mathbb{R}\to\mathbb{R}$, we write $F\circ f$ for the function $(\omega,t,x)\mapsto F(\omega,t,f(x))$. We now identify the reciprocal of a stochastic exponential or, more precisely, the stochastic logarithm of this reciprocal. Part of the following result is contained in Lemma~3.4 of~\citet{KK}. \begin{theorem}[Reciprocal of a stochastic exponential] \label{T:reciprocal} Let $\tau$ be a foretellable time, and let $M$ be a local martingale on $[\![ 0,\tau[\![$ such that $\Delta M> -1$. Define a semimartingale $N$ on $\lc0,\tau[\![$ by \begin{align} \label{eq:defN} N = -M + [M^c, M^c] + \frac{x^2}{1+x} * \mu^M. \end{align} Then $\mathcal{E}(M) \mathcal{E}(N) = 1 \quad\text{on}\quad\lc0,\tau[\![$.
Furthermore, a predictable function $F$ is $\mu^M$--integrable ($\nu^M$--integrable) if and only if $F \circ \phi$ is $\mu^N$--integrable ($\nu^N$--integrable). In this case, we have \begin{align} \label{eq:fmu} F*\mu^M = (F \circ \phi) * \mu^N \end{align} on $\lc0,\tau[\![$. The same formula holds if $\mu^M$ and $\mu^N$ are replaced by $\nu^M$ and $\nu^N$, respectively. \end{theorem} \begin{remark} Since $|x^2/(1+x)| \leq 2x^2$ for $|x|\le1/2$, the process $x^2/(1+x) * \mu^M$ appearing in \eqref{eq:defN} is finite-valued on $\lc0,\tau[\![$.\qed \end{remark} \begin{remark}\label{R:alternative} Since $\phi$ is an involution, the identity \eqref{eq:fmu} is equivalent to \begin{align} \label{eq:gmu} (G\circ \phi) * \mu^M = G * \mu^N, \end{align} where $G$ is a $\mu^N$--integrable function. The analogous statement holds for $\nu^M$ and $\nu^N$ instead of $\mu^M$ and $\mu^N$, respectively. \qed \end{remark} \begin{proof}[Proof of Theorem~\ref{T:reciprocal}] Note that we have \[ \Delta N = -\Delta M + \frac{(\Delta M)^2}{1+\Delta M} = \phi(\Delta M) \quad \text{on}\quad \lc0, \tau[\![. \] This implies the equality $G*\mu^N=(G\circ\phi)*\mu^M$ on $\lc0,\tau[\![$ for every nonnegative predictable function~$G$. Taking $G=F\circ \phi$ and using that $\phi$ is an involution yields the integrability claim as well as~\eqref{eq:fmu}, and hence also~\eqref{eq:gmu}. The corresponding assertion for the predictable compensators follows immediately. Now, applying~\eqref{eq:gmu} to the function $G(y) = y-\log(1+y)$ yields \[ (y-\log(1+y))*\mu^N = \left(-1+\frac{1}{1+x} + \log(1+x)\right) * \mu^M \quad \text{on}\quad \lc0, \tau[\![. \] A direct calculation then gives ${\mathcal E}(M)=1/{\mathcal E}(N)$ on $\lc0,\tau[\![$. This completes the proof. \end{proof} \section{The F\"ollmer measure} \label{S:follmer} In this appendix we review the construction of a probability measure that only requires the Radon-Nikodym derivative to be a nonnegative local martingale. 
We rely on results by \citet{Pa}, \citet{F1972}, and \citet{M}, and refer to \citet{Perkowski_Ruf_2014} and \citet{CFR2011} for further details, generalizations, proofs, and related literature. A concise description of the construction is available in~\citet{Larsson_2013}. Consider a filtered probability space $(\Omega,{\mathcal F},\mathbb{F},\mathbb{P})$, where $\Omega$ is a set of possibly explosive paths taking values in a Polish space, $\mathbb{F}$ is the right-continuous modification of the canonical filtration, and ${\mathcal F} = \bigvee_{t\geq 0} {\mathcal F}_t$; see Assumption~$(\mathcal{P})$ in \citet{Perkowski_Ruf_2014} for details. Let $Z$ denote a nonnegative local martingale with $Z_0 = 1$, and define the stopping times $\tau_0=\inf\{t \geq 0:Z_t=0\}$ and $\tau_\infty=\lim_{n\to\infty}\inf\{t\geq 0:Z_t\ge n\}$. Assume that $Z$ does not jump to zero, that is, $Z_{\tau_0-}=0$ on $\{\tau_0<\infty\}$. For notational convenience we assume, without loss of generality, that we work with a version of $Z$ that satisfies $Z_t(\omega) = \infty$ for all $(t,\omega)$ with $\tau_\infty(\omega) \leq t$. \begin{theorem}[F\"ollmer's change of measure] \label{T numeraire} Under the assumptions of this appendix, there exists a probability measure $\mathbb{Q}$ on $\mathcal F$, unique on $\mathcal F_{\tau_\infty-}$, such that \[ \mathbb{E}_\mathbb{P}\left[ Z_\sigma G\right] = \mathbb{E}_\mathbb{Q}\left[ G\1{Z_\sigma<\infty}\right] \] holds for any stopping time $\sigma$ and any nonnegative ${\mathcal F}_\sigma$--measurable random variable~$G$. Moreover, $ Y = (1/Z)\boldsymbol 1_{\lc0,\tau_\infty[\![}$ is a nonnegative $\mathbb{Q}$--local martingale that does not jump to zero. Finally, $Z$ is a uniformly integrable martingale under~$\mathbb{P}$ if and only if $\mathbb{Q}(Y_{\infty-}>0)=1$, that is, if and only if $Z$ does not explode under~$\mathbb{Q}$. 
\end{theorem} \begin{proof} The statement is proven in Propositions~2.3 and 2.5 and in Theorem~3.1 in \citet{Perkowski_Ruf_2014}, after a change of time that maps $[0,\infty]$ to a compact time interval; see also Theorem~2.1 in \citet{CFR2011}. \end{proof} Now, let $M={\mathcal L}(Z)$ be the stochastic logarithm of $Z$. Thus $Z={\mathcal E}(M)$, and $M$ is a $\mathbb{P}$--local martingale on $\lc0,\tau_0[\![$ with $\Delta M>-1$. The following lemma identifies the compensator under~$\mathbb{Q}$ of the jump measure of~$N$, defined in \eqref{eq:defN}. While it can be obtained using general theory (e.g., Theorem~III.3.17 in \citet{JacodS}), we give an elementary proof for completeness. \begin{lemma} \label{L:predc} Under the assumptions of this appendix, let $\mathbb{Q}$ denote the probability measure in Theorem~\ref{T numeraire} corresponding to $Z$. Then the compensator under $\mathbb{Q}$ of $\mu^N$ is given by $\widehat{\nu}^N = \nu^N/(1+y)$. \end{lemma} \begin{proof} Let $G$ be a nonnegative predictable function and define $F=G\circ\phi$. By monotone convergence and Theorem~II.1.8 in \citet{JacodS}, the claim is a simple consequence of the equalities \[ \mathbb{E}_\mathbb{Q}\left[ G * \mu^N_\sigma\right] = \mathbb{E}_\mathbb{P}\left[\mathcal{E}(M)_\sigma \, \left(F * \mu^M_\sigma\right)\right] = \mathbb{E}_\mathbb{P}\left[\mathcal{E}(M)_\sigma \, \left((1+x)F* \nu^M_\sigma\right)\right] = \mathbb{E}_\mathbb{Q}\left[ G * \frac{\nu^N_\sigma}{1+y}\right], \] valid for any stopping time $\sigma<\tau_0\wedge\tau_\infty$ ($(\mathbb{P}+\mathbb{Q})/2$--almost surely) such that ${\mathcal E}(M)^\sigma$ is a uniformly integrable martingale under~$\mathbb{P}$. The first equality follows from Theorem~\ref{T:reciprocal} and Theorem~\ref{T numeraire}, as does the third equality. We now prove the second equality.
The integration-by-parts formula, the equality $[\mathcal{E}(M),F * \mu^M]=(x\mathcal{E}(M)_-F) * \mu^M$, and the associativity rule $\mathcal{E}(M)_- \cdot (F * \mu^M)=(\mathcal{E}(M)_-F) * \mu^M$ yield \begin{equation} \mathcal{E}(M) (F * \mu^M)= (\mathcal{E}(M)_-(1+x)F) * \mu^M + (F * \mu^M)_- \cdot \mathcal{E}(M)\label{eq:L3a} \end{equation} on $\lc0,\tau_0[\![$, and similarly \begin{align} \mathcal{E}(M) ((1+x)F * \nu^M) &= (\mathcal{E}(M)_-(1+x)F) * \nu^M+ ((1+x)F * \nu^M)_- \cdot \mathcal{E}(M) \nonumber \\ &\qquad+ [\mathcal{E}(M),(1+x) F * \nu^M]. \label{eq:L3b} \end{align} Let $(\tau_n)_{n\in\mathbb{N}}$ be a localizing sequence for the following three $\mathbb{P}$--local martingales on $\lc0,\tau_0[\![$: $(F * \mu^M)_- \cdot \mathcal{E}(M)$, $((1+x) F * \nu^M)_- \cdot \mathcal{E}(M)$, and $[\mathcal{E}(M),(1+x) F * \nu^M]$. The latter is a local martingale on $\lc0,\tau_0[\![$ by Yoeurp's lemma; see Example~9.4(1) in \citet{HeWangYan}. Taking expectations in \eqref{eq:L3a} and \eqref{eq:L3b} and recalling the defining property of the predictable compensator yields \[ \mathbb{E}_\mathbb{P}\left[\mathcal{E}(M)_\sigma (F * \mu^M)_{\sigma\wedge\tau_n}\right]=\mathbb{E}_\mathbb{P}\left[\mathcal{E}(M)_\sigma ((1+x)F * \nu^M)_{\sigma\wedge\tau_n}\right]. \] Monotone convergence gives the desired conclusion. \end{proof} \section{Extended local integrability under a change of measure} \label{App:Z} Let $Z$ be a nonnegative local martingale with explosion time $\tau_\infty=\lim_{n\to\infty}\inf\{t\geq 0:Z_t\ge n\}$, and let~$\mathbb{Q}$ be the corresponding F\"ollmer measure; see Theorem~\ref{T numeraire}. It was mentioned in Remark~\ref{R:ELI Q P} that local uniform integrability under~$\mathbb{Q}$ can be characterized in terms of~$\mathbb{P}$. We now provide this characterization. 
\begin{definition}[Extended $Z$--localization] \label{D:ELZI} With the notation of this appendix, a progressive process $U$ is called {\em extended locally $Z$--integrable} if there exists a nondecreasing sequence $(\rho_m)_{m \in \mathbb{N}}$ of stopping times such that the following two conditions hold: \begin{enumerate} \item\label{D:ELZI:i} $\lim_{m \to \infty} \mathbb{E}_\mathbb{P}[\1{\rho_m < \infty}Z_{\rho_m}] = 0$. \item\label{D:ELZI:ii} $\lim_{n \to \infty} \mathbb{E}_\mathbb{P}[\sup_{t \leq \tau_n \wedge \rho_m} |U_t|\, Z_{\tau_n \wedge \rho_m}] < \infty$ for all $m \in \mathbb{N}$ and a localizing sequence $(\tau_n)_{n \in \mathbb{N}}$ of $Z$. \end{enumerate} \end{definition} \begin{remark} \label{R:5.7} We make the following remarks concerning Definition~\ref{D:ELZI}. \begin{itemize} \item If $Z$ is not a uniformly integrable martingale under $\mathbb{P}$, then a sequence $(\rho_m)_{m\in\mathbb{N}}$ satisfying condition \ref{D:ELZI:i} of Definition~\ref{D:ELZI} can never be a localizing sequence for $Z$. To see this, note that if~$Z$ is not a uniformly integrable martingale under~$\mathbb{P}$, then \[ \mathbb{E}_\mathbb{P}[\1{\sigma < \infty}Z_\sigma] \geq \mathbb{E}_\mathbb{P}[Z_\sigma] - \mathbb{E}_\mathbb{P}[Z_\infty] = 1 - \mathbb{E}_\mathbb{P}[Z_\infty] > 0 \] for any stopping time $\sigma$ such that $Z^\sigma$ is a uniformly integrable martingale. \item As a warning, we note that $U$ being c\`adl\`ag adapted and extended locally bounded is, in general, not sufficient for $U$ being extended locally $Z$--integrable. However, clearly $U$ being bounded is sufficient. \qed \end{itemize} \end{remark} \begin{lemma} \label{L:extended locally} With the notation and assumptions of this appendix, let $U$ be a progressive process on $\lc0,\tau_\infty[\![$ with $U_0 = 0$. Then $U$ is extended locally integrable under $\mathbb{Q}$ if and only if $U$ is extended locally $Z$--integrable under $\mathbb{P}$.
\end{lemma} \begin{proof} Let $(\rho_m)_{m \in \mathbb{N}}$ be a nondecreasing sequence of stopping times. Then $\lim_{m \to \infty} \mathbb{Q}(\rho_m \geq \tau_\infty) = 1$ if and only if $\lim_{m \to \infty} \mathbb{E}_\mathbb{P}[\1{\rho_m < \infty}Z_{\rho_m}] = 0$ since \begin{align*} \mathbb{Q}(\rho_m < \tau_\infty) = \mathbb{E}_\mathbb{Q}\left[\1{\rho_m<\tau_\infty} Z_{\rho_m} \frac{1}{Z_{\rho_m}}\right] = \mathbb{E}_\mathbb{P}\left[\1{\rho_m<\infty} Z_{\rho_m} \right] \end{align*} for each $m \in \mathbb{N}$. Next, since $U$ is progressive, the left-continuous process $\sup_{s<\cdot}|U_s|$ is adapted to the $\mathbb{P}$--augmentation $\overline\mathbb{F}$ of $\mathbb{F}$; see the proof of Theorem~IV.33 in~\cite{Dellacherie/Meyer:1978}. Hence it is $\overline \mathbb{F}$--predictable, so we can find an $\mathbb{F}$--predictable process $V$ that is indistinguishable from it; see Lemma~7 in Appendix~1 of~\cite{Dellacherie/Meyer:1982}. Setting $W_t=\max(V_t,|U_t|)$ it follows that $W$ is progressive with respect to $\mathbb{F}$ and satisfies $\sup_{s\le\cdot}|U_s|= W$ almost surely. Moreover, for each $m \in \mathbb{N}$, \begin{align*} \mathbb{E}_\mathbb{Q}\left[W_{\rho_m}\right] = \lim_{n \to \infty} \mathbb{E}_\mathbb{Q}\left[W_{\tau_n \wedge \rho_m} \right] = \lim_{n \to \infty} \mathbb{E}_\mathbb{P}\left[ W_{\tau_n \wedge \rho_m} Z_{\tau_n \wedge \rho_m}\right] \end{align*} for any localizing sequence $(\tau_n)_{n \in \mathbb{N}}$ of $Z$. These observations yield the claimed characterization. \end{proof} \section{Embedding into path space} \label{app:embed} The arguments in Section~\ref{S:NK} relate the martingale property of a nonnegative local $\mathbb{P}$--martingale $Z$ to the convergence of the stochastic logarithm $N$ of $1/Z$ under a related probability measure $\mathbb{Q}$. We can only guarantee the existence of this probability measure if our filtered measurable space is of the canonical type discussed in Appendix~\ref{S:follmer}.
We now argue that this is not a restriction. In a nutshell, we can always embed $Z = \mathcal E(M)$ along with all other relevant processes into such a canonical space. To fix some notation in this appendix, let $(\Omega,{\mathcal F},\mathbb{F},\mathbb{P})$ be a filtered probability space where the filtration $\mathbb{F}$ is right-continuous, let $\tau$ be a foretellable time, and let $M$ be a local martingale on $\lc0,\tau[\![$. Furthermore, let $(H^n)_{n\in\mathbb{N}}$ be an arbitrary, countable collection of c\`adl\`ag adapted processes on $\lc0,\tau[\![$ and let $\mathbb{G}$ denote the right-continuous modification of the filtration generated by $(M, (H^n)_{n\in \mathbb{N}})$. Define $E=\mathbb{R}\times\mathbb{R}\times\cdots$ (countably many copies of $\mathbb{R}$) equipped with the product topology and note that $E$ is Polish. Let $\Delta \notin E$ denote some cemetery state. Let $\widetilde\Omega$ consist of all functions $\widetilde\omega:\mathbb{R}_+\to E \cup \{\Delta\}$ that are c\`adl\`ag on $[0,\zeta(\widetilde\omega))$, where $\zeta(\widetilde\omega)=\inf\{t\ge0:\widetilde\omega(t)=\Delta\}$, and that satisfy $\widetilde\omega(t)=\Delta$ for all $t\ge\zeta(\widetilde\omega)$. Let $\widetilde\mathbb{F}=(\widetilde{{\mathcal F}}_t)_{t\ge0}$ be the right-continuous filtration generated by the coordinate process, and set $\widetilde{\mathcal F}=\bigvee_{t\ge0}\widetilde{{\mathcal F}}_t$. 
\begin{theorem}[Embedding into canonical space]\label{T:can_embedding} Under the notation of this appendix, there exist a measurable map $\Phi: (\Omega, {\mathcal F}) \rightarrow (\widetilde{\Omega}, \widetilde {{\mathcal F}})$ and c\`adl\`ag $\widetilde \mathbb{F}$--adapted processes $\widetilde{M}$ and $(\widetilde H^n)_{n\in\mathbb{N}}$ such that the following properties hold, where $\widetilde \mathbb{P} = \mathbb{P} \circ \Phi^{-1}$ denotes the push-forward measure: \begin{enumerate} \item\label{T:can_embedding1} $\zeta$ is foretellable under $\widetilde\mathbb{P}$ and $\tau=\zeta\circ\Phi$, $\mathbb{P}$--almost surely. \item\label{T:can_embedding2} $H^n=\widetilde H^n\circ\Phi$ on $[\![ 0, \tau[\![$, $\mathbb{P}$--almost surely for each $n\in\mathbb{N}$. \item\label{T:can_embedding3} $M=\widetilde M\circ\Phi$ on $[\![ 0, \tau[\![$, $\mathbb{P}$--almost surely, and $\widetilde M$ is a local $\widetilde \mathbb{P}$--martingale on $\lc0,\zeta[\![$; we denote the compensator of its jump measure by $\nu^{\widetilde M}$. \item\label{T:can_embedding4} For any measurable function $f: \mathbb{R}\rightarrow \mathbb{R}$, $f*\nu^M=(f*\nu^{\widetilde M})\circ\Phi$ on $[\![ 0, \tau[\![$, $\mathbb{P}$--almost surely, if one side (and thus the other) is well-defined. \item\label{T:can_embedding5} For any $\widetilde \mathbb{F}$--optional process $\widetilde U$, the process $U=\widetilde U\circ\Phi$ is $\mathbb{F}$--optional. In particular, $\sigma=\widetilde\sigma\circ\Phi$ is an $\mathbb{F}$--stopping time for any $\widetilde\mathbb{F}$--stopping time~$\widetilde\sigma$. \end{enumerate} \end{theorem} Assume for the moment that Theorem~\ref{T:can_embedding} has been proved and recall from \citet{Perkowski_Ruf_2014} that $(\widetilde \Omega, \widetilde{\mathcal F}, \widetilde\mathbb{F}, \widetilde\mathbb{P})$ satisfies the assumptions of Theorem~\ref{T numeraire}.
Now, with an appropriate choice of the sequence $(H^n)_{n \in \mathbb{N}}$, Theorem~\ref{T:can_embedding} allows us, without loss of generality, to assume $(\widetilde \Omega, \widetilde{\mathcal F}, \widetilde\mathbb{F}, \widetilde\mathbb{P})$ as the underlying filtered space when proving the Novikov-Kazamaki type conditions. To illustrate the procedure, suppose $Z={\mathcal E}(M)$ is a nonnegative local martingale as in Section~\ref{S:NK}, satisfying Theorem~\ref{T:NK}\ref{T:NK:2} for some optional process $U$ that is extended locally $Z$--integrable (see Appendix~\ref{App:Z}). Without loss of generality we may assume $U$ is nondecreasing. We now apply Theorem~\ref{T:can_embedding}. By choosing the family $(H^n)_{n\in\mathbb{N}}$ appropriately, we can find an $\widetilde\mathbb{F}$--optional process $\widetilde U$ with $U=\widetilde U\circ\Phi$ almost surely, that is extended locally $\widetilde Z$--integrable, where $\widetilde Z={\mathcal E}(\widetilde M)$. Furthermore, we have \[ \sup_{\widetilde\sigma \in \widetilde{\mathcal{T}}} \mathbb{E}_{\widetilde\mathbb{P}}\left[\widetilde Z_{\widetilde\sigma} f(\epsilon \widetilde N_{\widetilde\sigma}-\widetilde U_{\widetilde\sigma})\right] \le \sup_{\sigma \in \mathcal{T}} \mathbb{E}_\mathbb{P}\left[Z_\sigma f(\epsilon N_\sigma-U_\sigma)\right] < \infty, \] where $\widetilde{\mathcal T}$ is the set of all bounded $\widetilde\mathbb{F}$--stopping times, and $\widetilde N$ is given by~\eqref{eq:N} with $M$ replaced by $\widetilde M$. By Theorem~\ref{T:NK}, the local $\widetilde\mathbb{F}$--martingale $\widetilde Z$ is a uniformly integrable martingale, and thus so is~$Z$. Transferring the reverse implication to the canonical space is done in similar fashion, using also Remark~\ref{R:Uspecs}. We begin the proof of Theorem~\ref{T:can_embedding} with a technical lemma. 
\begin{lemma}[A canonical sub-filtration] \label{L:shrink} Under the notation of this appendix, let $(f^n)_{n\in\mathbb{N}}$ be a collection of bounded measurable functions on $\mathbb{R}$, each supported in some compact subset of~$\mathbb{R}\setminus\{0\}$. Then there exists a countable family $(K^n)_{n \in \mathbb{N}}$ of $\mathbb{F}$--adapted c\`adl\`ag processes, such that the following properties hold, with $\mathbb{H}$ denoting the right-continuous modification of the filtration generated by $(M,f^n*\nu^{M}, H^n, K^n)_{n\in \mathbb{N}}$: \begin{enumerate} \item\label{L:shrink:i} $\tau$ is foretellable with respect to $\mathbb{H}$. \item\label{L:shrink:ii} $M$ is an $\mathbb{H}$--local martingale on $\lc0,\tau[\![$. \item\label{L:shrink:iii} $f^n*\nu^{M}$ is indistinguishable from the $\mathbb{H}$--predictable compensator of $f^n*\mu^{M}$ for each~$n\in\mathbb{N}$. \end{enumerate} \end{lemma} \begin{proof} Let $(\tau_m)_{m\in\mathbb{N}}$ be a localizing sequence for $M$ announcing $\tau$. Including the c\`adl\`ag $\mathbb{F}$--adapted processes $\boldsymbol 1_{[\![\tau_m,\infty[\![}$ in the family $(K^n)_{n\in\mathbb{N}}$ makes $\tau_m$ an $\mathbb{H}$--stopping time for each $m \in \mathbb{N}$, and guarantees \ref{L:shrink:i} and \ref{L:shrink:ii}. Next, fix $n\in\mathbb{N}$. Then the $\mathbb{F}$--martingale $f^n*\mu^{M}-f^n*\nu^{M}$ is clearly $\mathbb{H}$--adapted and hence an $\mathbb{H}$--martingale. Thus \ref{L:shrink:iii} follows if the $\mathbb{F}$--predictable process $X = f^n*\nu^{M}$ is indistinguishable from an $\mathbb{H}$--predictable process. Let $(\sigma_m)_{m\in\mathbb{N}}$ be a sequence of $\mathbb{F}$--predictable times with pairwise disjoint graphs covering the jump times of $X$; see Proposition~I.2.24 in \citet{JacodS}. Since $X$ has bounded jumps, we may define a martingale $J^m$ as the right-continuous modification of $(\mathbb{E}[ \Delta X_{\sigma_m}\1{\sigma_m<\infty}\mid{\mathcal F}_t])_{t \geq 0}$, for each $m \in \mathbb{N}$.
Let also $(\rho_{m,k})_{k\in\mathbb{N}}$ be an announcing sequence for $\sigma_m$. We then have \[ \lim_{k\to \infty} J^m_{\rho_{m,k}}\boldsymbol 1_{]\!]\rho_{m,k},\infty[\![} = J^m_{\sigma_m-}\boldsymbol 1_{[\![\sigma_m,\infty[\![} = \Delta X_{\sigma_m}\boldsymbol 1_{[\![\sigma_m,\infty[\![}. \] Thus, if we include the processes $J^m$ and $\boldsymbol 1_{[\![\rho_{m,k},\infty[\![}$ for $m,k \in \mathbb{N}$ in the family $(K^n)_{n \in \mathbb{N}}$, then each $\Delta X_{\sigma_m}\boldsymbol 1_{[\![\sigma_m,\infty[\![}$ becomes the almost sure limit of $\mathbb{H}$--adapted left-continuous processes and hence $\mathbb{H}$--predictable up to indistinguishability. The decomposition \[ X=X_-+\sum_{m\in\mathbb{N}}\left(\Delta X_{\sigma_m}\boldsymbol 1_{[\![\sigma_m,\infty[\![} - \Delta X_{\sigma_m}\boldsymbol 1_{]\!]\sigma_m,\infty[\![}\right), \] which holds because $\boldsymbol 1_{[\![\sigma_m,\infty[\![}-\boldsymbol 1_{]\!]\sigma_m,\infty[\![}=\boldsymbol 1_{[\![\sigma_m]\!]}$ and the graphs $[\![\sigma_m]\!]$ cover the jump times of $X$, then implies that, up to indistinguishability, $X$ is a sum of $\mathbb{H}$--predictable processes. Repeating the same construction for each $n\in\mathbb{N}$ yields \ref{L:shrink:iii}. \end{proof} \begin{proof}[Proof of Theorem~\ref{T:can_embedding}] Let $(M,f^n*\nu^{M}, H^n, K^n)_{n\in \mathbb{N}}$ and $\mathbb{H}$ be as in Lemma~\ref{L:shrink}. There exists an $\mathbb{H}$--stopping time~$T$ with $\mathbb{P}(T<\infty) = 0$ such that the paths $(M(\omega),f^n*\nu^{M}(\omega), H^n(\omega), K^n(\omega))_{n\in \mathbb{N}}$ are c\`adl\`ag on $[0,T(\omega) \wedge \tau(\omega))$ for all $\omega \in \Omega$. We now check that the measurable map $\Phi: (\Omega, {\mathcal F}) \rightarrow (\widetilde{\Omega}, \widetilde {{\mathcal F}})$ given by \[ \Phi(\omega)(t) = \left( M_t(\omega),f^n*\nu^M_t(\omega),H^n_t(\omega),K^n_t(\omega)\right)_{n \in \mathbb{N}}\1{t<T(\omega) \wedge \tau(\omega)} + \Delta \1{t\ge T(\omega) \wedge \tau(\omega)}, \] along with the obvious choice of processes $\widetilde{M}$ and $(\widetilde{H}^n)_{n \in \mathbb{N}}$, satisfies the conditions of the theorem.
The statements in \ref{T:can_embedding1} and~\ref{T:can_embedding2} are clear since $\zeta\circ\Phi=T\wedge\tau$. For \ref{T:can_embedding3}, one uses in addition that $\Phi^{-1}(\widetilde{\mathcal F}_t)\subset{\mathcal F}_t$ for all $t\ge0$ due to the ${\mathcal F}_t/\widetilde{\mathcal F}_t$--measurability of~$\Phi$. We now prove \ref{T:can_embedding4}. For each $n \in \mathbb{N}$, let $\widetilde{F}^n$ denote the coordinate process in the canonical space corresponding to $f^n *\nu^M$. Then, by Lemma~\ref{L:shrink}, for each $n \in \mathbb{N}$, $\widetilde{F}^n$ is indistinguishable from an $\widetilde {\mathbb{F}}$--predictable process and, due to the definition of $\widetilde \mathbb{P}$, the process $f^n * \mu^{\widetilde{M}} - \widetilde{F}^n$ is a $\widetilde{\mathbb{P}}$--local martingale. Here, $\mu^{\widetilde{M}}$ denotes the jump measure of the c\`adl\`ag process $\widetilde M$. Thus $\widetilde{F}^n$ is indistinguishable from $f^n*\nu^{\widetilde{M}}$, which gives $f^n * \nu^M = (f^n * \nu^{\widetilde M}) \circ \Phi$ on $[\![ 0, \tau[\![$, $\mathbb{P}$--almost surely, for each $n \in \mathbb{N}$. Choosing $(f^n)_{n \in \mathbb{N}}$ to be a measure-determining family and applying a monotone class argument thus yields \ref{T:can_embedding4}. For~\ref{T:can_embedding5}, let $\widetilde\sigma$ be an $\widetilde\mathbb{F}$--stopping time. Then $\sigma=\widetilde\sigma\circ\Phi$ is an $\mathbb{F}$--stopping time, since \[ \{\sigma\le t\}=\Phi^{-1}(\{\widetilde\sigma\le t\})\in\Phi^{-1}(\widetilde{\mathcal F}_t)\subset{\mathcal F}_t \] for all $t\ge0$. Thus if $\widetilde U=\boldsymbol 1_{\lc0,\widetilde\sigma[\![}$, then $U=\widetilde U\circ\Phi=\boldsymbol 1_{\lc0,\sigma[\![}$ is $\mathbb{F}$--optional. The result now follows by a monotone class argument. \end{proof} \setlength{\bibsep}{0.0pt} \end{document}
arXiv
Learning beyond the Brick and Mortar: A Review of Prospects and Challenges of E-learning Innovation Lamin B. Ceesay Subject: Social Sciences, Education Studies Keywords: e-learning; information technology services; e-learning adoption; e-learning diffusion; systematic review; bibliometric analysis Increased proliferation of IT services in all sectors has reinforced the adoption and diffusion across all levels of education and training institutions. However, lack of awareness of and knowledge about the key challenges and opportunities of elearning, seem to allude policymakers, resulting in low adoption or increased failure rate of many e-learning projects. Our study tries to address this problem through a review of relevant literature in e-learning. Our goal was to draw from the existing literature, insights into the opportunities and challenges of e-learning diffusion, and the current state-of-research in the field. To do this, we employed a systematic review of literature on some of the salient opportunities and challenges of e-learning innovation for educational institutions. These results aimed to inform policymakers and suggest some interesting issues to advance the research and adoption and diffusion of e-learning. Moreover, the bibliometric analysis shows that the field is experiencing high research attraction among scholars. However, several research areas in the field witnessed relatively low research paucity. Based on these findings, we discussed topics for possible future research. Is Online Learning Ready to Replace Traditional Education? A Commentary Wael Osman Subject: Social Sciences, Education Studies Keywords: online learning; e-learning; hybrid learning; innovation; education In recent years, online learning has become one of the most popular methods of educational delivery due to advances in technology, which has been made even more evident in the COVID-19 lockdown period. 
Online education has evolved into a distinct field of study within the educational system over the last few years. It is also important to note that parallel with the growth in this field, there has also been an increase in the number of scholarly journals that regularly publish research in this field, reflecting the importance of this field in the modern day. In spite of the fact that online learning offers a wide range of educational options, from short courses to full-time degrees, as well as being accessible, flexible, environmentally friendly, and affordable, there are also certain challenges associated with this educational approach. These challenges include the lack of social interaction, technical errors, a lack of hands-on training, and difficulties in assessing students. It is, therefore, imperative to ask the crucial question of whether online learning can replace traditional classroom learning or whether it can supplement it in hybrid models with it, as well as what factors and conditions are likely to determine this in the short- and long-term, as well as how it will be blended together in the future. The purpose of this commentary is to provide a brief summary of the current status of both learning models, as well as their pros and cons, in order to answer the question that was posed above. Effects of a Brief e-Learning Resource on Sexual Attitudes and Beliefs of Healthcare Professionals Working in Prostate Cancer Care: A Single Arm Pre and Post‐Test Study Eilis M. McCaughan, Carrie Flannagan, Kader Parahoo, Sharon L. Bingham, Nuala Brady, John Connaghan, Roma Maguire, Samantha Thompson, Suneil Jain, Michael Kirby, Seán R. O'Connor Subject: Medicine & Pharmacology, Other Keywords: Sexual wellbeing; prostate cancer; e-learning Sexual issues and treatment side effects are not routinely discussed with men receiving treatment for prostate cancer and support to address these concerns is not consistent across settings. 
This study evaluates a brief e-learning resource designed to improve sexual wellbeing support and examine its effects on healthcare professionals' sexual attitudes and beliefs. Healthcare professionals (n=44) completed an online questionnaire at baseline which included a modified 12-item sexual attitudes and beliefs survey (SABS). Follow-up questionnaires were completed immediately after the e-learning and at 4 weeks. Data were analysed using one-way, repeat measures ANOVAs to assess change in attitudes and beliefs over time. Significant improvements were observed at follow-up for a number of survey statements including 'knowledge and understanding', 'confidence in discussing sexual wellbeing' and the extent to which participants felt 'equipped with the language to initiate conversations'. The resource was seen as concise, relevant to practice, and as providing useful information on potential side effects of treatment. Brief, e-learning has potential to address barriers to sexual wellbeing communication and promote delivery of support for prostate cancer survivors. Practical methods and resources should be included with these interventions to support implementation of learning and long-term changes in clinical behaviour. Change the Teaching Methodologies to Improve E-Learning Quality Thanh Nga Pham Subject: Social Sciences, Education Studies Keywords: e-learning; teaching methodologies; lecturer; learner E-learning with many outstanding advantages in training has drastically changed the self-study process due to the ability to personalize and effectively meet the learning activities of learners. E-learning and building an e-learning environment are currently paying attention and being deployed in many universities in Vietnam with different scope and levels. 
Especially in the current period, when science and technology are developing, many applications of technology and technology products have been applied in the field of education, changing the way of teaching and learning activities, the practice of both lecturers and students. Big Data and Artificial Intelligence (AI) technologies have replaced people not only for manual labor but also for intellectual labor, including the teaching of teachers. Many software applications have been used to replace people in the transmission of knowledge, testing, and evaluation of training quality, especially E-learning online training programs. However, in Vietnam today, the output quality of these online training programs has not been highly appreciated compared to similar programs in the world. The cause of this situation is that the training, teaching, and learning are not effective. Therefore, in this article, I will give some analysis, evaluate the current teaching and learning methods and propose solutions to enhance the interaction and initiative in the teaching and learning process of lecturers and students to improve the quality of online training in the future. Fostering Accessible Online Education Using Galaxy as an e-learning Platform Beatriz Serrano-Solano, Melanie Föll, Cristóbal Gallardo-Alba, Anika Erxleben, Helena Rasche, Saskia Hiltemann, Matthias Fahrner, Mark J. Dunning, Marcel Schulz, Beáta Scholtz, Dave Clements, Anton Nekrutenko, Bérénice Batut, Björn Grüning Subject: Keywords: e-learning; online teaching; Galaxy; TIaaS; accessibility; scalability The COVID-19 pandemic is shifting teaching to an online setting all over the world. The Galaxy framework facilitates the online learning process and makes it accessible by providing a library of high-quality community-curated training materials, enabling easy access to data and tools, and facilitates sharing achievements and progress between students and instructors. 
By combining Galaxy with robust communication channels, effective instruction can be designed inclusively, regardless of the students' environments. Factors Affecting MOOC Usage by Students in Selected Ghanaian Universities Eli Fianu, Craig Blewett, Kwame Simpe Ofori Subject: Social Sciences, Education Studies Keywords: e-learning; technology adoption; MOOC; UTAUT; PLS-SEM There has been widespread criticism about the rates of participation of students enrolled on MOOCs (Massive Open Online Courses), more importantly, the percentage of students who actively consume course materials from beginning to the end. This study sought to investigate this trend by examining the factors that influence MOOC adoption and use by students in selected Ghanaian universities. The Unified Theory of Acceptance and Use of Technology (UTAUT) was extended to develop a research model. A survey was conducted with 270 questionnaires administered to students who had been assigned MOOCs; 204 questionnaires were retrieved for analysis. Findings of the study show that MOOC usage intention is influenced by computer self-efficacy, performance expectancy, and system quality. Results also showed that MOOC usage is influenced by facilitating conditions, instructional quality, and MOOC usage intention. Social influence and effort expectancy were found not to have a significant influence on MOOC usage intention. The authors conclude that universities must have structures and resources in place to promote the use of MOOCs by students. Computer skills training should also be part of the educational curriculum at all levels. MOOC designers must ensure good instructional quality by using the right pedagogical approaches and also ensure that the sites and learning materials are of good quality. 
A Constructivist-based Proposal for Bioinformatics Teaching Practices During Lock-down Cristóbal Gallardo-Alba, Björn Grüning, Beatriz Serrano-Solano Subject: Keywords: constructivism; e-learning; online teaching; social constructivism theory; cognitive learning theory; transformative learning theory The COVID-19 outbreaks have caused universities all across the globe to close their campuses and forced them to initiate online teaching. This article reviews the pedagogical foundations for developing effective distance education practices, starting from the assumption that promoting autonomous thinking is an essential element to guarantee full citizenship in a democracy and for moral decision making in situations of rapid change, which has become a pressing need in the context of a pandemic. In addition, the main obstacles related to this new context are identified, and solutions are proposed according to the existing bibliography in learning sciences. Escape Rooms in Medical Education: Deductive Analysis of Designs, Applications and Implementations Ahmad Neetoo, Nabil Zary, Youness Zidoun Subject: Keywords: Medical Education; Serious games; Escape room; E-learning; Edutainment; Game-based learning Background: Serious games are conceptualized as a broad topic and overlap segments of more modern forms of education: e-learning, edutainment, game-based learning, and digital game-based learning. Serious Games aligns with digitalization and the modern era and creates novel opportunities for learning and assessment in medical education. Escape rooms, a type of serious games, merge mental and physical aspects to reinforce critical skills useful in daily life. It challenges logic and reasoning and demands careful analysis of situations to correlate and solve different stages of the escape room under pressurized, timed conditions. 
Furthermore, it serves as an adequate environment to build problem-solving skills, communication skills, and leadership skills through the collaboration of people to achieve a common goal. The aim of this study was to investigate the applications of escape rooms in Medical Education. Method: This study investigated the applications of escape rooms in medical education. Serious games are expanding in education and have attained great relevance due to their intriguing and intrinsically motivating attributes. Within serious games, we focused on escape rooms in which participants are locked in a room, faced with puzzles that must be solved to 'escape the room'. Compiling the data from the first 100 hits of medical application of escape rooms, we found 72 cases and categorized them by year, specialty, participants structure, simulation experience, and design. Results: We reported on escape rooms in medical education by the year in which they were reported, the medical specialty, the participant structure, grouped or individual, the experience design; real, hybrid, or digital, and the modality of the delivery. 72% of the escape rooms focused on four main areas: nursing education (25.0%), emergency medicine (22.2%), pharmacy (12.5%), and interprofessional education (12.5%). Most of the escape rooms had a group-based physical design and little attention was given to provide a detailed description of the design considerations, such as the pathway type (linear, semi-linear, open). Conclusion: Escape rooms are applied in a wide range of medical education areas. In Medical Education, group-based on-site escape rooms with a focus on nursing, emergency medicine, pharmacy and interprofessional education dominates the implementation landscape. To further advance the field, stronger emphasis on making explicit the design considerations will advance the research and inform implementations. 
A Hybrid Adaptive Educational eLearning Project based on Ontologies Matching and Recommendation System Vasiliki Demertzi, Konstantinos Demertzis Subject: Mathematics & Computer Science, Information Technology & Data Management Keywords: Adaptive Educational System; E-Learning; Machine Learning; Semantics; Recommendation System; Ontologies Matching. The implementation of teaching interventions in learning needs has received considerable attention, as the provision of the same educational conditions to all students, is pedagogically ineffective. In contrast, more effectively considered the pedagogical strategies that adapt to the real individual skills of the students. An important innovation in this direction is the Adaptive Educational Systems (AES) that support automatic modeling study and adjust the teaching content on educational needs and students' skills. Effective utilization of these educational approaches can be enhanced with Artificial Intelligence (AI) technologies in order to the substantive content of the web acquires structure and the published information is perceived by the search engines. This study proposes a novel Adaptive Educational eLearning System (AEeLS) that has the capacity to gather and analyze data from learning repositories and to adapt these to the educational curriculum according to the student skills and experience. It is a novel hybrid machine learning system that combines a Semi-Supervised Classification method for ontology matching and a Recommendation Mechanism that uses a hybrid method from neighborhood-based collaborative and content-based filtering techniques, in order to provide a personalized educational environment for each student. 
Integration of Data Mining Clustering Approach with the Personalized E-Learning System Samina Kausar, Huahu Xu, Iftikhar Hussain, Wenhau Zhu, Misha Zahid Subject: Mathematics & Computer Science, Other Keywords: big data; clustering; data mining; educational data mining; e-learning; profile learning Educational data-mining is an evolving discipline that focuses on the improvement of self-learning and adaptive methods. It is used for finding hidden patterns or intrinsic structures of educational data. In the arena of education, the heterogeneous data is involved and continuously growing in the paradigm of big-data. To extract meaningful information adaptively from big educational data, some specific data mining techniques are needed. This paper presents a clustering approach to partition students into different groups or clusters based on their learning behavior. Furthermore, personalized e-learning system architecture is also presented which detects and responds teaching contents according to the students' learning capabilities. The primary objective includes the discovery of optimal settings, in which learners can improve their learning capabilities. Moreover, the administration can find essential hidden patterns to bring the effective reforms in the existing system. The clustering methods K-Means, K-Medoids, Density-based Spatial Clustering of Applications with Noise, Agglomerative Hierarchical Cluster Tree and Clustering by Fast Search and Finding of Density Peaks via Heat Diffusion (CFSFDP-HD) are analyzed using educational data mining. It is observed that more robust results can be achieved by the replacement of existing methods with CFSFDP-HD. The data mining techniques are equally effective to analyze the big data to make education systems vigorous. 
Commonly Used External TAM Variables in Virtual Reality, E-Learning and Agriculture Applications: A Literature Review Using QFD as Organizing Framework Ivonne Angelica Castiblanco Jimenez, Laura Cepeda Garcia, Maria Grazia Violante, Enrico Vezzetti Subject: Engineering, Automotive Engineering Keywords: TAM; e-learning; agriculture; virtual reality; QFD; technology acceptance In recent years information and communication technologies (ICT) play a significant role in all aspects of modern society and impact socioeconomic development in sectors as education, administration, business, medical care and agriculture. The benefits of such technologies in agriculture can be appreciated only if farmers use them. In order to predict and evaluate the adoption of these new technological tools, the technology acceptance model (TAM) can be a valid aid. The paper measures the potential acceptance of an e-learning tool designed for EU farmers and agricultural entrepreneurs. Starting from a literature review of the technology acceptance model, by analyzing the most commonly used external variables in the fields of e-learning, Agriculture, and Virtual reality, the analysis shows that computer self-efficacy, individual innovativeness, computer anxiety, perceived enjoyment, social norm, content and system quality, experience and facilitating conditions are the most common determinants addressing technology acceptance. Furthermore, findings evidenced that the external variables have a different impact on the two main beliefs of the TAM Model, Perceived Usefulness (PU) and Perceived Ease of Use (PEOU). This study is expected to bring theoretical support for academics when determining the variables to be included in TAM extensions. 
Effectiveness of Training Program in Enhancing E-Assessment through Tablets and Smartphones: A Case Study of Saudi Arabia Ahmed Maajoon Alenezi Subject: Arts & Humanities, Other Keywords: E raining; digital learning objects; electronic assessment; tablets; ‎smartphones This research assess the effects of training program based on the usage of the digital learning objects in teaching practice at the Northern Borders University staff. E Assessment through the tablets and smart phones and the teachers' attitudes towards such way of evaluation is the major objective of this study as the researcher expects that the assessment mechanism in the university through utilization of tablets and smart phones and its application will inevitably bring in a systematic improvement in the assessment and evaluation process of the curricula. Moreover, making use of the e learning objects in training will make a significant change in e training program of the university. Hence, the researcher has chosen voluntary random samples from the university teaching staff (men\women) from various different faculties (medicine, medical sciences, science, education and arts, business administration, home economics, and science and literature). These samples included 300 members of the teaching staff. In a group of 20 to 25 members, a personal training was conducted regarding the usage of tablets and smart phones and its applications in the assessment process. Each group participated by producing a complete e-assessment for their students in the Northern Borders University and by the e learning system i.e. Blackboard and Question Mark. The research also depends on the semi-experimental design of multiple groups and on testing the groups' pre and post achievement tests. In addition, the research identifies the level of the university teaching staff in using the tablets and the smartphones and its applications in the assessment process by the note card that the individuals have during the test. 
The Effect of the Number of E-stores Subscribers on Chinese Smartphone Brand Purchases: Evidence From a Machine Learning Model Karamoko K.E.H. N'da, Jiaoju Ge, Ren Ji Fan, Xiaobo Xu Subject: Behavioral Sciences, Other Keywords: Chinese smartphone brands; Decision trees; e-stores subscribers; consumer learning Introduction. Until now, the impact of learning variables on consumers' choices concerning Chinese product brands in the international online shopping framework remains unknown. Accordingly, this study aims to examine the effect of those learning variables on global consumers' choices of Chinese product brands. Method. A total of 44,704 transactions related to the buying process have been collected from a programming language and the Octopus Software within a Chinese International Online Shopping platform. Analysis. The 44,704 transactions have been analyzed through a Decision Tree. Results. The study points out that the number of e-retailers' subscribers reinforces the international consumers' trust online. At the same time, the pricing levels and quantity of product availability are used by global online consumers to assess the originality of Chinese product brands. Conclusions. First, this study extends the existing literature on consumer learning by going beyond the learning variables considered. Second, the study boosts consumer learning literature by elucidating the most significant learning variables guiding international online consumers' choices and purchases. The application of the results will enable brands and e-retailers to understand (1) the stages of the international online consumers' choice; (2) the buying strategies of global consumers. The Latent Digital Divides and Its Drivers in E-Learning: Among Bangladeshi Student During COVID-19 Pandemic Md Badiuzzaman, Md. 
Rafiquzzaman, Md Insiat Islam Rabby, Mohammad Mustaneer Rahman Subject: Social Sciences, Accounting Keywords: COVID-19; Digital Divide; Online Learning; Multi-level Digital Divide The devastating COVID-19 pandemic forced academia to go virtual. Educational institutions around the world have stressed online learning programs in the aftermath of the pandemic. However, because of insufficient access to ICT, a substantial number of students failed to harness the opportunity of online learning. This study explores the latent digital divide exhibited during the COVID-19 pandemic while online learning activities are emphasized among Bangladeshi students. It also investigates the digital divide exposure and the significant underlying drivers of the divide. A cross-sectional survey was employed to collect quantitative data mixed with open-ended questions to collect qualitative information from the student community. The findings revealed that despite the majority of students have physical access to ICT but only 32.5% of students could attend online classes seamlessly, 34.1% of the students reported the data prices as the critical barrier, and 39.8% of students identified the poor network infrastructure is the significant barrier for them to participate in online learning activities. Although most students possess physical access to the device and the Internet, they face the first-level digital divide due to the quality of access and maintaining subscriptions. Consequently, they fail to take advantage of physical access, resulting in the third-level digital divide (Utility Gap) and submerging them into a digital divide cycle. 
This paper aimed to explore the underlying issues of the digital divide among Bangladeshi students to assist relevant stakeholders (e.g., the Bangladesh government, Educational Institutions, Researchers) in providing the necessary insights and theoretical understanding to arrange adequate support for students to undertake conducive online learning activities. Assessment of the Development of Professional Skills in University Students: Sustainability and Serious Games Noemi Peña Miguel, Javier Corral Lage, Ana Mata Galindez Subject: Social Sciences, Education Studies Keywords: Active learning; professional skills; civic education; higher education; e-learning; serious games; critical thinking; sustainability This study assesses the development of professional skills in university students using serious games (SG), from a sustainability perspective. The Sustainable Development Goals (SDGs) were set by the United Nations' 2030 Agenda for Sustainable Development. Universities are strategic agents in the transformation process towards sustainability. This way, they should be committed to promoting such sustainable values in the students through curricular sustainability, implementing active methodologies and SG for that purpose. Transversal skills are essential for the development of future graduates. The objective of this study was to assess which professional skills should be developed through the SG called The Island, to improve the degree of student satisfaction with the incorporation of a sustainable curriculum. The data were obtained using a questionnaire, and then analysed using linear regression models, with their inference estimated through the goodness of fit and ANOVA. The first results indicated that the implementation of the SG promoted a strengthening of the students' sustainable curriculum through the development of those skills. 
It was concluded that the key to success in education for sustainable development is improving the development of strategic thinking, collaborative thinking, and self-awareness, in addition to encouraging systemic, critical, and problem-solving thinking.

Robot-Enhanced Language Learning for Children in Norwegian Day-Care Centers
Till Halbach, Trenton Schulz, Wolfgang Leister, Ivar Solheim
Subject: Mathematics & Computer Science, Other
Keywords: Social robot; mobile app; e-learning; language education; kindergarten; pre-school
We transformed the existing learning program Language Shower, which is used in some Norwegian day-care centers in the Grorud district of Oslo municipality, into a digital solution using an app for smartphone or tablet, with the option of further enhancing the presentation with a NAO robot. The solution was tested in several iterations and multiple day-care centers over several weeks. Measurements of the children's progress across learning sessions indicate a positive impact of the program using a robot as compared to the program without the robot. In-situ observations and interviews with day-care center staff confirmed the solution's many advantages, but also revealed some important areas for improvement. In particular, the speech recognition needs to be more flexible and robust, and special measures have to be in place to handle children speaking simultaneously.

Initial Experience in Developing AI Algorithms in Medical Imaging Based on Annotations Derived From an E-Learning Platform
Maurice Henkel, Hanns-Christian Breit, Patricia Wiesner, Jakob Wasserthal, Victor Parmar, Thomas Weikert, Verena Hofmann, Sebastian Eiden, Lena Schmülling, Konrad Appelt, David Winkel, Fabiano Paciolla, Christian A. Lechtenboehmer, Moritz Vogt, Laurent Binsfeld, Raphael Sexauer, Christian Wetterauer, Kirsten D. Mertz, Alexander Sauter, Bram Stieltjes
Subject: Medicine & Pharmacology, Other
Keywords: E-learning derived annotations; Pneumothorax; Artificial intelligence; Crowdsourcing; Educational data mining
Development of supervised AI algorithms requires a large amount of labeled images. Image labelling is both time-consuming and expensive. Therefore, we explored the value of e-learning derived annotations for AI algorithm development in medical imaging. Methods: We developed an e-learning platform that involves image-based single-click labelling as part of the educational learning process. Ten radiology residents, as part of their residency training, trained the recognition of pneumothorax on 1161 chest X-rays in posterior-anterior projection. Using these data, multiple AI algorithms for detecting pneumothorax were developed. Classification and localization performance of the models was tested on an independent internal testing dataset and on the public NIH ChestX-ray14 dataset. Results: On the internal and the NIH datasets, respectively, the models achieved classification F1 scores of 0.87 and 0.44, sensitivity of 0.85 and 0.80, and specificity of 0.96 and 0.48; for localization, F1 scores were 0.72 and 0.66, sensitivity 0.72 and 0.72, and the false positive rate 0.36 and 0.32. Conclusion: Our results demonstrated that e-learning derived annotations are a valuable data source for algorithm development. Further work is needed to include additional parameters such as user performance, consensus of diagnosis, and quality control in the development pipeline.
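The classification and localization scores reported in the pneumothorax abstract above (F1, sensitivity, specificity) all derive from confusion-matrix counts. A minimal sketch of how such metrics are computed; the counts in the usage line are hypothetical illustration, not the study's data:

```python
def classification_metrics(tp, fp, tn, fn):
    """Derive sensitivity, specificity, and F1 from confusion-matrix counts."""
    sensitivity = tp / (tp + fn)      # recall: share of true positives found
    specificity = tn / (tn + fp)      # share of true negatives correctly ruled out
    precision = tp / (tp + fp)
    f1 = 2 * precision * sensitivity / (precision + sensitivity)
    return sensitivity, specificity, f1

# Hypothetical counts for illustration only:
sens, spec, f1 = classification_metrics(tp=85, fp=4, tn=96, fn=15)
```

Note that F1 ignores true negatives entirely, which is one reason it can diverge sharply between test sets (here 0.87 internally versus 0.44 on NIH) even when sensitivity stays similar.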
Genome Analysis of Bacteriophage (U1G) of Schitoviridae, Host Receptor Prediction using Machine Learning Tools and its Evaluation to Mitigate Colistin Resistant Clinical Isolate of Escherichia Coli In Vitro and In Vivo
Niranjana Sri Sundaramoorthy, Vineetha KU, Jean Sophy Roy JBB, Sneha Srikanth, Santhosh Kumar S, Suma Mohan, Saisubramanian Nagarajan
Subject: Life Sciences, Microbiology
Keywords: Bacteriophage; colistin resistance; E. coli; Schitoviridae; zebrafish; Machine learning; Host receptor Prediction
The objective of the present study was to isolate phages targeting colistin resistant E. coli clinical isolates (U3790 and U1007), sequence and analyze the phage genome, use machine learning tools to predict the host cell surface receptor, and finally evaluate the efficiency of the phage in vitro and in vivo in a zebrafish model. A phage targeting colistin resistant U3790 could not be isolated, possibly due to the presence of a capsule and intact prophages in the genome of the U3790 strain. A phage specific for E. coli U1007 was isolated from the Ganges River (designated U1G). The obtained phage was triple purified and enriched. U1G phages had a large burst size of 195 PFU/cell and a short latent time of 25 min. TEM analysis showed that the U1G phage possessed a capsid of 70 nm in diameter with a shorter tail, indicating that U1G belongs to the family Podoviridae. Genome sequencing and analysis revealed that the phage genome is 73275 bp in size, with no tRNA sequence and no antibiotic resistance or virulence genes. PHASTER annotation revealed the presence of a phage RNA polymerase gene in the genome, which favors the classification of the phage under the new family Schitoviridae. A machine learning (ML) based multi-class classification model using algorithms such as Random Forest, Logistic Regression, and Decision Tree was employed to predict the host receptor targeted by the U1G phage, and the two best performing algorithms predicted LPS O antigen as the host receptor targeted by the U1G receptor binding protein (RBP), the tail spike protein. The isolated phage was stable from pH 5.0 to 9.0 and up to 45 °C. An in vitro time-kill assay showed an initial 5 log decline in CFU/ml of U1007 at 2 h in the presence of U1G, followed by regrowth. Addition of colistin with U1G restricted the growth until 6 h; however, it also resulted in regrowth by 24 h. The phage did not pose any toxicity to zebrafish, as evidenced by liver/brain enzyme profiles. An in vivo intramuscular infection study showed that U1G and Col + U1G treatment caused a 0.8 log and 1.4 log decline, respectively, underscoring its potential for use in phage cocktail therapy.

FastTest Plugin: a New Plugin to Generate Moodle Quiz XML Files
Milagros Huerta, Manuel Alejandro Fernandez-Ruiz
Subject: Engineering, General Engineering
Keywords: content creation tools; plugin; Moodle; e-learning technologies; LMS; XML file; quiz
The use of Learning Management Systems (LMS) has grown rapidly over the last decades. Great efforts have recently been made to assess the performance of students online due to the COVID-19 pandemic. Faculty members with limited experience in the use of LMS such as Moodle, Edmodo, MOOC, Blackboard and Google Classroom can have problems creating online tests. In this work, a new plugin for Moodle is presented: FastTest PlugIn. This plugin is based on a Microsoft® Excel spreadsheet and can be used to export questions to Moodle. Users of FastTest PlugIn can create question pools in an easy and intuitive way. A literature review of plugins used to import/export questions in Moodle is carried out. Then, the characteristics of FastTest PlugIn are presented.
At the end, the characteristics of the main plugins found in the literature are discussed and compared with those of FastTest PlugIn.

Medi-Test: Generating Tests from Medical Reference Texts
Íonuț Pistol, Diana Trandabăț, Mădălina Răschip
Subject: Mathematics & Computer Science, Analysis
Keywords: e-learning; automatic test generation; medical ontology; data mining for medical texts
The Medi-test system we developed was motivated by the large number of resources available for the medical domain, as well as the number of tests needed in this field (during and after medical school) for evaluation, promotion, certification, etc. Generating questions to support learning and user interactivity has been an interesting and dynamic topic in NLP since the availability of e-book curricula and e-learning platforms. Current e-learning platforms offer increased support for student evaluation, with an emphasis on exploiting automation in both test generation and evaluation. In this context, our system is able to evaluate a student's academic performance in the medical domain. Using medical reference texts as input and supported by a specially designed medical ontology, Medi-test generates different types of questionnaires for the Romanian language. The evaluation includes four types of questions (multiple-choice, fill in the blanks, true/false and match); tests can have customizable length and difficulty and can be automatically graded. A recent extension of our system also allows for the generation of tests which include images. We evaluated our system with a local testing team, but also with a set of medicine students, and user satisfaction questionnaires showed that the system can be used to enhance learning.

Site Classification and Evaluation of Eucalyptus urophylla × Eucalyptus grandis Plantation in Southern Yunnan, China
Haifei Lu, Jianmin Xu, Guangyou Li, Wangshu Liu, Yuqiang Wu, Yundong Zhang, Jun Bai, Guolei Su, Cheng Jiang
Subject: Life Sciences, Other
Keywords: E. urophylla × E. grandis; plantation; forest yield
Background and Objectives: The site types of Eucalyptus urophylla × Eucalyptus grandis clonal plantations in southern Yunnan were compared, aiming to provide a basis for site selection and scientific plantation management. Materials and Methods: In this study, 80 standard plots were set up in 6−9-year-old Eucalyptus plantations in Pu'er City and Lincang City. Furthermore, the quantitative theory I model and canonical correlation analysis were used to analyze the relationship between dominant tree growth traits and site factors, and to evaluate the growth potential of the E. urophylla × E. grandis plantations. Results: The multiple correlation coefficient between 8 site factors (altitude, slope, slope level, soil thickness, slope direction, texture, soil bulk density, and litter thickness) and the quantitative growth of the dominant wood was 0.825 (P < 0.05). According to the correlation coefficients of the quantitative regression model, slope, altitude, and soil thickness were the main factors for the classification of E. urophylla × E. grandis plantations in southern Yunnan. In addition, E. urophylla × E. grandis plantations grew best on lower and middle slope positions at relatively low altitude, where the soil layer was thick and composed of weathered red soil. By contrast, E. urophylla × E. grandis plantation growth was extremely poor on upper slope sites at higher altitude, where the soil layer was thin and composed of semi-weathered purple soil. Furthermore, total N, available B, Cu, and Zn content, and soil organic matter content had a great influence on the growth of E. urophylla × E. grandis. Conclusions: Nitrogen and phosphate fertilizer, as well as trace elements such as B, Zn, and Cu, can be properly applied in middle- and low-yield forests to promote the growth and development of E. urophylla × E. grandis plantations.
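The multiple correlation coefficient of 0.825 quoted in the eucalyptus abstract above measures how well the eight site factors jointly predict dominant-tree growth: it is the correlation between the observed response and its least-squares fit. A minimal pure-Python sketch on synthetic data (the study itself fits a quantitative theory I model to categorical site factors, which this does not reproduce):

```python
def solve(A, b):
    """Solve A x = b by Gauss-Jordan elimination with partial pivoting."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for i in range(n):
        p = max(range(i, n), key=lambda r: abs(M[r][i]))
        M[i], M[p] = M[p], M[i]
        for r in range(n):
            if r != i:
                f = M[r][i] / M[i][i]
                M[r] = [a - f * c for a, c in zip(M[r], M[i])]
    return [M[i][n] / M[i][i] for i in range(n)]

def multiple_correlation(X, y):
    """R between y and its least-squares fit on predictor rows X (intercept added)."""
    X1 = [[1.0] + list(row) for row in X]          # design matrix with intercept
    k, n = len(X1[0]), len(y)
    XtX = [[sum(X1[r][i] * X1[r][j] for r in range(n)) for j in range(k)] for i in range(k)]
    Xty = [sum(X1[r][i] * y[r] for r in range(n)) for i in range(k)]
    beta = solve(XtX, Xty)                          # normal equations
    yhat = [sum(b * x for b, x in zip(beta, row)) for row in X1]
    ybar = sum(y) / n
    ss_res = sum((yi - yh) ** 2 for yi, yh in zip(y, yhat))
    ss_tot = sum((yi - ybar) ** 2 for yi in y)
    return (1 - ss_res / ss_tot) ** 0.5
```

For data that are exactly linear in the predictors the coefficient is 1; the 0.825 reported in the abstract indicates a strong but imperfect joint fit of the site factors.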
Preprint BRIEF REPORT | doi:10.20944/preprints201806.0031.v1

Youth Access to Electronic Cigarettes in an Unrestricted Market: a Cross-Sectional Study from Poland
Lukasz Balwicki, Danielle Smith, Malgorzata Balwicka-Szczyrba, Michal Gawron PharmD, Andrzej Sobczak, Maciej L. Goniewicz
Subject: Behavioral Sciences, Other
Keywords: e-cigarette access; e-cigarette use; youth
Background: Electronic cigarette (e-cigarette) use among youth in Poland has become very popular. The aim of this study was to identify potential points of access to these products among students aged 16-17 before the implementation of sales restrictions to minors in Poland in November 2016. Methods: A school-based, cross-sectional survey was administered in 2015-2016 in 21 secondary/technical schools across two regions of Poland. Analyses focused on 341 students aged 16-17 who reported past 30-day use of e-cigarettes. Pearson chi-square analyses were utilized to examine associations between access-related items, e-cigarette use, and demographics. Results: Among youth e-cigarette users, the most common source of their first e-cigarette was a friend (38%), followed by purchasing from vape shops (26%). Similar patterns emerged when students were asked about the source of their currently used e-cigarette. Most youth reported no difficulty purchasing cartridges/e-liquid containing nicotine (90%); the majority of users (52%) reported buying such products in vape shops. Conclusions: Prior to the implementation of age-related sales restrictions, youth faced no significant barriers in accessing e-cigarettes and paraphernalia. Poland's introduction of a new age limit on e-cigarette sales may help limit the number of youth who purchase e-cigarettes from vape shops.
Selective Disintegration-Milling to Obtain Metal-Rich Particle Fractions from the E-Waste
Ervins Blumbergs, Vera Serga, Andrei Shishkin, Dmitri Goljandin, Andrej Shishko, Vjaceslavs Zemcenkovs, Karlis Markus, Janis Baronins, Vladimir Pankratov
Subject: Materials Science, Metallurgy
Keywords: e-waste; e-waste mechanical pretreatment; disintegration; e-waste milling; printed circuit boards; precious metals
Printed Circuit Boards (PCBs) containing various metals and semiconductors are abundant in any electronic device equipped with controlling and computing features. These devices inevitably constitute e-waste at the end of their service life. The typical construction of PCBs includes mechanically and chemically resistant materials, which significantly reduce the reaction rate or even prevent chemical reagents (dissolvents) from accessing the target metals. Additionally, the presence of relatively reactive polymers and compounds from PCBs requires high energy consumption and reagent supply due to the formation of undesirable and sometimes environmentally hazardous reaction products. Preliminarily milling PCBs into powder is a promising method for increasing the reaction rate and avoiding liquid and gaseous emissions. Unfortunately, current state-of-the-art milling methods also leave significantly more reactive polymers adhered to the milled target metal particles. This paper aims to find a novel single- and two-stage disintegration-milling approach that can provide the formation of metal-rich particle size fractions. The morphology, particle fraction sizes, bulk density, and metal content of the produced particles were measured and compared. Research results show the highest bulk density (up to 6.8 g·cm⁻³) and total metal content (up to 95.2 wt.%) in the finest sieved fractions after single-step milling of PCBs.
Therefore, the concentrations of about half of the tested metallic elements were higher, and the concentrations of adhered plastics lower, in the single-milled specimens as compared to the double-milled specimens.

Educating Tomorrow's Workforce for the Fourth Industrial Revolution – The Necessary Breakthrough in Mindset and Culture of the Engineering Profession
Michael Max Bühler, Konrad Nübel, Thorsten Jelinek
Subject: Engineering, Automotive Engineering
Keywords: engineering education; Fourth Industrial Revolution; 4IR; skills gap; future of work; e-learning; didactics
We are calling for a paradigm shift in engineering education. In times of the Fourth Industrial Revolution ("4IR"), a myriad of potential changes is affecting all industrial sectors, leading to increased ambiguity that makes it impossible to predict what lies ahead of us. Thus, incremental culture change in education is no longer an option. The vast majority of engineering education and training systems, having remained mostly static and underinvested in for decades, are largely inadequate for the new 4IR labor markets. Some positive developments in changing the direction of the engineering education sector can be observed. Novel approaches to engineering education already deliver distinctive, student-centered curricular experiences within an integrated and unified educational approach. We must educate engineering students for a future whose main characteristics are volatility, uncertainty, complexity and ambiguity. Talent and skills gaps across all industries are poised to grow in the years to come. The authors promote an engineering curriculum that combines timeless didactic traditions, such as Socratic inquiry, project-based learning and first-principles thinking, with novel elements (e.g. student-centered active and e-learning focused on case-study and apprenticeship pedagogical methods) as well as a refocused engineering skillset and knowledge.
These capabilities reinforce engineering students' perceptions of the world and the subsequent decisions they make. This 4IR engineering curriculum will prepare engineering students to become curious engineers and excellent communicators, better navigating increasingly complex multistakeholder ecosystems.

Effects of Manufacturing Variation in Electronic Cigarette Coil Resistance and E-liquid Characteristics on Coil Lifetime and Aerosol Generation
Qutaiba M. Saleh, Edward C. Hensel, Nathan C. Eddingsaas, Risa J. Robinson
Subject: Medicine & Pharmacology, Allergology
Keywords: aerosol generation; e-cigarette; coil resistance; e-liquid; manufacturing variation
This work investigated the effects of manufacturing variations, including coil resistance, initial pod mass, and e-liquid color, on coil lifetime and aerosol generation of Vuse ALTO pods. Random samples of pods were used until failure (when the e-liquid was consumed and the coil resistance increased to a high value, indicating a coil break). Initial coil resistance, initial pod mass, and e-liquid net mass ranged from 0.89 to 1.14 [Ω], 6.48 to 6.61 [g], and 1.88 to 2.00 [g], respectively. Coil lifetime for pods with light-color e-liquid was mean = 149, SD = 10.7 puffs, while for pods with dark-color e-liquid it was mean = 185, SD = 22.7 puffs, a difference of ~36 puffs (p < 0.001). The total mass of e-liquid consumed until coil failure was mean = 1.93, SD = 0.035 [g]. The TPM yield per puff of all test pods for the first session (brand-new pods) was mean = 0.0123, SD = 0.0003 [g]. During usage, the TPM yield per puff of pods with light-color e-liquid was relatively steady, while it continuously decreased for pods with dark e-liquid. Coil lifetime and TPM yield per puff were not correlated with either variation in initial coil resistance or variation in initial pod mass. The absence of e-liquid in the pod is an important factor in causing coil failure. Small bits of the degraded coil could potentially be introduced into the aerosol.
There is a potential correlation of e-liquid color with both coil lifetime and TPM yield per puff. The change of e-liquid color might have been a result of oxidation, which changed some nicotine into nicotyrine.

Generalized Preinvex Functions and Their Applications
Wedad Saleh, Adem Kilicman
Subject: Mathematics & Computer Science, Numerical Analysis & Optimization
Keywords: Geodesic E-convex sets; Geodesic E-convex functions; Riemannian manifolds
This study focuses on a new class of functions called sub-b-s-preinvex, a generalization of sub-b-s-convex and preinvex functions, and discusses some of their properties. A new sub-b-s-preinvex programming is introduced, and the sufficient conditions of optimality under this type of function are established.

Preprint SHORT NOTE | doi:10.20944/preprints201705.0183.v1

10a,11,11-Trimethyl-10a,11-dihydro-8H-benzo[e]imidazo[1,2-a]indol-9(10H)-one
Elena Ščerbetkaitė, Rasa Tamulienė, Aurimas Bieliauskas, Algirdas Šačkus
Subject: Chemistry, Organic Chemistry
Keywords: benzo[e]indole; benzo[e]imidazo[1,2-a]indole; fluorescence
The alkylation of 1,1,2-trimethyl-1H-benzo[e]indole with 2-chloroacetamide, followed by work-up of the reaction mixture with a base and subsequent treatment of the crude product with acetic acid, gives 10a,11,11-trimethyl-10a,11-dihydro-8H-benzo[e]imidazo[1,2-a]indol-9(10H)-one. The structure assignments were based on data from 1H, 13C and 15N NMR spectroscopy. The optical properties of the obtained compound were studied by UV-vis and fluorescence spectroscopy.

Discrimination of Beer Based on E-tongue and E-nose Combined with SVM: Comparison of Different Variable Selection Methods by PCA, GA-PLS and VIP
Hong Men, Yan Shi, Songlin Fu, Yanan Jiao, Yu Qiao, Jingjing Liu
Subject: Chemistry, Electrochemistry
Keywords: E-tongue; E-nose; data fusion; variable selection; pattern recognition; beer
Multi-sensor data fusion of E-tongue and E-nose can provide more comprehensive and more accurate analysis results.
However, it also brings some redundant information, so reducing the feature dimension is a key issue for pattern recognition. In this paper, taste-olfactory data fusion based on E-tongue and E-nose, combined with a Support Vector Machine (SVM), was used to classify five different beers. First, the taste and olfactory feature information was obtained with the E-tongue and E-nose. Second, the original feature data of the single systems were fused; then Principal Component Analysis (PCA) was applied to extract principal components, Genetic Algorithm-Partial Least Squares (GA-PLS) was used to select the characteristic variables, and 20 subsets were generated from those variables based on the best Variable Importance in Projection (VIP) scores. Finally, classification models based on SVM were established, the SVM parameters c and g were calculated by Grid Search (GS), Genetic Algorithm (GA), and Particle Swarm Optimization (PSO), and the classification results of all subsets were obtained. The results showed that the classification accuracy using data fusion was much higher than with the single E-tongue or single E-nose, and the variable selection method by VIP had the best classification performance, in subset #12 coupled with GA-SVM.

A New Intelligent Approach for Effective Recognition of Diabetes in the IoT E-HealthCare Environment
Amin Ul Haq, Jianping Li, Jalaluddin khan, Muhammad Hammad Memon, Shah Nazir, Sultan Ahmad, Ghufran Ahmad khan, Amjad Ali
Subject: Mathematics & Computer Science, Artificial Intelligence & Robotics
Keywords: diabetes disease; feature selection; E-Healthcare; decision tree; performance; machine learning; internet of things; medical data
Significant attention has been paid to the accurate detection of diabetes; it is a big challenge for the research community to develop a diagnosis system that detects diabetes successfully in the IoT e-healthcare environment.
The Internet of Things (IoT) has an emerging role in healthcare services, delivering systems that analyze medical data for the diagnosis of diseases using data mining methods. The existing diagnosis systems have some drawbacks, such as high computation time and low prediction accuracy. To handle these issues, we have proposed an IoT-based diagnosis system using machine learning methods, such as preprocessing of data, feature selection, and classification, for the detection of diabetes disease in the e-healthcare environment. Model validation and performance evaluation metrics have been used to check the validity of the proposed system. We have proposed a filter method based on the Decision Tree (Iterative Dichotomiser 3) algorithm for highly important feature selection. Two ensemble learning Decision Tree algorithms, Ada Boost and Random Forest, were also used for feature selection, and the classifier performance was compared with wrapper-based feature selection algorithms as well. The machine learning classifier Decision Tree was used for the classification of healthy and diabetic subjects. The experimental results show that the Decision Tree algorithm based on selected features improves the classification performance of the predictive model and achieves optimal accuracy. Additionally, the performance of the proposed system is high as compared to previous state-of-the-art methods. The high performance of the proposed method is due to the different combinations of selected feature sets, with GL, DPF, and BMI being the most significant features in the dataset for the prediction of diabetes disease. Furthermore, statistical analysis of the experimental results demonstrated that the proposed method effectively detects diabetes disease and can easily be deployed in an e-healthcare environment based on IoT wireless sensor technologies.
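The filter method described in the diabetes abstract above ranks features by the ID3 decision-tree criterion, i.e. information gain. A minimal sketch of gain-based ranking on a toy discretised dataset; the feature names (GL, BMI) echo the abstract, but the values are made up for illustration:

```python
import math
from collections import Counter

def entropy(labels):
    """Shannon entropy of a list of class labels."""
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

def information_gain(feature, labels):
    """ID3 criterion: entropy reduction from splitting labels by a discrete feature."""
    n = len(labels)
    weighted = 0.0
    for v in set(feature):
        subset = [l for f, l in zip(feature, labels) if f == v]
        weighted += len(subset) / n * entropy(subset)
    return entropy(labels) - weighted

def rank_features(columns, labels):
    """Rank feature columns (dict: name -> values) by decreasing information gain."""
    gains = {name: information_gain(col, labels) for name, col in columns.items()}
    return sorted(gains, key=gains.get, reverse=True)
```

A filter selection then simply keeps the top-k names returned by `rank_features`, independently of whichever classifier is trained downstream; that independence is what distinguishes filter from wrapper methods in the abstract.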
Serum Creatinine Level and Its Relation to Backache in Chronic Kidney Disease with Unknown Aetiology (CKDu) Patients in North Central Province, Sri Lanka
Fahim Aslam, Nishantha Kumarasinghe
Subject: Medicine & Pharmacology, Other
Keywords: CKDu; e-GFR; S.Cr
Background: Chronic kidney disease with unknown aetiology (CKDu) is a progressive disease affecting many Sri Lankans yearly; it is slowly progressive, irreversible, and asymptomatic. Patients suffering from CKDu over the past two decades have been evaluated using the estimated glomerular filtration rate (e-GFR). This is the standard test performed under WHO guidelines to investigate and grade CKDu, with creatinine clearance measured in order to grade the patient. Backache is a common symptom presented by most chronic kidney disease (CKD) patients, and patients with CKDu are liable to get backache due to the renal failure that takes place (Jayathilake et al., 2013; Redmon et al., 2016). Objective: To detect a correlation between serum creatinine values and the level of backache presented by the patient. Method: Using an interview-based questionnaire, patients' body condition and degree of backache were assessed; 59 patients took part in the study over a period of five months, with an average age of 60.3 years. The questionnaire answers were scaled into four main types of backache, from the least to the most painful, based on the pain scale adapted from Kafkia (Kafkia et al., 2011). Serum creatinine was measured using an automated analyzer based on Jaffe's reaction. Using Pearson R correlation, the relationship between serum creatinine and backache was determined. Results: Results were obtained for 58 patients (n = 58), with a mean backache score of 2.30 and a mean serum creatinine value of 2.77. The Pearson R value obtained was 0.01, indicating almost no relationship between backache and serum creatinine.
Patients with stage 3 kidney disease (n = 14) had an average e-GFR of 38.0, stage 4 (n = 38) of 22.3, and stage 5 (n = 6) of 14.0. Conclusion: Backache can be used as an indicator of CKDu, since it presents in most patients suffering from the disease, but it cannot be used to determine the stage of CKDu because of the external causes that lead to backache. Several predisposing factors, such as temperature, water source, and agricultural activity, can influence backache.

Digital Skills, Perceptions of Dr. Google, and Attitudes of e-Health Solutions among Polish Physicians: A Cross-Sectional Survey Study
Joanna Burzyńska, Anna Bartosiewicz, Paweł Januszewicz
Subject: Medicine & Pharmacology, Nursing & Health Studies
Keywords: online health information; digital literacy; e-Health; e-Health solutions; Dr. Google
Investment in digital e-Health services is a priority direction in the development of global health care systems. While people are increasingly using the Web for health information, it is not entirely clear what physicians' attitudes are towards digital transformation and the acceptance of new technologies in healthcare. The aim of this cross-sectional survey study was to investigate physicians' self-rated digital skills and their opinions on patients obtaining health knowledge online, as well as to recognize physicians' attitudes towards e-Health solutions. Principal Component Analysis (PCA) was performed to extract the variables from a self-designed questionnaire, and a cross-sectional analysis compared descriptive statistics and correlations for dependent variables using one-way ANOVA (F-test). 307 physicians participated in the study, most reporting that they use the internet several times a day (66.8%).
Most participants (70.4%) were familiar with new technologies and rated their e-Health literacy high, although 84.0% reported the need for additional training in this field, and 75.9% reported a need to introduce more subjects shaping digital skills into medical studies. 53.4% of physicians perceived Internet-sourced information as sometimes reliable and, in general, assessed the effects of its use by their patients negatively (41.7%). Digital skills increased significantly with the frequency of internet use (F = 13.167; p = 0.0001) and decreased with physicians' age and the need for training. Those who claimed that patients often experienced health benefits from online health showed higher digital skills (-1.06). Physicians most often recommended that their patients obtain laboratory test results online (32.2%) and arrange medical appointments via the Internet (27.0%). As physicians' digital skills deteriorated, their recommendation of e-Health solutions decreased (r = 0.413), as did their assessment of e-Health solutions for the patient (r = 0.449). Physicians perceive digitization as a sign of the times and frequently use its tools in daily practice. Their evaluation of the Dr. Google phenomenon and online health is directly related to their own e-Health literacy skills, but there is still a need for practical training to deal with the digital revolution.

A Novel Web-Based eMemo for Tertiary Institutions in Developing Nations: From Conceptualization to Implementation
Olusola Olabanjo
Subject: Mathematics & Computer Science, General & Theoretical Computer Science
Keywords: e-record; COVID-19 era; electronic memorandum; e-memo; ICT; university administration
The new need to move to an online mode of administration has necessitated the adoption of Software-as-a-Service (SaaS) and Platform-as-a-Service (PaaS).
The conventional paper-based administrative memorandum is relatively costly, requires manpower, is insecure and non-confidential, is prone to unauthorized access, and has a relatively slow cycle time, among other drawbacks. An electronic memo (e-memo) can save cost and offer high processing speed, reliability, and security. The aim of this work was to conceptualize, design, and develop a novel semantic web-based memo platform for handling memos within an organization. The tools used include HTML, CSS, PHP, JavaScript, jQuery, AJAX, and MySQL. The developed system was tested in three stages: first in the Department of Computer Science of the case-study institution, then in the Faculty of Science, and finally in the entire Lagos State University. The Agile development method was used in the development process. The designed system was hosted on a cloud server; a total of 43 test cases were considered, and an average of four minutes was recorded for a complete memo cycle. eMemo allows users to generate, sign, and treat memos electronically. It is a useful tool that will enhance information processing in a faster, more standardized, and more secure manner.

E-Petitions and Mobilisation Dynamics: The Importance of Local Anchoring. An Environmental Case Study
Martine Legris, Régis Matuszewicz
Subject: Social Sciences, Political Science
Keywords: e-democracy; e-petition; public engagement; environmental movements; digital mobilization; sustainability; participation format
E-petitioning is a useful object of study for observing the potential emergence of a new relationship to politics and new forms of political participation. Access to a dataset of hundreds of thousands of users of an electronic petitioning platform provides the opportunity to overcome a number of limitations associated with traditional methods of studying political participation, since it allows us to focus on the reality of the signatories' behaviour rather than on their declarations.
We follow the traces left by the petitioners on this site to better understand the process of dissemination of an online petition and its links with offline activities. Our examination of the three most signed petitions in the 'environment' category, combining an analysis of their petitioning dynamics with an analysis of the comments attached to them, allows us to show two things: firstly, that there is an interwoven relationship between the local anchoring of the mobilisation and the processes of dissemination by which petitions extend from local signatories to signatories who are geographically more distant; and secondly, that it is not accurate to imagine that just anyone can sign any petition, since petitioning dynamics proceed from one person to the next, whether they start from a pre-existing local anchorage on the ground or act through a platform effect dependent on the attractiveness of the petition in question.

Molecular Epidemiology of Enteroaggregative Escherichia coli (EAEC) Isolates of Hospitalized Children from Bolivia Reveal High Heterogeneity and Multidrug-Resistance
Enrique Joffre, Volga Iñiguez
Subject: Life Sciences, Biochemistry
Keywords: Enteroaggregative E. coli; infant diarrhea; genetic diversity; severity; multidrug-resistance E. coli; Bolivia
Enteroaggregative Escherichia coli (EAEC) is an emerging pathogen frequently associated with acute diarrhea in children and in travelers to endemic regions. EAEC was found to be the most prevalent bacterial diarrheal pathogen in hospitalized Bolivian children less than five years of age with acute diarrhea from 2007 to 2010. Here, we further characterized the epidemiology of EAEC infection, virulence genes, and antimicrobial susceptibility of EAEC isolated from 414 diarrheal and 74 non-diarrheal cases. EAEC isolates were collected and subjected to PCR-based screening for seven virulence genes and phenotypic resistance testing against nine different antimicrobials.
Our results showed that atypical EAEC (a-EAEC, AggR-negative) was significantly associated with diarrhea (OR 1.62, 95% CI 1.25 to 2.09, P < 0.001), in contrast to typical EAEC (t-EAEC, AggR-positive). EAEC infection was most prevalent among children between 7 and 12 months of age. The number of cases exhibited a biannual cycle with a major peak during the transition from the warm to the cold season (April to June). Both t-EAEC and a-EAEC infections were graded as equally severe; however, t-EAEC harbored more virulence genes. aap, irp2 and pic were the most prevalent genes. Surprisingly, we detected multidrug-resistant (MDR) EAEC in 60% of diarrheal and 52.6% of non-diarrheal cases. Resistance to ampicillin, sulfonamides and tetracyclines was most common, these being the antibiotics most frequently used in Bolivia. Our work is the first study to provide comprehensive information on the high heterogeneity of virulence genes in t-EAEC and a-EAEC and the large prevalence of MDR EAEC in Bolivia.

Developing A Contingency Model of Export Marketing Strategy on E-commerce for MSMEs
Kihyon Kim, Gyoo Gun Lim
Subject: Social Sciences, Business And Administrative Sciences
Keywords: E-commerce; Cross-border e-commerce; Export Marketing Strategy (EMS); E-commerce drivers; Contingency model; Micro, Small and Medium-Sized Enterprise (MSME)

For better export performance in cross-border e-commerce, a contingency model integrates e-commerce into traditional export marketing strategy (EMS) with internal and external determinants of EMS. However, the EMS of micro, small, and medium-sized enterprises (MSMEs) is understudied. This study validates and modifies the existing model for MSMEs. As a qualitative study, multiple sources of data, including interviews, internal documents and group discussions, were used regarding business cases of entrepreneurs and supporting organizations in Mongolia and Korea.
This research suggests a contingency model for MSMEs with modified factors and different strategies for each factor. Specifically, the internal determinants are managerial capability, product competitiveness, and strategic marketing orientation. The external determinants are export market competitiveness, export market infrastructure and entry barriers. The EMS for MSMEs consists of the same factors as the original model but comes with different strategies. Theoretical and managerial implications are discussed.

Digital Citizens' Feelings in National #Covid19 Campaigns in Spain
Sonia Santoveña-Casal, Javier Gil-Quintana, Laura Ramos
Subject: Social Sciences, Accounting
Keywords: Digital citizenship; Twitter; e-participation

(1) Background: Spain launched an official campaign, #EsteVirusLoParamosUnidos, to try to unite the efforts of the entire country through citizen cooperation to combat coronavirus. The research goal is to analyze the Twitter campaign's repercussion on general citizen feeling. (2) Methods: The research is based on a composite design that triangulates a theoretical model, a quantitative analysis and a qualitative analysis. (3) Results: Of the 7357 tweets in the sample, 72.32% were found to be retweets. Four content families were extracted: politics, education, messages to society and defense of occupational groups. The feelings expressed ranged along a continuum, from unity, admiration and support at one end to discontent and criticism regarding the health situation at the other. (4) Conclusions: The development of networked sociopolitical and technical measures that enable citizen participation facilitates the development of new patterns of interaction between governments and digital citizens, increasing citizens' possibilities of influencing the public agenda and therefore strengthening citizen engagement vis-à-vis such situations.
Preprint CASE REPORT | doi:10.20944/preprints201806.0465.v1

Design and Implementation of a Sustainable Development Process Between Phytoremediation and Production of Bioethanol with E. crassipes
Uriel Fernando Carreño Sayago
Subject: Engineering, Other
Keywords: E. crassipes; biomass; phytoremediation; bioethanol

Although E. crassipes is considered a problem in different aquatic ecosystems, thanks to its abundance it could become a solution for designing and building economical and efficient treatment plants, and especially for the production of biofuels such as bioethanol. The objective of this research is to design and implement a sustainable development process between phytoremediation and bioethanol production with E. crassipes, evaluating the effect of chromium adhered to the biomass of this plant on the production of bioethanol. Materials and methods: A system was installed to evaluate phytoremediation with E. crassipes in water loaded with chromium, determining the effectiveness of this plant in removing this heavy metal even while alive in a body of water. After this process, the chromium-loaded biomass was taken to bioreactors to evaluate the production of bioethanol, assessing three types of biomass: one without adhered chromium and the other two with chromium adhered to their plant structure. Chromium did affect the ethanol production of E. crassipes, but this production can still be taken into account for the assembly of an integral system of phytoremediation and bioethanol production, making the most of this biomass.
CARDIOSIM©: The First Italian Software Platform for Simulation of the Cardiovascular System and Mechanical Circulatory and Ventilatory Support
Beatrice De Lazzari, Roberto Badagliacca, Domenico Filomena, Silvia Papa, Carmine Dario Vizza, Massimo Capoccia, Claudio De Lazzari
Subject: Engineering, Biomedical & Chemical Engineering
Keywords: CARDIOSIM©; numerical simulator; lumped parameter model; e-learning; mechanical circulatory support; ventilatory; cardiovascular system; heart failure; clinician

This review is devoted to presenting the history of the CARDIOSIM© software simulator platform, which was developed in Italy to simulate the human cardiovascular and respiratory system. The first version of CARDIOSIM© was developed at the Institute of Biomedical Technologies of the National Research Council in Rome. The first platform version, published in 1991, ran on a PC under the disk operating system (MS-DOS) and was developed in the Turbo Basic language. The latest version runs on PCs with the Microsoft Windows 10 operating system and is implemented in the Visual Basic and C++ languages. The platform has a modular structure consisting of seven different general sections, which can be assembled to reproduce different pathophysiological conditions. The software simulator can reproduce the most important circulatory phenomena in terms of pressure and volume relationships. It represents the whole circulation using a lumped-parameter model and enables the simulation of different cardiovascular conditions according to Starling's law of the heart and a modified time-varying elastance model. Different mechanical ventilatory and circulatory devices have been implemented in the platform, including a thoracic artificial lung, ECMO, IABP, pulsatile and continuous right and left ventricular assist devices, biventricular pacemakers and biventricular assist devices. CARDIOSIM© is used in clinical and educational environments.
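The time-varying elastance idea that such lumped-parameter simulators build on can be illustrated with a minimal sketch: ventricular pressure is modelled as P(t) = E(t)(V(t) - V0), where the elastance E(t) rises from a diastolic to a systolic value over each beat. The half-sine activation shape, period and elastance values below are generic textbook-style assumptions, not parameters taken from CARDIOSIM©.

```python
import math

def elastance(t, T=0.8, Emax=2.0, Emin=0.06, Ts_frac=0.3):
    """Time-varying elastance E(t) in mmHg/mL over a heartbeat of period T (s).

    A half-sine activation acts over the systolic fraction Ts_frac of the
    cycle; Emax/Emin are illustrative left-ventricular values.
    """
    tau = (t % T) / T  # normalized position within the current beat
    act = math.sin(math.pi * tau / Ts_frac) if tau < Ts_frac else 0.0
    return Emin + (Emax - Emin) * act

def lv_pressure(t, volume, v0=10.0):
    """Ventricular pressure from the elastance model: P(t) = E(t) * (V(t) - V0)."""
    return elastance(t) * (volume - v0)

if __name__ == "__main__":
    # Peak systolic pressure for a fixed (isovolumic) volume of 120 mL:
    peak = max(lv_pressure(0.8 * k / 200.0, 120.0) for k in range(200))
    print(round(peak, 1))  # 220.0, i.e. Emax * (120 - 10) at peak elastance
```

Coupling several such compartments (atria, ventricles, arterial and venous beds) through resistances and compliances is what turns this single-chamber sketch into a full lumped-parameter circulation.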
On the Characteristic Properties of Geodesic Sub-$(\alpha,b,s)$-Preinvex Functions
Subject: Mathematics & Computer Science, Geometry & Topology
Keywords: E-convex sets; preinvex; E-convex functions; geodesic; sub-preinvex; Riemannian manifolds; Hadamard manifolds

In the present work we study the properties of geodesic sub-$(\alpha,b,s)$-preinvex functions on Hadamard manifolds and establish some basic properties in both the general and the differentiable case. Further, we study sufficient conditions of optimality and prove some new inequalities under geodesic sub-$(\alpha,b,s)$-preinvexity.

Urban Planning, Information Technology and Artificial Intelligence: The Theory of Evolution
Anutosh Das
Subject: Social Sciences, Geography
Keywords: Urban Planning; Artificial Intelligence; Information Technology; Smart Cities; E-Governance; E-participation and M-participation

This article is an effort to scrutinize the role of information technology development in the chronological transformation of the urban planning domain using an exploratory research approach. In this research, it is argued that the theoretical and practical understanding of urban planning should absorb and integrate the bright outcomes of the rise of information technology to foster congruent future urban development. The article addresses the trends of transformation in the urban planning domain through the myopic lens of the expanding information and communications technology era, followed by an investigation of the key drivers shaping the interaction between modern-day urban planning and information technology, taking both the dark and bright sides into account.

Impact of Conductive Yarns on Embroidery Textile Moisture Sensor
Marc Martinez-Estrada, Raul Fernandez-Garcia, Ignacio Gil
Subject: Engineering, Electrical & Electronic Engineering
Keywords: sensor; e-textile; embroidery; moisture; capacitive

In this work, two embroidered textile moisture sensors are characterized with three different conductive yarns.
The sensors are based on a capacitive interdigitated structure embroidered on a cotton substrate with an embroidered conductive yarn. The performance of three different types of conductive yarns has been compared. In order to evaluate the sensor sensitivity, the impedance of the sensor has been measured by means of an LCR meter from 20 Hz to 20 kHz in a climatic chamber with a sweep of the relative humidity from 30% to 65% at 20 ºC. The experimental results show a clear and controllable dependence of the sensor impedance on the relative humidity and on the conductive yarn used. This dependence points to the optimum conductive yarn for developing wearable applications for moisture measurement.

Conceptualising and Modelling e-Recruitment Process for Enterprises through a Problem Oriented Approach
Saleh Alamro, Huseyin Dogan, Deniz Cetinkaya, Nan Jiang, Keith Phalp
Subject: Mathematics & Computer Science, Information Technology & Data Management
Keywords: enterprise recruitment; problem definition; e-recruitment

The Internet-led labour market has become so competitive that it is forcing many organisations from different sectors to embrace e-recruitment. However, realising the value of e-recruitment from a Requirements Engineering (RE) analysis perspective is challenging. This research is motivated by the results of a failed e-recruitment project conducted in the military domain, which is used as a case study in this research. After reviewing the various challenges faced in that project across a number of related research domains, this research focuses on two major problems: (1) the difficulty of scoping, representing, and systematically transforming recruitment problem knowledge towards an e-recruitment solution specification; and (2) the difficulty of documenting e-recruitment best practices for reuse in an enterprise recruitment environment.
In this paper, a Problem-Oriented Conceptual Model (POCM) with a complementary Ontology for Recruitment Problem Definition (Onto-RPD) is proposed to contextualise the various recruitment problem viewpoints from an enterprise perspective and to elaborate those viewpoints towards a comprehensive recruitment problem definition. The POCM and Onto-RPD are developed incrementally using action research conducted on three real case studies: (1) Secureland Army Enlistment, (2) British Army Regular Enlistment, and (3) the UK Undergraduate Universities and Colleges Admissions Service (UCAS). They are later evaluated in a focus group study against a set of criteria. The study showed that the POCM and Onto-RPD provide a strong foundation for representing and understanding e-recruitment problems from different perspectives.

The Value of Providing Smokers with Free E-Cigarettes: Smoking Reduction and Cessation Associated with the Three-Month Provision to Smokers of a Refillable Tank-Style E-Cigarette
Neil McKeganey, Joanna Miler, Farhana Haseen
Subject: Behavioral Sciences, Other
Keywords: E-cigarettes; Smoking Cessation; Free Provision

Despite the uptake of tobacco smoking declining in the UK, smoking is still the leading cause of preventable poor health and premature death. While improved approaches to smoking cessation are necessary, encouraging and assisting smokers to switch to substantially less toxic non-tobacco nicotine products may be a possible option. To date few studies have investigated the rates of smoking cessation and smoking reduction associated with the free provision of electronic cigarettes (e-cigarettes) to smokers. In this study the Blu Pro e-cigarette was given to smokers for use in place of tobacco for 90 days. The rates of smoking abstinence and daily smoking patterns were assessed at baseline, 30 days, 60 days and 90 days. The response rate was 87%. After 90 days, the complete abstinence rate was 36.5%, up from 0% at baseline.
The frequency of daily smoking fell from 88.7% to 17.5% (P < 0.001), and the median consumption of cigarettes per day from 15 to 5 (P < 0.001). The median number of days per month on which participants smoked also dropped from 30 to 13 after 90 days (P < 0.001). On the basis of these results there may be value in smoking cessation services and other services ensuring that smokers are provided with e-cigarettes at zero or minimal cost for at least a short period of time.

Identification of Levels of Sustainable Consciousness of Teachers in Training Through an E-Portfolio
Pilar Colás-Bravo, Patrizia Magnoler, Jesús Conde-Jiménez
Subject: Social Sciences, Education Studies
Keywords: sustainability; consciousness; education; e-portfolios; ICT

The contents of Education for Sustainable Development (ESD) should be included in teachers' initial and advanced training programs. Creating a sustainable consciousness is one of the main foundations in determining the key competences for sustainability. However, there are not many empirical studies that deal with consciousness from an educational standpoint. In this context, the e-portfolio appears as a tool that promotes reflection and critical thinking, key competences for consciousness development. This work proposes a categorization system to extract types of consciousness and identify trainee teachers' levels of consciousness. For this research work, of an eminently qualitative nature, we selected twenty-five portfolios of students in the last year of the School of Education at the University of Macerata (Italy). The qualitative methodological procedure allowed us to deduce three bases that shape trainee teachers' consciousness: thinking, representation of reality and type of consciousness. We conclude that the attainment of a sustainable consciousness in teachers requires activating and developing higher levels of thinking, as well as a projective and macrostructural representation of reality.
Mismatch between Student and Tutor Evaluation of Training Needs: An Exploratory Study of Traumatology Rotations
Fernando Santonja-Medina, María Paz García-Sanz, Sara Santonja-Renedo, Joaquín García-Estañ
Subject: Medicine & Pharmacology, Other
Keywords: e-portfolio; clinical skills; competences; medicine

Clinical training in medical schools in Spain is performed through rotations in university hospitals. During these internships, students are expected to acquire and master basic procedural skills, yet the available assessment tools rarely check whether these skills are completely acquired. We used an e-portfolio to determine the optimal number of times students need to repeat a procedure to be able to perform it independently, and compared the results with their actual performance during the internships. The e-portfolio collected qualitative information about the internships, and quantitative information was also requested about the number of times each clinical skill was performed. Later, a survey asked these students and their teachers for the optimal number of times each skill should be repeated before it could be considered fully acquired. The questionnaire was answered by 98.6% of the students and 70.3% of their teachers. Of the 21 clinical skills and procedures selected, students and tutors agreed on a similar optimal value for 16 of them; only for five did teachers think that students needed a greater number of repetitions than the students themselves selected. When these optimal values were compared with the actual values recorded in the portfolio during the internships, about half of the clinical skills were found to be carried out less frequently than expected, thus providing important feedback about the internships. Quantitative information collected in portfolios reveals a moderate mismatch between students' and tutors' perceptions of their training needs.
An E-Sequence Approach to the 3x + 1 Problem
SanMin Wang
Subject: Mathematics & Computer Science, Algebra & Number Theory
Keywords: 3x+1 problem; E-sequence approach; $\Omega$-divergence of non-periodic E-sequences; Wendel's inequality

For any odd positive integer $x$, define $(x_n)_{n\geqslant 0}$ and $(a_n)_{n\geqslant 1}$ by setting $x_{0}=x$, $x_n =\cfrac{3x_{n-1}+1}{2^{a_n}}$ such that all $x_n$ are odd. The 3x+1 problem asserts that there is an $x_n=1$ for all $x$. Usually, $(x_n)_{n\geqslant 0}$ is called the trajectory of $x$. In this paper, we concentrate on $(a_n)_{n\geqslant 1}$ and call it the E-sequence of $x$. The idea is that we consider any infinite sequence $(a_n)_{n\geqslant 1}$ of positive integers and call it an E-sequence. We then define $(a_n)_{n\geqslant 1}$ to be $\Omega$-convergent to $x$ if it is the E-sequence of $x$, and to be $\Omega$-divergent if it is not the E-sequence of any odd positive integer. We prove the remarkable fact that the $\Omega$-divergence of all non-periodic E-sequences implies the periodicity of $(x_n)_{n\geqslant 0}$ for all $x_0$. The principal results of this paper prove the $\Omega$-divergence of several classes of non-periodic E-sequences. In particular, we prove that all non-periodic E-sequences $(a_n)_{n\geqslant 1}$ with $\mathop{\overline{\lim}}\limits_{n\to \infty} \cfrac{b_n}{n}>\log_2 3$ are $\Omega$-divergent by using Wendel's inequality and the Matthews and Watts formula $x_n =\cfrac{3^n x_0}{2^{b_n}}\prod\limits_{k=0}^{n-1} \left(1+\cfrac{1}{3x_k}\right)$, where $b_n =\sum\limits_{k=1}^n {a_k}$. These results present a possible way to prove the periodicity of the trajectories of all positive integers in the 3x + 1 problem, which we call the E-sequence approach.
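The recursion defining the trajectory and the E-sequence is easy to make concrete: each $a_n$ is the 2-adic valuation of $3x_{n-1}+1$. The short sketch below is an illustration of the definition, not the author's code.

```python
def e_sequence(x, steps=50):
    """Return the trajectory (x_n) and E-sequence (a_n) of an odd positive integer x.

    Each step computes 3*x + 1 and divides out the full power of 2, so a_n is
    the 2-adic valuation of 3*x_{n-1} + 1 and every x_n is odd.
    """
    assert x > 0 and x % 2 == 1
    xs, seq = [x], []
    for _ in range(steps):
        if x == 1:          # trajectory has reached the trivial cycle
            break
        y = 3 * x + 1
        a = 0
        while y % 2 == 0:   # strip the exact power of two
            y //= 2
            a += 1
        x = y
        xs.append(x)
        seq.append(a)
    return xs, seq

# The E-sequence of 7 begins (1, 1, 2, 3, 4) and its trajectory reaches 1:
print(e_sequence(7))  # ([7, 11, 17, 13, 5, 1], [1, 1, 2, 3, 4])
```

Note that $b_n = a_1 + \dots + a_n$ in the abstract is simply the running sum of the list returned here, which is how the growth condition $\overline{\lim}\, b_n/n > \log_2 3$ can be checked numerically on finite prefixes.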
Fair Exchange and Anonymous E-Commerce by Deploying Clone-Resistant Tokens
Ayoub Mars, Wael Adi
Subject: Mathematics & Computer Science, Information Technology & Data Management
Keywords: anonymous e-commerce; e-payment; fair exchange; anonymity; hardware tokens; secret unknown cipher; physical unclonable functions

The majority of e-commerce transactions reveal private information such as customers' identities, order contents, and payment information. Other personal information such as health conditions, religion, and even ethnicity may also be deduced. Even when deploying electronic cryptocurrencies such as Bitcoin, anonymity cannot be fully guaranteed, and many anonymous payment schemes suffer from possible double-spending circumstances. E-commerce privacy is fundamentally a difficult problem, as it involves parties with competing interests. Three major e-commerce requirements are particularly difficult to resolve: anonymous purchase, anonymous delivery, and anonymous payment. This work presents a possible e-commerce system addressing all three anonymity requirements for electronic-item business on open networks. The system offers anonymous entity-authentication mechanisms up to the completion of a fair anonymous e-commerce transaction. It is based on deploying a physically clone-resistant hardware token for each relevant involved party. The tokens are made clone-resistant by accommodating a Secret Unknown Cipher (SUC) in each hardware token as a digital PUF-like identity. A set of novel generic system setups for units, protocols and e-commerce schemes is introduced. The proposed anonymization is attained by virtually replacing the relevant e-commerce entities with low-cost, unique and clone-resistant tokens/units using SUCs. The units act as trustable, anonymous, authenticated and non-replaceable entities monitored by their acting users.
Prevalence of BIDI® Stick E-Cigarette Use Among Youth and Young Adults in the United States
Neil McKeganey, Andrea Patton, Venus Marza, Gabriel Barnard
Subject: Behavioral Sciences, Other
Keywords: Disposable E-cigarettes; Youth; Young Adult; Prevalence

Background: In the 2022 National Youth Tobacco Survey, disposable e-cigarette devices are shown to be the most widely used e-cigarette devices amongst U.S. youth. In this paper we report the results of research designed to estimate the prevalence of use of BIDI® Stick branded e-cigarettes amongst youth (aged 13 to 17) and under-age young adults (aged 18 to 20) in the U.S. Methods: A cross-sectional online survey of a nationally representative sample of 1,215 youth (13 to 17 years) recruited via the IPSOS probability-based KnowledgePanel and 3,370 young adults aged 18 to 24, amongst whom 1,125 were aged 18 to 20. Results: Amongst youth aged 13 to 17, 0.91% [95% CI: 0.44-1.68], or 190,000 [95% CI: 90,000-350,000] youth, reported having ever used a BIDI® Stick branded product, and 0.04% [95% CI: 0.00-0.38], or fewer than 10,000 youth, reported currently using a BIDI® Stick branded product. Amongst young adults aged 18 to 20, 3.90% [95% CI: 2.49-5.81], or 470,000 [95% CI: 300,000-700,000], reported having ever used a BIDI® Stick product, whilst 0.60% [95% CI: 0.17-1.55], or 70,000 [95% CI: 20,000-180,000], reported that they now use a BIDI® Stick product "every day" or "some days". Conclusions: The low prevalence of current use of the BIDI® Stick e-cigarette by youth and under-age adults suggests that this product is not responsible for the recent growth in the use of disposable e-cigarettes by youth within the U.S. as demonstrated by the 2022 National Youth Tobacco Survey.
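Intervals like the reported 0.44-1.68% can be sanity-checked with a standard confidence interval for a binomial proportion. The sketch below uses the Wilson score interval; the survey's exact method is not stated, and the implied count of roughly 11 ever-users among 1,215 youths is our back-calculation, so this is only an order-of-magnitude check.

```python
import math

def wilson_ci(successes, n, z=1.96):
    """Wilson score 95% confidence interval for a binomial proportion."""
    p = successes / n
    denom = 1 + z * z / n
    centre = (p + z * z / (2 * n)) / denom
    half = z * math.sqrt(p * (1 - p) / n + z * z / (4 * n * n)) / denom
    return centre - half, centre + half

# Roughly 0.91% of 1,215 youths corresponds to about 11 ever-users:
lo, hi = wilson_ci(11, 1215)
print(f"{100 * lo:.2f}%-{100 * hi:.2f}%")  # 0.51%-1.61%, same order as 0.44-1.68
```

The published bounds are slightly wider, consistent with an exact (Clopper-Pearson) interval or with survey weighting, but the Wilson sketch shows how the wide relative uncertainty follows directly from the small count.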
Anisotropy of the ΔE Effect in Ni-based Magnetoelectric Cantilevers: A Finite Element Method Analysis
Bernd Hähnlein, Neha Sagar, Hauke Honig, Stefan Krischok, Katja Tonisch
Subject: Materials Science, General Materials Science
Keywords: delta E effect; magnetoelectric sensor; Nickel; anisotropy

Magnetoelectric sensors based on microelectromechanical cantilevers consisting of TiN/AlN/Ni are investigated using finite element simulations with regard to the anisotropy of the ΔE effect and its impact on the sensor sensitivity. The ΔE effect is derived from the anisotropic magnetostriction and magnetization of single-crystalline nickel. The magnetic hardening of nickel in saturation is demonstrated for the (110) as well as the (111) orientation. It is further shown that magnetostrictive bending of the cantilever has a negligible impact on the eigenfrequency and thus on the sensitivity. The intrinsic ΔE effect of nickel decreases in magnitude, depending on the crystal orientation, when integrated into the magnetoelectric sensor design. The transitions between the individual magnetic domain states are found to be the dominant factor influencing the sensitivity for all crystal orientations. The peak sensitivity was determined to be 41.3 T⁻¹ for (110) in-plane oriented nickel at a magnetic bias flux of 1.78 mT. It is found that the transition from domain-wall shift to domain rotation along the hard axes yields much higher sensitivity than the transition from domain rotation to magnetization reversal. The results achieved in this work show that nickel as a hard magnetic material is able to reach almost identical sensitivities to soft magnetic materials such as FeCoSiB.
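The sensing principle behind ΔE-effect cantilevers is that a magnetically induced change in Young's modulus shifts the beam's resonance frequency; for a fixed geometry f is proportional to the square root of E, so a relative modulus change ΔE/E moves the frequency by about half as much. The sketch below uses the generic Euler-Bernoulli formula for a rectangular cantilever with illustrative nickel-like values, not the paper's TiN/AlN/Ni stack or FEM model.

```python
import math

def cantilever_f1(E, rho, L, h):
    """First bending resonance (Hz) of a rectangular Euler-Bernoulli cantilever.

    f1 = (k1^2 / (2*pi*L^2)) * sqrt(E*I / (rho*A)); for a rectangular
    cross-section of thickness h, I/A = h^2/12. k1 ~ 1.8751 for mode 1.
    """
    k1 = 1.8751
    return (k1 ** 2 / (2 * math.pi)) * math.sqrt(E * h * h / (12 * rho)) / L ** 2

# Generic nickel-like values: E = 200 GPa, rho = 8900 kg/m^3, L = 1 mm, h = 10 um.
f0 = cantilever_f1(200e9, 8900.0, 1e-3, 10e-6)
# A 1% drop in E (a Delta-E-effect-sized change) shifts f by about 0.5%:
f1 = cantilever_f1(0.99 * 200e9, 8900.0, 1e-3, 10e-6)
print(round(100 * (f0 - f1) / f0, 3))  # 0.501 (% frequency shift)
```

Tracking this frequency shift against a magnetic bias field is what converts the material-level ΔE anisotropy studied in the paper into a measurable sensor sensitivity.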
Dysregulated Metabolites Serve as Novel Biomarkers for Metabolic Diseases Caused by Vaping and Cigarette Smoking
Qixin Wang, Xianming Ji, Irfan Rahman
Subject: Life Sciences, Biochemistry
Keywords: Metabolome; TCA; Lipids; e-cigarette; cigarette; biomarkers

Metabolites are essential intermediate products in metabolism, and metabolic dysregulation indicates different types of diseases. Previous studies have shown that cigarette smoke dysregulates metabolites; however, limited information is available for electronic cigarette (e-cig) vaping. We hypothesized that e-cig vaping and cigarette smoking alter systemic metabolites, and we set out to understand the specific metabolic signatures of e-cig users versus cigarette smokers. Plasma from non-smoker controls, cigarette smokers, and e-cig users was collected, and metabolites were identified by UPLC-MS (ultra-performance liquid chromatography-mass spectrometry). Nicotine degradation was activated by e-cig vaping and cigarette smoking, with increased concentrations of cotinine, cotinine N-oxide, (S)-nicotine, and (R)-6-hydroxynicotine. Additionally, we found significantly decreased concentrations of metabolites associated with tricarboxylic acid (TCA) cycle pathways in e-cig users versus cigarette smokers, such as D-glucose, (2R,3S)-2,3-dimethylmalate, (R)-2-hydroxyglutarate, O-phosphoethanolamine, malathion, D-threo-isocitrate, malic acid, and 4-acetamidobutanoic acid. Cigarette smoking significantly up-regulated sphingolipid metabolites, such as D-sphingosine, ceramide, N-(octadecanoyl)-sphing-4-enine, N-(9Z-octadecenoyl)-sphing-4-enine, and N-[(13Z)-docosenoyl]sphingosine, versus e-cig vaping. Overall, e-cig vaping dysregulated TCA cycle-related metabolites while cigarette smoking altered sphingolipid metabolites, and both increased nicotinic metabolites.
Therefore, the specific metabolic signatures altered by e-cig vaping and cigarette smoking could serve as potential systemic biomarkers for early cardiopulmonary diseases.

The Role of E-Tourism Mediation in the Relationship between Climate Change and the Amount of Income from Tourism Industry in Iran
Seyyed Mohammad Ghaem Zabihi, Khashayar Safarzaei
Subject: Social Sciences, Economics
Keywords: E-tourism; Climate; Climate Change; Tourism Industry

In the recent century, the tourism industry, and within it the tourism economy, has been one of the most important and fundamental sectors of business. E-tourism can be used as a dynamic tool in up-to-date areas of information provision, and tourism marketing can be considered a suitable field for the tourism industry. The aim of this study was to investigate the relationship between climate change and the revenues from the tourism industry, relying on a tool called e-tourism and on informing and providing services through it, so that Iran can achieve a greater share of exports beyond a single-product oil economy, combined with economic growth and sustainable development goals. The method of this research is descriptive-analytical.

E-reputation Management of Hotel Industry
Mayang Anggani, Herlan Suherlan
Subject: Social Sciences, Business And Administrative Sciences
Keywords: e-reputation management; online management; hotel industry

The purpose of this study is to examine the e-reputation management of the hotel industry, as well as the social media channels used as tools for building hotel e-reputation. This study used a qualitative approach, analyzing in-depth interviews with hotel marketing communication practitioners from 15 hotel companies in Bandung City.
The findings indicate that e-reputation is considered a crucial factor in determining hotel performance due to the change in customers' behavior today, and they also identify three types of e-reputation management activities implemented by hotel companies: online activities, offline activities, and combined online/offline activities. The results of this study have implications for the hospitality industry as a reference for formulating marketing strategies.

Enhancing the Use of E-mail in Scientific Research and in the Academy
Mario Pagliaro
Subject:
Keywords: e-mail; scientific productivity; internet; digital era

From professors overwhelmed by anxiety-driven e-mails from students, to faculty and administrative staff wasting valued time on e-mail minutiae, misuse of electronic mail in the academy has become ubiquitous. After a brief overview of the unique features of e-mail communication, this study provides insight and guidelines to plan new educational activities on the healthy and productive utilization of e-mail in the academy of the digital era. The overall aim is to prioritize scholarly deep work by focusing on teaching and research, freeing working time wasted on unproductive use of e-mail.

Impact of Manufacturing Variability and Washing on Embroidery Textile Sensor
Marc Martinez-Estrada, Bahared Moradi, Raul Fernandez-Garcia, Ignacio Gil
Subject: Engineering, Electrical & Electronic Engineering
Keywords: sensor; e-textile; embroidery; moisture; conductive yarn

In this work, an embroidered textile moisture sensor is presented. The sensor is based on a capacitive interdigitated structure embroidered on a cotton substrate with a conductive yarn composed of 99% pure silver-plated nylon yarn (140/17 dtex). In order to evaluate the sensor sensitivity, the impedance of the sensor has been measured by means of an LCR meter from 20 Hz to 20 kHz in a climatic chamber with a sweep of the relative humidity from 25% to 65% at 20 ºC.
The experimental results show a clear and controllable dependence of the sensor impedance on the relative humidity. Moreover, the reproducibility of the sensor performance subject to manufacturing variability and washing is also evaluated. The results show that the manufacturing variability introduces a moisture measurement error of up to 4%, and that the first washing cycle reduces the sensitivity by more than 14%. Despite these effects, the textile sensor keeps its functionality and can be reused in standard conditions. These properties therefore point to the usefulness of the proposed sensor for developing wearable applications in the health and fitness scope, where the user needs a life cycle longer than one-time use.

Screening for Viral Hepatitis and Other Infectious Diseases in a High-Risk Health Care Group in Mexico
Oscar Lenin Ramírez-Pérez, Vania César Cruz-Ramon, Paulina Chinchilla-López, Héctor Baptista-González, Rocío Trueba-Gómez, Fany Rosenfeld-Mann, Elsa Roque-Álvarez, Nancy Edith Aguilar-Olivos, Guadalupe Ponciano-Rodriguez, Carlos Esteban Coronel-Castillo, Jocelyn Contreras-Carmona, Nahum Méndez-Sánchez
Subject: Medicine & Pharmacology, Gastroenterology
Keywords: hepatitis C virus; hepatitis E virus; dentists

Health care workers (HCWs), specifically dentists, are on the front line for acquiring blood-borne virus infections. The highest proportion of occupational transmission is through percutaneous injuries via hollow-bore needles. Several studies around the world have reported that hepatitis viruses and human immunodeficiency virus are the main pathogens in most cases of occupationally acquired blood-borne infection. We aim to investigate the prevalence of hepatitis B virus (HBV), hepatitis C virus (HCV), hepatitis E virus (HEV), and human immunodeficiency virus (HIV) among Mexican dentists. Methods:
We included 159 dentists who attended the annual meeting at the Medica Sur Clinic & Foundation held in Mexico City in May 2016. A survey was administered in order to obtain data on occupational exposure to blood-borne viruses (BBV). Serum samples were screened serologically using enzyme-linked immunosorbent assays. Results: Two dentists (1.2%) were positive for antibodies against HCV antigen, one (0.6%) was positive for antibodies against HBV antigen, and three (1.8%) were positive for IgG antibodies against HEV. Two cases (1.2%) were positive for antibodies against HIV. Conclusions: Infection by HEV was the most prevalent among dentists. However, the prevalence of BBV in dentists was similar to that in the general population.

Digital Skills as a Significant Factor of Human Resources Development
Jana Stofkova, Adela Poliakova, Katarina Repkova Stofkova, Peter Malega, Matej Krejnus, Vladimira Binasova, Naqibullah Daneshjo
Subject: Social Sciences, Other
Keywords: digital skills; DESI index; EGDI index; e-Government

Digital technologies play a key role in reviving the world economy, and the EU has pledged to combine recovery support with a resilient digital transformation. The COVID-19 pandemic highlighted the lack of digitization in Slovakia and the shortcomings in citizens' digital skills and in their communication with institutions. Digital skills are important and should form part of educational policy. ICT skills can help people succeed in the labour market and improve communication with public administration. Digitization and globalization increase the importance of communicating through the Internet, applications and other e-based tools. Digital skills are one of the essential parts of e-Government, enabling people to use e-Government services in communication with public administration. The current crisis has increased citizens' use of online services.
Indices concerning the digital economy, such as the Digital Economy and Society Index (DESI) and the E-Government Digital Index (EGDI), are analysed from 2018 to 2021; they revealed a stagnant state in 2018 and 2019, and in 2020 there was a decrease in basic digital skills. The EGDI in particular focuses on human capital and digital skills. The paper analyses and identifies the digital skills of citizens in the context of e-Government development and describes the use of e-Government services by EU citizens, with a focus on the Slovak Republic. The data were collected through a questionnaire survey of citizens in the Slovak Republic covering their digital skills in selected categories, their use of e-Government services, and their awareness of e-Government services. Solutions that improve e-Government in the Slovak Republic are gradually being implemented. Improving digital skills according to the National Coalition for Digital Skills and Professions is one of the priorities of the Ministry of Education, Science, Research and Sport of the Slovak Republic, which has adopted an action plan for 2019–2022 to improve the country's results in the DESI index by 2025 and to focus on the digital skills required by employers. The survey revealed that in Slovakia the majority of schools offer only weak support for digital education (compared with the EU-27 averages of 68% and 45%, respectively). The research also revealed a decreased level of digital literacy among young people. These competencies are very important for gaining a position in the labour market in the digital society. The projects aim to support the development of digital skills of primary and secondary school students and the integration of new technologies into teaching.
Alterations of Mitochondrial Network by Cigarette and e-Cigarette Vaping
Manasa Kanithi, Sunil Junapudi, Syed Islamuddin Shah, Matta Reddy Alavala, Ghanim Ullah, Bojjibabu Chidipi
Subject: Life Sciences, Molecular Biology
Keywords: Cigarette smoking; e-cigarette smoking; mitochondria; fusion; fission

Toxins present in cigarette and e-cigarette smoke constitute a significant cause of illnesses and are known to have fatal health impacts. The specific mechanisms by which toxins present in smoke impair cell repair are still being researched and are of prime interest for developing more effective treatments. Current literature suggests that toxins present in cigarette smoke and aerosolized e-vapor trigger abnormal intercellular responses, damage mitochondrial function, and consequently disrupt the homeostasis of the organelle's biochemical processes by increasing reactive oxygen species. Increased oxidative stress sets off a cascade of molecular events, disrupting optimal mitochondrial morphology and homeostasis. Furthermore, smoking-induced oxidative stress may also combine with other health factors to contribute to various pathophysiological processes. An increasing number of studies show that toxins may affect mitochondria even through exposure to secondhand or thirdhand smoke. This review assesses the impact of toxins present in tobacco smoke and e-vapor on mitochondrial health, networking, and critical structural processes including mitochondrial fission, fusion, hyperfusion, fragmentation, and mitophagy. The efforts are focused on discussing current evidence linking toxins present in first-, second-, and thirdhand smoke to mitochondrial dysfunction.

Review on the Integration of Microelectronics for E-Textile
Abdella Ahmmed Simegnaw, Benny Malengier, Gideon K.
Rotich, Melkie Getnet Tadesse, Lieva Van Langenhove
Subject: Engineering, Automotive Engineering
Keywords: Microelectronics; E-textile; Smart textile; Interconnection; textile-adapted

Modern electronic textiles are moving towards flexible wearable textiles, so-called e-textiles, that have microelectronic elements embedded onto the textile fabric and can be used for varied classes of functionalities. There are different methods of integrating rigid microelectronic components into/onto textiles for the development of smart textiles, which include, but are not limited to, physical, mechanical and chemical approaches. The integration systems must be flexible, lightweight, stretchable and washable to offer superior usability, comfort and non-intrusiveness. Furthermore, the resulting wearable garment needs to be breathable. In this review work, three levels of integration of the microelectronics into/onto the textile structures are discussed: the textile-adapted, the textile-integrated, and the textile-based integration. The textile-integrated and the textile-adapted e-textiles have failed to efficiently meet the requirements of being flexible and washable. To overcome these problems, researchers have studied the integration of microelectronics into/onto textiles at the fiber or yarn level, applying various mechanisms. Hence, a new method of integration, the textile-based one, has risen to the challenge due to the flexibility and washability advantages of the resulting product. In general, the aim of this review is to provide a complete overview of the different interconnection methods of electronic components into/onto a textile substrate.

Occurrence of Shiga Toxin-Producing Escherichia coli Carrying Antimicrobial Resistance Genes in Sheep on Smallholdings in Bangladesh
Mukta Das Gupta, Arup Sen, Mishuk Shaha, Avijit Dutta, Ashutosh Das
Subject: Life Sciences, Biochemistry
Keywords: Sheep; E.
coli; shiga-toxin; antimicrobial resistance genes

Inappropriate antimicrobial treatment can pose a risk of developing resistance against antimicrobial drugs in bacteria. Zoonotic pathogens such as shiga toxin-producing Escherichia coli (E. coli) (STEC) might have a higher chance of being transmitted to humans from sheep through close human contact if the sheep population is a potential reservoir. Therefore, this study aimed to examine the sheep population in rural Bangladesh for antimicrobial-resistant STEC. We screened 200 faecal samples collected from sheep in three Upazilas of the Chattogram district. Phenotypically positive E. coli isolates were examined for two shiga toxin-producing genes, stx1 and stx2. PCR-positive STEC isolates were investigated for the presence of the antimicrobial resistance genes blaTEM, sul1 and sul2. In total, 123 of the 200 tested samples were confirmed positive for E. coli by culture-based methods. PCR results show that 17 (13.8%) E. coli isolates harboured at least one virulence gene (stx1 and/or stx2) of STEC. Six of the tested STEC isolates exhibited the blaTEM gene; eight STEC isolates had the sul1 gene, and the sul2 gene was detected in ten STEC isolates. To our knowledge, this study is the first to reveal a significant proportion of STEC isolated from sheep in rural Bangladesh harbouring antimicrobial resistance genes.

Bibliometric Knowledge Mapping of E-Commerce Platform Operation on Data Mining
Min Ye, Hongxia Li
Subject: Mathematics & Computer Science, Algebra & Number Theory
Keywords: e-commerce; big data; bibliometric analysis; knowledge mapping

The e-commerce platform in the digital economy era has evolved into a data platform ecosystem built around data resources and data mining technology systems. The most typical applications of big data are also concentrated in the field of e-commerce.
E-commerce companies should first grasp the interactive relationship among the three major factors of data, technology and innovation. E-commerce platform operation is a multidisciplinary research field, and it is not easy for researchers to obtain a panoramic view of the knowledge structure in this field. A knowledge graph is a kind of graph that shows the development process and structural relationships of knowledge, taking a field of knowledge as its object. It is not only a visual knowledge mapping but also a serialized knowledge pedigree, which provides researchers with a quantitative method for studying development trends and academic status. The purpose of this research is to help researchers understand the key knowledge, evolutionary trends and research frontiers of current research. This study uses CiteSpace bibliometric analysis to analyze data from the Science Net database and finds that: 1) the development of the research field has gone through three stages, and some representative key scholars and key documents have been recognized; 2) the co-occurrence of citations and keywords in the literature's knowledge mapping shows research hotspots; 3) the results of burst detection and central-node analysis reveal research frontiers and development trends. Today, the visualization of big data brings different challenges. The abstraction between the world and today's data visualization occurs when the data is captured, and every user sees his own visualization data generated by standardized calculations. At the same time, there are still many controversies over the theoretical model, structure and structural dimensions. This is the direction that future researchers need to study further.
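The keyword co-occurrence mapping mentioned above can be illustrated with a minimal, self-contained sketch (the records and keywords below are hypothetical, not data from the study, and a tool like CiteSpace computes far more than this simple pair count):

```python
from itertools import combinations
from collections import Counter

def cooccurrence(keyword_lists):
    """Count how often each pair of keywords appears together in one record."""
    pairs = Counter()
    for kws in keyword_lists:
        # sorted() gives a canonical order so (a, b) and (b, a) collapse
        for a, b in combinations(sorted(set(kws)), 2):
            pairs[(a, b)] += 1
    return pairs

# hypothetical bibliographic records (keyword lists only)
records = [
    ["e-commerce", "big data", "data mining"],
    ["e-commerce", "big data"],
    ["data mining", "knowledge mapping"],
]
print(cooccurrence(records).most_common(1))
# → [(('big data', 'e-commerce'), 2)]
```

High-count pairs correspond to the "research hotspots" a co-occurrence map highlights.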
Parametric Cost Estimation Model for Li-ion Battery Pack of E-motorcycle Conversion based on Activity Based Costing
Sofi Desi Susanti, Yuniaristanto Yuniaristanto, Wahyudi Sutopo, Rina Wiji Astuti
Subject: Engineering, Industrial & Manufacturing Engineering
Keywords: activity-based costing; battery pack; e-motorcycle conversion

Universitas Sebelas Maret (UNS), through the SMART UNS Company, has conducted research and development of e-motorcycle conversion using a Li-ion battery pack as a substitute for the internal combustion engine (ICE) energy source of a conventional motorcycle. Currently, the battery pack used for e-motorcycle conversion is in the development phase towards commercialization. Estimating production costs is challenging because the production process is complicated and harbours hidden expenses. These hidden costs are often missing or variable factors that make the estimate too low or too high. This study presents a parametric cost estimation model integrated with activity-based cost assignments to estimate production costs through cost calculations for each activity. Activity-based costing breaks the production process into a specific cost element for each step. Each activity's cost is entered into the parametric cost estimation model to aggregate the cost of each activity into the total cost of production. The cost estimation results are analyzed using a regression method to determine which variables most affect the production cost of Li-ion battery packs for the conversion of e-motorcycles in the SMART UNS company.
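As a toy illustration of the activity-based roll-up described above: each production step carries a cost-driver rate and a quantity, and the total is the sum over steps. The activity names and figures below are invented for illustration; they are not the study's data.

```python
# Minimal activity-based costing sketch: total cost is the sum of
# (driver rate x driver quantity) over all production activities.

def total_cost(activities):
    """Sum rate * quantity over (name, rate, quantity) tuples."""
    return sum(rate * qty for _, rate, qty in activities)

# hypothetical activities for one battery pack (illustrative numbers)
activities = [
    ("cell sorting",  0.5, 120),  # rate per cell, cells per pack
    ("spot welding",  0.2, 240),  # rate per weld, welds per pack
    ("BMS assembly", 15.0,   1),
    ("testing",       8.0,   1),
]
print(total_cost(activities))  # 131.0
```

A parametric model would then regress such per-activity totals against design variables (cell count, capacity, etc.) to find the dominant cost drivers.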
Numerical Analysis for Aerodynamic Behaviour of Hyperloop Pods
Yadawendr Kumar Singh, Kamyar Mehran
Subject: Engineering, Electrical & Electronic Engineering
Keywords: Hyperloop; CFD; K-e model; Aerodynamics; Energy efficiency

Based on the standard two-equation K-ε wall turbulence model and the Navier-Stokes (N-S) equations for incompressible fluids, the fluid flow behaviour around hyperloop pods in an evacuated tube was simulated using the ANSYS Fluent solver, assuming steady-state and two-dimensional conditions. In this research, four kinds of hyperloop pods were developed with the aid of SolidWorks, using combinations of different head and tail shape profiles to build the case studies. These four pods were investigated for their aerodynamic behaviour as four different case scenarios. The simulation results show that an atmospheric pressure of 100 Pa with a blockage ratio of 0.36 in the tube provides the best possible aerodynamic behaviour for the designed hyperloop pod models. This research finds that the overall aerodynamic behaviour of hyperloop pods can be varied by changing the head and tail shape profiles of the pods, and that a particular combination of head and tail shape profiles can provide optimal aerodynamic capabilities. Thus, this research paper provides a novel method of obtaining the best aerodynamic capabilities in hyperloop pods by designing the head profile optimally in combination with the tail profile. This outcome will contribute to the development of future hyperloop pods with better aerodynamic behaviour, resulting in less electrical energy being required to propel the pods in the evacuated tube.
Draft Genome of a Bovine Enterovirus recovered from Sewage in Nigeria
Temitope Faleye, Moses Olubusuyi Adewumi, Olayinka Oluseyi Adebowale, Emmanuel Donbraye, Bolaji Oluremi, Uwem George, Oluwadamilola A Arowolo, Ewean Chukwuma Omoruyi, Maryjoy Ijeoma Ifeorah, Adefunke Oyewumi Oyero, Johnson Adekunle Adeniji
Subject: Life Sciences, Virology
Keywords: Bovine enterovirus, EV-E, Nigeria, Sewage, Complete Genome

We describe the draft genome of a bovine enterovirus (EV) recovered from sewage in Nigeria. The virus replicates on both RD and L20B cell lines but is negative for all EV screens in use by the GPEI. It contains 7,368 nt, with 50.2% G+C content and an ORF of 6,525 nt (2,174 aa).

Modelling the Effect of Weed Competition on Long-Term Volume Yield of Eucalyptus globulus Plantations Across an Environmental Gradient
Felipe Vargas, Carlos A. Gonzalez-Benecke, Rafael Rubilar, Manuel Sanchez-Olate
Subject: Biology, Forestry
Keywords: weed control; competing vegetation; yield modelling; E. globulus

Several studies have quantified the responses of Eucalyptus globulus plantations to weed control in their early development (2-3 years after establishment). However, long-term results of competing vegetation effects have rarely been incorporated into growth and yield models that forecast the long-term effects of reducing the intensity of competing vegetation control, and its interaction with site resource availability, on stem volume production close to rotation age. We compared several models predicting stand stem volume yield of Eucalyptus globulus plantations established across a water and fertility gradient and growing under different intensity levels of vegetation-free area maintained during the first 3 years of stand development. Four sites were selected encompassing a gradient in rainfall and amount of competing vegetation. Treatments were applied at stand establishment and were monitored periodically until age 9 years.
Competing vegetation control intensity levels considered 0, 5, 20, 44 and 100% weed-free cover around individual E. globulus seedlings. Maximum competing vegetation biomass production during the first growing season was 2.9, 6.5, 2.2 and 12.9 Mg ha-1 for sites ranging from low to high annual rainfall. As expected, reductions in volume yield at age 9 years were observed as competing vegetation control intensity decreased during the first growing season. A strong relationship was established between stem volume yield loss and the intensity of competing vegetation control, the amount of competing vegetation biomass produced during the first growing season, and mean annual rainfall. The slope of the relationship differed among sites and was related mainly to water and light limitations. Our results suggest that the biomass of competing vegetation (intensity of competition) affecting site resource availability contributes to the observed long-term effects on E. globulus plantation productivity. The site with the lowest mean annual rainfall showed the highest volume yield loss at age 9 years. Sites with the highest rainfall showed contrasting results related to the amount of competing vegetation biomass.

Microfluidic Cultivation and Laser Tweezers Raman Spectroscopy of E. coli under Antibiotic Stress
Zdeněk Pilát, Silvie Bernatová, Jan Ježek, Johanna Kirchhoff, Astrid Tannert, Ute Neugebauer, Ota Samek, Pavel Zemánek
Subject: Physical Sciences, Optics
Keywords: Raman microspectroscopy; optical tweezers; optofluidics; E. coli; antibiotics

Analyzing the cells in various body fluids can greatly deepen the understanding of the mechanisms governing cellular physiology. Because of the variability of physiological and metabolic states, it is important to be able to perform such studies on individual cells. Therefore, we developed an optofluidic system in which we precisely manipulated and monitored individual cells of Escherichia coli.
We used laser tweezers Raman spectroscopy (LTRS) in a microchamber chip to manipulate and analyze individual E. coli cells. We subjected the cells to the antibiotic cefotaxime, and we observed the changes by time-lapse microscopy and Raman spectroscopy. We found observable changes in the cellular morphology (cell elongation) and in the Raman spectra, which were consistent with other recently published observations. We tested the capabilities of the optofluidic system and found it to be a reliable and versatile solution for this class of microbiological experiments.

Promoting Sustainability Transparency in European Local Governments: An Empirical Analysis Based on Administrative Cultures
Andrés Navarro-Galera, Mercedes Ruiz-Lozano, Pilar Tirado-Valencia, Araceli de los Ríos-Berjillos
Subject: Social Sciences, Business And Administrative Sciences
Keywords: sustainability; transparency; local governments; administrative cultures; e-government

Nowadays, the transparency of governments with respect to the sustainability of public services is a very interesting issue for stakeholders and academics. This has led previous researchers and international organisations (EU, IMF, OECD, United Nations, IFAC, G-20, World Bank) to recommend promoting the online dissemination of economic, social and environmental information. Based on previous studies about e-government and the influence of administrative cultures on governmental accountability, this paper seeks to identify political actions useful for improving transparency practices on economic, social and environmental sustainability in European local governments. We perform a comparative analysis of the sustainability information published on the websites of 72 local governments in 10 European countries grouped into three main cultural contexts (Anglo-Saxon, Southern European and Nordic).
Using international sustainability reporting guidelines, our results reveal significant differences in local government transparency in each context. The most transparent local governments are the Anglo-Saxon ones, followed by the Southern European and Nordic governments. Based on individualized empirical results for each administrative style, our conclusions propose useful policy interventions to enhance sustainability transparency within each cultural tradition, such as the development of legal rules on transparency and sustainability, tools to motivate local managers towards online diffusion of sustainability information, and analysis of the information needs of stakeholders.

The Relation between Frequency of E-Cigarette Use and Frequency and Intensity of Cigarette Smoking among South Korean Adolescents
Jung Ah Lee, Sungkyu Lee, Hong-Jun Cho
Subject: Medicine & Pharmacology, Other
Keywords: electronic cigarette; e-cigarette; smoking; adolescent; frequency; tobacco

Introduction. The prevalence of adolescent electronic cigarette (e-cigarette) use has increased in most countries. This study determines the relation between the frequency of e-cigarette use and the frequency and intensity of cigarette smoking. Furthermore, it evaluates the association between the reasons for e-cigarette use and the frequency of its use. Materials and Methods. Participants were 68,043 middle and high school students aged 13–18 years from the 2015 Korean Youth Risk Behavior Web-Based Survey. Of the 68,043 participants, we analyzed 6,655 adolescents with an experience of e-cigarette use. Results. The prevalence of ever use and current (past 30 days) use of e-cigarettes was 10.1% and 3.9%, respectively. Of the ever e-cigarette users, approximately 40% used e-cigarettes ≥1/month and 8.1% used e-cigarettes daily. Daily e-cigarette use was 10 times more prevalent among daily cigarette smokers than among those smoking cigarettes <1/month (18.1% vs.
1.8%) and 16 times more prevalent among those smoking ≥20 cigarettes/day than among those smoking <1 cigarette/month (38.9% vs. 2.4%). The most common reason for e-cigarette use was curiosity (22.9%), followed by the belief that they are less harmful than conventional cigarettes (18.9%), smoking cessation (13.1%), and indoor use (10.7%). Curiosity was the most common reason among less frequent e-cigarette users; however, smoking cessation and indoor use were the most common reasons among more frequent users. Conclusions. The results showed a positive relation between the frequency or intensity of conventional cigarette smoking and the frequency of e-cigarette use among Korean adolescents, and the frequency of e-cigarette use differed according to the reason for using e-cigarettes.

Proteomic Comparative Analysis of Pregnancy Serum Facilitating Hepatitis E Virus Replication in Hepatoma Cells
Yi Li, Yanhong Bi, Wenhai Yu, Chenchen Yang, Jue Wang, Feiyan Long, Yunlong Li, Fen Huang, Xiuguo Hua
Subject: Biology, Animal Sciences & Zoology
Keywords: hepatitis E virus; proteomic comparative analysis; pregnancy serum

Hepatitis E virus (HEV) is a common cause of acute hepatitis worldwide, with a mortality rate of approximately 25% among infected pregnant women. We previously reported that pregnancy serum facilitates HEV replication in vitro. However, the differences in HEV-infected host cells supplemented with pregnancy serum versus fetal bovine serum (FBS) are unclear. In this study, differentially expressed proteins were identified in HEV-infected hepatoma cells (HepG2) supplemented with different sera by using isobaric tags for relative and absolute quantitation. Proteomic analysis indicated that HEV infection induced 1014 significantly differentially expressed proteins in HEV-infected HepG2 cells supplemented with FBS compared with pregnancy serum. Further validation by Western blot confirmed that filamin A, heat-shock proteins 70 and 90, cytochrome c, and thioredoxin were associated with HEV infection.
This comparative analysis provides an important basis for further investigating HEV pathogenesis in pregnant women and HEV replication.

Immunization with E Plus NS1-2a Enhanced Protection against Dengue Virus Serotype 2 in Mice
Yanhua Wu, Shuyu Fang, Xiaoyun Cui, Na Gao, Dongying Fan, Jing An
Subject: Medicine & Pharmacology, General Medical Research
Keywords: Dengue virus; E; NS1-2a; electroporation; DNA vaccine

Dengue virus (DENV), the causative agent of dengue fever (DF), is one of the most important mosquito-borne viruses that can infect humans. Although much effort has been devoted to the prevention and control of dengue, there are currently no antiviral drugs and no worldwide-approved vaccine. In this study, we immunized six-week-old Balb/c mice with the DNA vaccine candidates E and NS1-2a of DENV serotype 2, or their combination (E+NS1-2a), via an electroporation (EP)-assisted intramuscular gene delivery system and evaluated the immune response and protection. The highest specific antibody titres and cytokine levels secreted by splenocytes, as well as the highest survival rate, were observed in the E+NS1-2a group, followed by the E group and the NS1-2a group. Our data suggest that the combination of E and NS1-2a delivered by EP may be a superior preventive strategy against DENV.

21st Century Technology Renascence A Driven Impacting Factor For Future Energy, Economy, Ecommerce, Education, or Any Other E-Technologies
Bahman Zohuri
Subject: Engineering, General Engineering
Keywords: Modern Technology; Traditional Technology; Technology Renascence; E-Banking; Ecommerce; Education; Energy; Economy and Other E-Technologies; Artificial Intelligence; Business Intelligence

Abstract: The human race has always innovated, and in a relatively short time went from building fires and making stone-tipped arrows to creating smartphone apps and autonomous robots. Today, technological progress will undoubtedly continue to change the way we work, live, and survive in the coming decades.
Since the beginning of the new millennium, the world has witnessed the emergence of social media, smartphones, self-driving cars, and autonomous flying vehicles. There have also been huge leaps in energy storage, artificial intelligence, and medical science. We are facing immense challenges in global warming and food security, among many other issues. While human innovation has contributed to many of the problems we are facing, it is also human innovation and ingenuity that can help humanity deal with these issues. "New directions in science are launched by new tools much more often than by new concepts. The effect of a concept-driven revolution is to explain old things in new ways. The effect of a tool-driven revolution is to discover new things that have to be explained." (F. Dyson, 1997). In this article, we review the impact of technology, as it evolves at the beginning of the 21st century, on the future prospects of energy demand (in renewable or non-renewable form), the economy, e-commerce, education and other E-related modern technologies.

Evidence for the Magnetoionic Nature of Oblique VHF Reflections from Midlatitude Sporadic-E Layers
Chris Deacon, Cathryn Mitchell, Robert Watson, Ben Witvliet
Subject: Earth Sciences, Atmospheric Science
Keywords: sporadic-E; Es; ionosphere; mesosphere-lower thermosphere; VHF; polarization

Mid-latitude sporadic-E (Es) is an intermittent phenomenon of the lower E region of the ionosphere. Es clouds are thin, transient, and patchy layers of intense ionization, with ionization densities which can be much higher than in the background ionosphere. Oblique reflection of radio signals in the very high frequency (VHF) range is regularly supported, but the mechanism for it has never been clearly established: specular reflection, scattering, and magnetoionic double refraction have all been suggested.
This article proposes using the polarization behaviour of signals reflected from intense midlatitude sporadic-E clouds as an indicator of the true reflection mechanism. Results are presented from a measurement campaign in the summer of 2018, which gathered a large amount of data at a receiving station in the UK using 50 MHz amateur radio beacons as signal sources. In all cases the signals received were elliptically polarized, despite being transmitted with linear polarization; there were also indications that the polarization behaviour varied systematically with the orientation of the path to the geomagnetic field. For all the examples recorded, this represents clear evidence that the signals were reflected from midlatitude Es by magnetoionic double refraction.

Concentrations of Ciprofloxacin in the World's Rivers Are Associated With the Prevalence of Fluoroquinolone Resistance in E. coli: A Global Ecological Analysis
Chris Kenyon
Subject: Medicine & Pharmacology, Other
Keywords: Rivers; one-health; E. coli; fluoroquinolones; antimicrobial resistance; AMR

Extremely low concentrations of ciprofloxacin may select for antimicrobial resistance. A recent global survey found that ciprofloxacin concentrations exceeded safe levels at 64 sites. We assessed whether national median ciprofloxacin concentrations in rivers were associated with fluoroquinolone resistance in Escherichia coli. Methods. Spearman's correlation was used to assess the country-level association between the national prevalence of fluoroquinolone resistance in E. coli and the median ciprofloxacin concentration in each country's rivers. Results. The prevalence of fluoroquinolone resistance in E. coli was positively correlated with the concentration of ciprofloxacin in rivers (ρ=0.36; P=0.011; N=48). Conclusions. Steps to reduce the concentrations of fluoroquinolones in rivers may help prevent the emergence of resistance in E. coli and other bacterial species.
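The Spearman coefficient ρ reported above is just the Pearson correlation computed on ranks. A minimal from-scratch sketch, using toy numbers rather than the study's river measurements:

```python
def ranks(xs):
    """Rank values from 1..n, assigning tied values their average rank."""
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    r = [0.0] * len(xs)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and xs[order[j + 1]] == xs[order[i]]:
            j += 1  # extend over a run of ties
        avg = (i + j) / 2 + 1
        for k in range(i, j + 1):
            r[order[k]] = avg
        i = j + 1
    return r

def spearman(x, y):
    """Spearman's rho = Pearson correlation of the rank vectors."""
    rx, ry = ranks(x), ranks(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    vx = sum((a - mx) ** 2 for a in rx) ** 0.5
    vy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (vx * vy)

x = [0.1, 0.4, 0.2, 0.9]    # e.g. median concentrations (toy values)
y = [5.0, 20.0, 11.0, 40.0]  # e.g. resistance prevalence (toy values)
print(spearman(x, y))  # ≈ 1.0: the toy data are perfectly monotone
```

In practice one would use `scipy.stats.spearmanr`, which also returns the p-value quoted in the abstract.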
One Step E-Beam Radiation Cross-Linking of Quaternary Hydrogel Dressings Based on Chitosan-Poly(Vinyl-Pyrrolidone)-Poly(ethylene Glycol)-Poly(Acrylic Acid)
Ion Călina, Maria Demeter, Anca Scărișoreanu, Veronica Sătulu, Bogdana Mitu
Subject: Chemistry, Analytical Chemistry
Keywords: hydrogel; e-beam cross-linking; swelling; ibuprofen; network parameters

We report on the successful preparation of wet-dressing hydrogels based on Chitosan-Poly(N-Vinyl-Pyrrolidone)-Poly(ethylene glycol)-Poly(acrylic acid) and Poly(ethylene oxide) by e-beam cross-linking in weakly acidic media, to be used for rapid healing and pain relief of infected skin wounds. The structure and composition of the hydrogels, investigated through sol-gel and swelling studies, network parameters, and FTIR and XPS analyses, showed the efficient interaction of the hydrogel components upon irradiation: the bonding environment was maintained while the cross-linking degree increased with the irradiation dose, forming a structure with a mesh size in the range of 11-67 nm. Hydrogels with a gel fraction above 85% and the best swelling properties in solutions of different pH were obtained at a dose of 15 kGy. The hydrogels are stable in the simulated physiological conditions of an infected wound and show appropriate moisture retention capability and a water vapor transmission rate of up to 272.67 g m-2 day-1, ensuring fast healing. The hydrogels proved to have a significant loading capacity for ibuprofen (IBU), being able to incorporate a therapeutic dose for the treatment of severe pain. IBU was released up to 25% in the first 2 h, with a release maximum after 8 h.
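The gel-fraction and swelling figures quoted above follow from standard gravimetric definitions (gel fraction: insoluble cross-linked mass over initial dry mass; swelling degree: water uptake relative to dry mass). A minimal sketch with illustrative masses, not the paper's measurements:

```python
def gel_fraction(dry_after_extraction, initial_dry):
    """Gel fraction (%): insoluble cross-linked mass / initial dry mass."""
    return 100.0 * dry_after_extraction / initial_dry

def swelling_degree(swollen, dry):
    """Equilibrium swelling degree (%) relative to dry mass."""
    return 100.0 * (swollen - dry) / dry

# hypothetical masses in grams
print(gel_fraction(0.44, 0.50))   # 88.0  -> above the 85% reported threshold
print(swelling_degree(6.0, 0.5))  # 1100.0
```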
Synthesis of Silver Modified Bioactive Glassy Materials with Antibacterial Properties via Facile and Low-Temperature Route
Isabel Gonzalo-Juan, Fangtong Xie, Malin Becker, Dilshat Tulyaganov, Emanuel Ionescu, Stefan Lauterbach, Francesca De Angelis Rigotti, Andreas Fischer, Ralf Riedel
Subject: Materials Science, Biomaterials
Keywords: Bioactive glass; antibacterial; silver; nanocomposites; E. coli; ion release

There is an increasing clinical need to develop novel biomaterials that combine regenerative and biocidal properties. In this work, we present the preparation of silver/silica-based glassy bioactive (ABG) compositions via a facile, fast (20 h), low-temperature (80 °C) approach, and their characterization. The fabrication process included the synthesis of the bioactive glass (BG) particles followed by the surface modification of the bioactive glass with silver nanoparticles. The microstructural features of the ABG samples before and after exposure to simulated body fluid (SBF), as well as their ion release behavior during the SBF test, were evaluated using infrared spectrometry (FTIR), ultraviolet-visible (UV-Vis) spectroscopy, X-ray diffraction (XRD), electron microscopy (TEM and SEM) and optical emission spectroscopy (OES). The antibacterial properties of the experimental compositions were tested against Escherichia coli (E. coli). The results indicated that the prepared ABG materials possess antibacterial activity against E. coli, which is directly correlated with the glass surface modification.

Nikfar Domination in Neutrosophic Graphs
Mohammadesmail Nikfar
Subject: Mathematics & Computer Science, Applied Mathematics
Keywords: Neutrosophic graph, bridge, tree, effective edge, nikfar domination

The many and varied uses of this new-born fuzzy model for solving real-world problems, and the urgent requirements they raise, call for introducing a new concept for analyzing the situations, which leads to solving them by a proper, quick and efficient method based on statistical data.
This gap between the model and its solution leads us to introduce nikfar domination in neutrosophic graphs as a creative and effective tool for studying a few selected vertices of this model, instead of all of them, by using special edges. The particular selection of these edges helps to achieve a quick and proper solution to these problems. This domination has never been introduced before, so we have no comparison with other definitions. The most commonly used graphs, those which are complete, empty, bipartite, trees and the like, and which thereby earn their names, are studied as fuzzy models for obtaining a nikfar dominating set, or at least getting close to one. We also obtain the relations between this special edge, which plays the main role in the domination, and other special types of edges of a graph, such as bridges. Finally, the relation between this number and other special numbers and characteristics of the graph, such as its order, is discussed.

Determining the AMSR-E SST Footprint from Co-located MODIS SSTs
Brahim Boussidi, Peter Cornillon, Gavino Puggioni, Chelle Gentemann
Subject: Earth Sciences, Oceanography
Keywords: footprint, constrained least squares, bootstrap, SST, AMSR-E, MODIS

This study was undertaken to derive and analyze the Advanced Microwave Scanning Radiometer - EOS (AMSR-E) sea surface temperature (SST) footprint associated with the Remote Sensing Systems (RSS) Level-2 (L2) product. The footprint, in this case, is characterized by the weight attributed to each 4 × 4 km square contributing to the SST value of a given AMSR-E pixel. High-resolution L2 SST fields obtained from the MODerate-resolution Imaging Spectroradiometer (MODIS), carried on the same spacecraft as AMSR-E, are used as the sub-resolution "ground truth" from which the AMSR-E footprint is determined. Mathematically, the approach is equivalent to a linear inversion problem, and its solution is pursued by means of a constrained least squares approximation based on a bootstrap sampling procedure.
The method yielded an elliptic-like Gaussian kernel with an aspect ratio of 1.58, very close to the AMSR-E 6.93 GHz channel aspect ratio of 1.7. (The 6.93 GHz channel is the primary spectral frequency used to determine SST.) The semi-major axis of the estimated footprint is found to be aligned with the instantaneous field-of-view of the sensor, as expected from the geometric characteristics of AMSR-E. Footprints were also analyzed year-by-year and as a function of latitude and found to be stable – no dependence on latitude or on time. Precise knowledge of the footprint is central for any satellite-derived product characterization and, in particular, for efforts to deconvolve the heavily oversampled AMSR-E SST fields and for studies devoted to product validation and comparison. A preliminary analysis suggests that use of the derived footprint will reduce the variance between AMSR-E and MODIS fields compared to the results obtained. Maternal Supplementation with RRR-α-Tocopherol (400 IU) and Its Relationship with Serum and Breast Milk Retinol Juliana Fernades Dos Santos Dametto, Larissa Queiroz de Lira, Larisse Rayanne Miranda Melo, Marina Mendes Damasceno, Nathalia Lorena do Nascimento Silva, Roberto Dimenstein Subject: Life Sciences, Biochemistry Keywords: Vitamin E; vitamin A; maternal serum; lactation; liquid chromatography. Vitamins A and E are important during pregnancy, the neonatal period, and childhood. The objective of this study was to assess whether maternal RRR-α-tocopherol supplementation affects serum and breast milk retinol. Serum was collected at baseline and twenty days later, and breast milk at baseline and on days 1, 7, and 20 after delivery. After the baseline serum collection, the supplemented group (n=16) received a single dose of 400 IU of RRR-α-tocopherol. The control group (n=18) only underwent collections. Retinol and alpha-tocopherol levels were determined by liquid chromatography.
Serum retinol and alpha-tocopherol at baseline and 20 days after delivery indicated proper vitamin A (> 20 µg/dL) and E (> 516 μg/dL) statuses in the control and supplemented groups (p > 0.05). Colostrum retinol levels on days 1 and 7 after delivery were significantly higher in the supplemented group (p = 0.018 and p = 0.012, respectively). Maternal vitamin E supplementation increased colostrum retinol by 52.23% and 111.2% at 24 hours and 7 days after delivery, respectively. However, retinol in mature milk did not differ between the groups (p > 0.05). In conclusion, supplementation with 400 IU of RRR-α-tocopherol improved vitamin A bioavailability in breast milk. High Bacterial Agglutination Activity in a Single-CRD C-Type Lectin from Spodoptera exigua (Lepidoptera: Noctuidae) Leila Gasmi, Juan Ferré, Salvador Herrero Subject: Life Sciences, Biotechnology Keywords: C-type lectin; agglutination; CRD; bacterial detection; E. coli Lectins are carbohydrate-interacting proteins playing a pivotal role in multiple physiological and developmental aspects of all organisms. They can specifically interact with different bacterial and viral pathogens through carbohydrate-recognition domains (CRD). In addition, lectins are also of biotechnological interest because of their potential use as biosensors for capturing and identifying bacterial species. In this work, we have characterized the bacterial agglutination properties of three C-type lectins from the Lepidoptera Spodoptera exigua. One of these lectins, BLL2, was able to agglutinate cells from a broad range of bacterial species at an extremely low concentration, making it a very interesting protein to be used as a biosensor or in other biotechnological applications involving bacterial capture. What Do We Know Now about IgE-Mediated Wheat Allergy in Children?
Grażyna Czaja-Bulsa, Michał Bulsa Subject: Medicine & Pharmacology, Allergology Keywords: wheat allergy; specific immunoglobulin E; children; gluten-related disorders IgE-mediated wheat allergy is a gluten-related disorder. Wheat is one of the five most common food allergens in children. However, the natural history of IgE-mediated wheat allergy has seldom been described in the research literature. This study presents the current state of knowledge about IgE-mediated wheat allergy in children. Hepatitis E Virus Seroprevalence and Associated Risk Factors in Pregnant Women Attending Antenatal Consultations in Senegal Abou Abdallah Malick Diouara, Seynabou Lo, Cheikh Momar Nguer, Assane Senghor, Halimatou Diop Ndiaye, Noël Magloire Manga, Fodé Danfakha, Sidy Diallo, Marie Edouard Faye Dieme, Ousmane Thiam, Babacar Biaye, Ndèye Marie Pascaline Manga, Fatou Thiam, Habibou Sarr, Gora Lo, Momar Ndour, Sébastien Paterne Manga, Nouhou Diaby, Modou Dieng, Idy Diop, Yakhya Dieye, Coumba Toure Kane, Martine Peeters, Ahidjo Ayouba Subject: Life Sciences, Virology Keywords: Hepatitis E; Associated risk factors; Pregnant women; Environment; Prevention; Senegal In West Africa, research on the hepatitis E virus (HEV) is barely covered despite the recorded outbreaks. The persistently low level of access to safe water and adequate sanitation is one of the main factors of HEV spread in developing countries. HEV infection induces acute or sub-clinical liver disease with a mortality rate ranging from 0.5 to 4%. The mortality rate is more alarming (15 to 25%) among pregnant women, especially in the last trimester of pregnancy. Here, we conducted a multicentric socio-demographic and seroepidemiological survey of HEV in Senegal among pregnant women. A total of 1,227 consenting participants attending antenatal clinics responded to our questionnaire. Plasma samples were collected and tested for anti-HEV IgM and IgG by using the WANTAI HEV-IgM and IgG ELISA assays.
HEV global seroprevalence was 7.9%, with 0.5% and 7.4% for HEV IgM and HEV IgG, respectively. One participant's sample was IgM/IgG positive, while four were declared indeterminate for anti-HEV IgM as per the manufacturer's instructions. From one locality to another, the seroprevalence of HEV antibodies varied from 0 to 1% for HEV IgM and from 1.5 to 10.5% for HEV IgG. The data also showed that seroprevalence varied significantly by marital status (p < 0.0001), by the regularity of income (p = 0.0043) and by access to sanitation services (p = 0.0006). These data could serve as a basis to set up national prevention strategies focused on socio-cultural, environmental and behavioral aspects for better management of HEV infection in Senegal. Acute Severe Hepatitis of Unknown Etiology in Children: A Mini-Review Andri Frediansyah, Malik Sallam, Amanda Yufika, Khan Sharun, Muhammad Iqhrammullah, Deepak Chandran, Sukamto S. Mamada, Dina E. Sallam, Yousef Khader, Yohannes K. Lemu, Fauzi Yusuf, James-Paul Kretchy, Ziad Abdeen, J. S. Torres-Roman, Yogesh Acharya, Anastasia Bondarenko, Aamer Ikram, Kurnia F. Jamil, Katarzyna Kotfis, Ai Koyanagi, Lee Smith, Dewi Megawati, Marius Rademaker, Ziad A. Memish, Sandro Vento, Firzan Nainu, Harapan Harapan Subject: Life Sciences, Virology Keywords: Acute non-hepA–E hepatitis; clinical manifestations; epidemiological characteristics; prevention The emergence of acute, severe non-hepA–E hepatitis of unknown etiology (ASHUE) has attracted global concern owing to the very young age of the patients and its unknown etiology. Although this condition has been linked to several possible causes, including viral infection, drugs, and/or toxin exposure, the exact cause remains unknown; this makes treatment recommendations very difficult.
In this review, we summarize recent updates on the clinical manifestations, complemented with laboratory results, case numbers with their global distribution and other epidemiological characteristics, and the possible etiologies. We also outline proposed actions that could be undertaken to control and prevent further spread of this hepatitis. Since many etiological and pathological aspects of acute non-hepA–E hepatitis remain unclear, further research is needed to minimize the severe impact of this disease. Examining the Impact of E-Supply Chain on Service Quality and Customer Satisfaction: A Case Study Maryam Abdirad, Krishna Krishnan Subject: Engineering, Industrial & Manufacturing Engineering Keywords: Service Quality; E-Supply Chain Management; Customer Satisfaction; online shopping The purposes of this study are to introduce the concept of Service Quality (SQ) in E-Supply Chain Management (E-SCM) and its impact on increasing Customer Satisfaction (CS), and to provide insightful enhancements to the literature. In addition, the paper also examines the influence of the SQ of E-SCM on CS in online shopping. After a comprehensive literature review, four key factors for measuring the E-Supply Chain (Process Control, Interaction with Supplier, Management Support, and Focus on Customers), four key factors for measuring CS (Informing Customers, Attention to Customers' Needs, Staff Performance Accuracy, and Easy Access to Services), and four factors for measuring the quality of identification services (Assurance, Accountability, Tangibility and Reliability) were selected. The proposed conceptual model was then presented. This model was validated with data collected through a survey of 150 respondents, including customers of online shopping websites in Iran. The sample data was analyzed using SPSS21, after which the interrelationships between the model and factors were examined using Partial Least Squares Structural Equation Modeling (PLS-SEM).
Model fit indices were then calculated for the dataset. The proposed model was validated using factor analysis and structural equation modeling techniques. The results indicated that E-SCM has a direct impact on CS. The effect of SQ was also confirmed. A positive and significant relationship was identified between E-SCM and CS, E-SCM and SQ, as well as SQ and CS (P < 0.05). The first limitation was convincing respondents to cooperate with the researchers. The second was the lack of research-related background due to the subject being relatively new. This study, to the best of the authors' knowledge, is the first empirical analysis of the CS assessment of the SQ of the E-Supply Chain in online shopping. This important link to online shopping has rarely been explored. It is expected that by filling this gap, this study will help in strengthening online shopping, which needs a change in the marketing area. Integration of the UTAUT2 Model: Adoption of E-Commerce as a Solution for the Fashion Industry in Bandung Facing the COVID-19 Pandemic Astri Wulandari, Donni Junipriansa, Bethani Suryawardani, Dandy Marcelino Subject: Social Sciences, Business And Administrative Sciences Keywords: e-commerce adoption; UTAUT2; fashion industry; digital marketing; covid pandemic. It would be very unfortunate if the growth of the fashion industry in Bandung, which is increasing every year, were not balanced with the application of digitalization and the use of the latest technology. Apart from the increase in the number of internet users, the community's online shopping style is also one of the driving forces for the growth of e-commerce, especially in the midst of the current COVID-19 pandemic situation.
This study examines the adoption of e-commerce technology for selling online in the midst of the COVID-19 pandemic by fashion industry players in Bandung, using these variables from the UTAUT2 model: effort expectancy, performance expectancy, facilitating conditions, social influence, existing habits, hedonic motivation, and price value. In this way, it can show the contribution of the adoption of e-commerce technology to the behavioural intention and use behaviour of fashion industry consumers in Bandung. Our research uses a quantitative approach and a causal study with the Structural Equation Modeling (SEM) analysis technique, using SMARTPLS 3.2.9 software. The researchers chose accidental sampling with a total of 400 respondents. All exogenous variables together explain 80.9% of behavioural intention and 54.9% of use behaviour. Virulence Factors of Enteric Pathogenic Escherichia coli: A Review Babak Pakbin, Wolfram Manuel Bruck, John W. A. Rossen Subject: Life Sciences, Microbiology Keywords: Enteric pathogenic Escherichia coli; E. coli pathotypes; Virulence factor genes Escherichia coli are remarkably versatile microorganisms and important members of the normal intestinal microbiota of humans and animals. This harmless commensal organism can acquire a mixture of comprehensive mobile genetic elements that contain genes encoding virulence factors, becoming an emerging human pathogen capable of causing a broad spectrum of intestinal and extraintestinal diseases. Nine definite enteric E. coli pathotypes have been well characterized, causing diseases ranging from various gastrointestinal disorders to urinary tract infections. These pathotypes employ many virulence factors and effectors subverting the functions of host cells to mediate their virulence and pathogenesis. This review summarizes new developments in our understanding of diverse virulence factors and their encoding genes used by different pathotypes of enteric pathogenic E.
coli to cause intestinal and extraintestinal diseases in humans. Enterococcus faecalis Enhances Candida albicans-Mediated Tissue Destruction in a Strain-Dependent Manner Akshaya Lakshmi Krishnamoorthy, Alex A Lemus, Adline Princy Solomon, Alex M Valm, Prasanna Neelakantan Subject: Medicine & Pharmacology, Allergology Keywords: biofilm; Candida albicans; E-cadherin; Enterococcus faecalis; FISH; oral mucosa. Candida albicans, as an opportunistic pathogen, exploits the host immune system and causes a variety of life-threatening infections. The polymorphic nature of this fungus gives it a tremendous advantage in breaching mucosal barriers and causing a variety of oral and disseminated infections. Enterococcus faecalis, another opportunistic pathogen, co-exists with C. albicans in several niches in the human body, including the oral cavity and gastrointestinal tract. However, interactions between E. faecalis and C. albicans on oral mucosal surfaces remain unknown. Here, for the first time, we comprehensively characterized the interactive profiles between laboratory and clinical isolates of C. albicans (SC5314 and BF1) and E. faecalis (OG1RF and 846) on an organotypic oral mucosal model. Our results demonstrated that the two species formed robust biofilms on the mucosal tissue surface with profound surface erosion and fungal invasion. Specifically, this effect was more pronounced in the laboratory isolates than in the clinical isolates. Notably, several genes of C. albicans involved in tissue adhesion, hyphal formation, fungal invasion, and biofilm formation were significantly upregulated in the presence of E. faecalis. This study highlights the strain-dependent cross-kingdom interactions between E. faecalis and C. albicans on oral mucosa, demonstrating the need to study more substrate-dependent polymicrobial interactions.
A Way of Marketing 3D Web in E-commerce, Applied at Car Showrooms in the Period of Industrial Revolution 4.0 Pham Minh Dat, Nguyen Thi Hang, Nguyen Van Huan Subject: Social Sciences, Business And Administrative Sciences Keywords: industrial revolution 4.0; enterprise; e-commerce; 3D web marketing model Marketing is one of the most important stages, a decisive factor affecting the success of the production and business activities of every business. Therefore, marketing is considered the first and most important stage in the process of introducing products, bringing them to market and branding businesses, especially in a period when the disease situation is developing in a very complicated way worldwide. This study provides both qualitative and quantitative results for the following research objectives. Firstly, the study assesses the situation and analyzes the appropriateness of marketing models for the current market trend. Secondly, the study proposes an approach to building a new type of marketing model, namely 3D web-based marketing applied in e-commerce, to support the modeling of enterprise products in the form of interactive 3D products similar to real products. Thirdly, the study tests and evaluates the 3D e-commerce web marketing model at the Truong Hai Auto Showroom, Thai Nguyen City branch. This study also proposes a number of solutions for the research and deployment of the 3D web marketing model for businesses in the current market situation. Energy-Optimized Pharmacophore Coupled Virtual Screening in the Discovery of Quorum Sensing Inhibitors of LasR Protein of Pseudomonas aeruginosa Zulkar Nain, Sifat Bin Sayed, Mohammad Minnatul Karim, Md. Ariful Islam, Utpal Kumar Adhikari Subject: Medicine & Pharmacology, Other Keywords: Pseudomonas aeruginosa; Quorum sensing; Virtual screening; E-pharmacophore; Drug discovery. Pseudomonas aeruginosa is an emerging opportunistic pathogen responsible for infections in cystic fibrosis patients and for nosocomial infections.
In addition, empirical treatments have become inefficient due to the bacterium's multiple-antibiotic resistance and extensive colonizing ability. Quorum sensing (QS) plays a vital role in the regulation of virulence factors in P. aeruginosa. Attenuation of virulence by QS inhibition could be an alternative and effective approach to control infections. Therefore, we sought to discover new QS inhibitors (QSIs) against the LasR receptor in P. aeruginosa using chemoinformatics. Initially, a structure-based high-throughput virtual screening was performed using the LasR active site, which identified 61,404 relevant molecules. E-pharmacophore (ADAHH) screening of these molecules rendered 72 QSI candidates. In standard-precision docking, only 7 compounds were found to be potential QSIs due to their higher binding affinity to the LasR receptor (−7.53 to −10.32 kcal/mol, compared to −7.43 kcal/mol for the native ligands). The ADMET properties of these compounds were suitable for QSIs. Later, extra-precision docking and binding energy calculation suggested ZINC19765885 and ZINC72387263 as the most promising QSIs. The dynamic simulation of the docked complexes showed good binding stability and molecular interactions. The current study suggests that these two compounds could be used in P. aeruginosa QS inhibition to combat bacterial infections. Differential Methylation in APOE Genes (Chr19; exon four; from 44,909,188 to 44,909,373) and Increased Apolipoprotein E Plasmatic Levels in Subjects with Mild Cognitive Impairment Oscar Mancera Páez, Kelly Estrada Orozco, Maria Fernanda Mahecha, Francy Cruz Sanabria, Kely Bonilla-Vargas, Nicolas Sandoval, Esneyder Guerrero, David Salcedo-Tacuma, Jesús Melgarejo, Edwin Vega, Jenny Ortega, Gustavo C. Roman, Rodrigo Pardo-Turriago, Humberto Arboleda Subject: Medicine & Pharmacology, Clinical Neurology Keywords: APOE gene; Apolipoprotein E; DNA methylation; Mild cognitive impairment; Hispanics.
Background: Biomarkers are essential for the identification of individuals at high risk of mild cognitive impairment (MCI) for potential prevention of dementia. We investigated DNA methylation in the APOE gene and plasmatic apolipoprotein E (ApoE) levels as MCI biomarkers in Colombian subjects with MCI and controls. Methods: 100 participants were included (71% women; average age 70 yrs., range 43–91). MCI was diagnosed by neuropsychological testing, medical and social history, activities of daily living, cognitive symptoms and neuroimaging. Multivariate logistic regression models adjusted by age and gender were performed to examine the risk association of MCI with plasma ApoE and APOE methylation. Results: MCI was diagnosed in 41 subjects (average age, 66.5±9.6 yrs.) and compared with 59 controls. Elevated plasma ApoE and APOE methylation of CpGs 165, 190, and 198 were risk factors for MCI (P<0.05). Higher CpG-227 methylation correlated with lower risk for MCI (P=0.002). Only CpG-227 was significantly correlated with plasmatic ApoE levels (correlation coefficient=-0.665; P=0.008). Conclusion: Differential APOE methylation and increased plasma ApoE levels were correlated with MCI. These epigenetic patterns can be used as potential biomarkers to identify early stages of MCI. Isolation and Characterisation of Bacteriophages with Lytic Activity Against Virulent Escherichia coli O157:H7: Potential Bio-Control Agents Collins Njie Ateba, Muyiwa Ajoke Akindolire Subject: Biology, Other Keywords: Bacteriophages; Bio-control; E. coli O157:H7; Podoviridae; TEM; safety Bacteriophages can provide alternative measures for the control of E. coli O157:H7, currently an emerging food-borne pathogen of severe public health concern. This study was aimed at characterising E. coli O157:H7-specific phages as potential biocontrol agents for these pathogens. Fifteen phages were isolated and screened against 69 environmental E. coli O157:H7 strains.
Only 3 phages displayed broad lytic spectra against environmental shiga toxin-producing E. coli O157:H7 strains. These 3 lytic phages were designated V3, V7 and V8. Subsequent characterization indicated that they displayed a very high degree of similarity despite being isolated from different locations. Transmission electron microscopy (TEM) of the phages revealed that they all had isometric heads of about 73–77 nm in diameter and short tails ranging from 20 to 25 nm in diameter. Phages V3, V7 and V8 were assigned to the family Podoviridae based on their morphology. Pulsed field gel electrophoresis (PFGE) genome estimation of the 3 phages demonstrated identical genome sizes of ~69 kb. The latent periods of these phages were 20 min, 15 min, and 20 min for V3, V7 and V8, respectively, while the burst sizes were 374, 349 and 419 PFU/infected cell, respectively. While all the phages were relatively stable over a wide range of salinity, temperature and pH values, their range of infectivity or lytic profile was rather narrow on environmental E. coli O157:H7 strains isolated from cattle faeces. This study showed that Podoviridae bacteriophages are the dominant E. coli O157:H7-infecting phages harboured in cattle faeces in the North-West Province of South Africa and, due to their favourable characteristics, can be exploited in the formulation of phage cocktails for the bio-control of E. coli O157:H7 in meat and other meat products. Meander Microwave Bandpass Filter on Flexible Textile Substrate Bahared Moradi, Raul Fernández-García, Ignacio Gil Subject: Engineering, Electrical & Electronic Engineering Keywords: band-pass filter; E-textile; stepped impedance resonator; meandered resonator This paper presents an alternative process to fabricate flexible bandpass filters by using an embroidered yarn conductor on an electronic textile.
The novelty of the proposed miniaturized filter is its complete integration on the outfit, with benefits in terms of compressibility, stretchability and high geometrical accuracy, opening the way to develop textile filters for sports and medical wearable applications. The proposed design consists of a fully embroidered microstrip topology with a length equal to a quarter wavelength (λ/4) to develop a bandpass filter frequency response. A drastic reduction in the size of the filter was achieved by taking advantage of a simplified architecture based on a meandered-line stepped impedance resonator. The e-textile microstrip filter has been designed, simulated, fabricated and measured, with experimental validation at a frequency of 7.58 GHz. The insertion loss of the filter obtained by simulation is substantially small, and the return loss is greater than 20 dB in the operating bands. To explore the relations between the physical parameters and the filter performance characteristics, a theoretical equivalent circuit model of the filter's constituent components was studied. The effect of bending the e-textile filter is also studied. The results show that by changing the bending radius up to 40 mm, the resonance frequency is shifted by 4.25 MHz/mm. Game Analysis of Low Carbonization for Urban Logistics Service Systems Jidong Guo, Shugang Ma Subject: Social Sciences, Organizational Economics & Management Keywords: 3PLs; E-business enterprise; low carbonization; game theory; Nash Equilibria To improve carbon efficiency for an urban logistics service system composed of a third-party logistics service provider (3PLs) and an e-business enterprise, the low-carbon operation game between them was studied. Considering the low-carbon technology investment cost and the sales expansion effect of the low-carbon level, profit functions for both players were constructed. Based on their different bargaining capabilities, a total of 5 types of game scenarios were designed.
Through analytical solution, Nash Equilibria under the varied scenarios were obtained. By analyzing these equilibria, 4 major propositions were given, in which some key variables and system performance indices were compared. Results show that the best system yields could only be achieved under the fully cooperative situation; limited cooperation covering only carbon emission reduction would not benefit the improvement of system performance; and the performance of the e-business-enterprise-led game overtook that of the 3PLs-led one. Deep Learning: A Review Rocio Vargas, Amir Mosavi, Ramon Ruiz Subject: Mathematics & Computer Science, Information Technology & Data Management Keywords: deep learning; machine learning; applied deep learning Deep learning is an emerging area of machine learning (ML) research. It comprises multiple hidden layers of artificial neural networks. The deep learning methodology applies nonlinear transformations and high-level model abstractions in large databases. The recent advancements in deep learning architectures within numerous fields have already provided significant contributions in artificial intelligence. This article presents a state-of-the-art survey on the contributions and the novel applications of deep learning. The following review chronologically presents how and in what major applications deep learning algorithms have been utilized. Furthermore, the superiority and benefits of the deep learning methodology, with its hierarchy of layers and nonlinear operations, are presented and compared with the more conventional algorithms in common applications. The state-of-the-art survey further provides a general overview of the novel concept and the ever-increasing advantages and popularity of deep learning.
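The stacked nonlinear transformations that the review describes can be illustrated minimally: each hidden layer applies an affine map followed by a nonlinearity (ReLU here). The weights below are illustrative, untrained values, not a trained network.

```python
def relu(v):
    # Elementwise rectified linear unit.
    return [max(0.0, x) for x in v]

def dense(v, weights, bias):
    # Affine map: weights is a list of rows, one per output unit.
    return [sum(w_i * x_i for w_i, x_i in zip(row, v)) + b
            for row, b in zip(weights, bias)]

def forward(x, layers):
    """Pass input x through a stack of (weights, bias) layers, applying
    ReLU between layers (no nonlinearity after the final layer)."""
    for i, (w, b) in enumerate(layers):
        x = dense(x, w, b)
        if i < len(layers) - 1:
            x = relu(x)
    return x

layers = [
    ([[1.0, -1.0], [0.5, 0.5]], [0.0, 0.0]),  # hidden layer: 2 -> 2
    ([[1.0, 1.0]], [0.1]),                    # output layer: 2 -> 1
]
print(forward([2.0, 1.0], layers))
```

Stacking more such layers is what gives deep networks their capacity for the high-level abstractions the review discusses; real systems learn the weights by gradient descent.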
Federated Learning for Education Data Analytics Christian Fachola, Agustin Tornaria, Paola Bermolen, Germán Capdehourat, Lorena Etcheverry, Maria Ines Fariello Subject: Mathematics & Computer Science, Information Technology & Data Management Keywords: Federated Learning; Learning Analytics Federated learning techniques aim to train and build machine learning models based on distributed datasets across multiple devices, avoiding data leakage. The main idea is to perform training on remote devices or isolated data centers without transferring data to centralized repositories, thus mitigating privacy risks. Data analytics in education, in particular learning analytics, is a promising scenario to apply this approach to address the legal and ethical issues related to processing sensitive data. Indeed, given the nature of the data to be studied (personal data, educational outcomes, data concerning minors), it is essential to ensure that the conduct of these studies and the publication of the results provide the necessary guarantees to protect the privacy of the individuals involved and the protection of their data. In addition, the application of quantitative techniques based on the exploitation of data on the use of educational platforms, student performance, use of devices, etc., can account for educational problems such as the determination of user profiles, personalized learning trajectories, or early dropout indicators and alerts, among others. This paper presents the application of federated learning techniques to two learning analytics problems: dropout prediction and unsupervised student classification. The experiments allow us to conclude that the proposed solutions achieve comparable results from the performance point of view with the centralized versions, avoiding centralizing the data for training the models. 
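The federated training loop described above can be sketched minimally: each client trains on its own data and shares only model parameters, which the server averages weighted by sample counts (FedAvg-style aggregation). This is a toy with a one-parameter model whose local optimum is the local mean; the data and names are illustrative.

```python
def local_train(data, rounds=100, lr=0.1):
    """Fit theta to the local data by gradient descent on mean squared
    error; only (theta, sample_count) leaves the client, never the data."""
    theta = 0.0
    for _ in range(rounds):
        grad = sum(2 * (theta - x) for x in data) / len(data)
        theta -= lr * grad
    return theta, len(data)

def federated_average(client_datasets):
    """Server-side step: average client models weighted by the number of
    local samples, as in FedAvg."""
    results = [local_train(d) for d in client_datasets]
    total = sum(n for _, n in results)
    return sum(theta * n for theta, n in results) / total

clients = [[1.0, 2.0, 3.0], [10.0], [4.0, 6.0]]
print(federated_average(clients))
```

For this model the weighted average of local optima equals the centralized solution, which is the comparable-performance property the paper reports for its dropout-prediction and clustering experiments.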
MRI-GAN: MRI and Tissues Transfer Using Generative Adversarial Networks Afifa Khaled Subject: Mathematics & Computer Science, Artificial Intelligence & Robotics Keywords: Deep learning; Machine learning We study brain segmentation by dividing the brain into multiple tissues. Given the possibility of brain segmentation by deep learning, machine learning can be efficiently exploited to expedite the segmentation process in clinical practice. To accomplish the segmentation process, an MRI and tissue transfer model using generative adversarial networks is proposed. Given its better results, we propose the transfer model using a GAN. For the case of the brain tissues, white matter (WM), gray matter (GM) and cerebrospinal fluid (CSF) are segmented. Empirical results show that this proposed model significantly improved segmentation results compared to the state-of-the-art results. Furthermore, a Dice coefficient (DC) metric is used to evaluate the model performance. Comprehensive Review of Deep Reinforcement Learning Methods and Applications in Economics Amir Mosavi, Pedram Ghamisi, Yaser Faghan, Puhong Duan, Shahab Shamshirband Subject: Mathematics & Computer Science, Artificial Intelligence & Robotics Keywords: economics; deep reinforcement learning; deep learning; machine learning The popularity of deep reinforcement learning (DRL) methods in economics has increased exponentially. DRL, through a wide range of capabilities from reinforcement learning (RL) and deep learning (DL) for handling sophisticated dynamic business environments, offers vast opportunities. DRL is characterized by scalability, with the potential to be applied to high-dimensional problems in conjunction with noisy and nonlinear patterns of economic data. In this work, we first consider a brief review of DL, RL, and deep RL methods in diverse applications in economics, providing an in-depth insight into the state of the art.
Furthermore, the architecture of DRL applied to economic applications is investigated in order to highlight the complexity, robustness, accuracy, performance, computational tasks, risk constraints, and profitability. The survey results indicate that DRL can provide better performance and higher accuracy than the traditional algorithms when facing real economic problems in the presence of risk parameters and ever-increasing uncertainties. Geographic and Temporal Variability of Hepatitis E Virus Circulation in Non-Endemic Territories Mikhail I. Mikhailov, Anastasia A. Karlsen, Ilya A. Potemkin, Olga V. Isaeva, Vera S. Kichatova, Elena Yu. Malinnikova, Fedor A. Asadi Mobarkhan, Eugeniy V. Mullin, Maria A. Lopatukhina, Victor Manuylov, Elena P. Mazunina, Evgeniia N. Bykonia, Denis A. Kleymenov, Liubov I. Popova, Vladimir A. Gushchin, Artem P. Tkachuk, Andrey D. Polyakov, Ahmed M. El-Adly, Sergey A. Solonin, Ilya Gordeychuk, Karen Kyuregyan Subject: Life Sciences, Virology Keywords: Hepatitis E virus; Paslahepevirus balayani; seroprevalence; molecular epidemiology; zoonosis; disease outbreaks The factors influencing hepatitis E virus (HEV) circulation remain largely unexplored. We investigated HEV seroprevalence in humans and the prevalence of infection in farm pigs and rabbits in different regions of the Russian Federation, as well as the genetic diversity and population dynamics of HEV. Anti-HEV IgG antibody detection rates in the general population increase significantly with age, from 1.5% in children and adolescents under 20 years old to 4.8% in adults aged between 20 and 59 years old, and to 16.7% in people aged 60 years and older. HEV seroprevalence varies between regions, with the highest rate observed in Belgorod Region (16.4% compared with the national average of 4.6%), which also has the country's highest pig population.
When compared with the archival data, both increases and declines in HEV seroprevalence have been observed within the last 10 years, depending on the study region. Virus shedding has been detected in 19 out of the 21 pig farms surveyed. On one farm, circulation of the same viral strain for five years was documented. All human and animal strains belonged to the HEV-3 genotype, with its clade 2 sequences being predominant in pigs. Sequences from patients, pigs, and sewage from pig farms clustered together, suggesting a zoonotic infection in humans and possible environmental contamination. The HEV-3 population size predicted using SkyGrid reconstruction demonstrated exponential growth in the 1970s–1990s, with a subsequent decline followed by a short rise around the year 2010, the pattern being similar to the dynamics of the pig population in the country. The HEV-3 reproduction number (Re) predicted using Birth-Death Skyline analysis has fluctuated around 1 over the past 20 years in Russia, but is 10 times higher in Belgorod Region. In conclusion, HEV-3 circulation varies both geographically and temporally, even within a single country. The possible factors contributing to this variability are largely related to the circulation of the virus among farm pigs.

Open Data Policy, E-Commerce Connectivity and Portfolio Size Predict a Global Brand's Online Popularity
Martijn Hoogeveen
Subject: Social Sciences, Marketing
Keywords: Brand rank; content marketing; predictive model; open data policy; e-commerce
Background: Content marketing is increasingly important for online branding. Brand popularity can be more easily determined online than sales-based measures but is not yet well explained from a content marketing perspective. Promising predictors are open data syndication policies, connectivity to e-commerce platforms, product reviews, data health, and the depth and width of a brand's product portfolio.
A predictive content marketing model can help brand owners to understand their e-commerce potential.
Methods: We used brand popularity (Brand Popularity Rank) and catalog data in combination with product reviews from an independent content aggregator. For all datasets, we selected the overlapping dataset for brand popularity and brand reviews based on a period of 90 days from June 10, 2022, until September 24, 2022 (n = 333 brands). Backward stepwise multiple linear regression was used to develop a predictive content marketing model of the Brand Popularity Rank.
Results: Through stepwise backward multiple linear regression, five highly significant (p < 0.01) predictive factors for brand rank were selected in our content marketing model: the brand's data syndication policy, the number of connected e-commerce platforms, a brand's number of products, its number of products per category, and the number of product categories in which it is active. Our model explains 78% of the variance of Brand Popularity Rank and has a good and highly significant fit: F (5, 327) = 233.5, p < 0.00001.
Conclusions: We conclude that a content marketing model can adequately predict a Brand Popularity Rank based on online popularity. In this model an open content syndication policy, more connected e-commerce platforms, and catalog size, i.e., presence in more categories and more products per category, are each related to a better (lower) Brand Popularity Rank score.

Short Silk Fiber Reinforced PETG Biocomposite for Biomedical Applications
Vijayasankar K. N., Sumanta Mukherjee, Falguni Pati
Subject: Materials Science, Biomaterials
Keywords: Biocomposites (A); Natural fibres (A); Thermomechanical properties (B); Annealing (E); Biocompatibility
Several biomedical products, like scaffolds, implants, prostheses, and orthoses, require materials having superior physicochemical and biological properties.
Polyethylene terephthalate glycol (PETG) is being increasingly used for various biomedical applications. There are a few studies on PETG-based composites; however, natural-fiber reinforcement, such as short silk fiber reinforced PETG composites, has not been attempted. Being a cost-effective, widely available material, the PETG-silk combination can be a potential biocomposite for several biomedical applications. Here, we report a novel short silk fiber reinforced PETG composite prepared by a wet-mixing route, ensuring homogenous dispersion of the filler. Different ratios (2-10%) of short silk fibers were used to prepare composites with varied compositions. The mechanical, physicochemical, and biological properties of the prepared composites were analyzed. Thermogravimetric analysis showed that such composites are thermally stable up to 390 °C and can be used for thermal extrusion-based manufacturing. The tensile modulus of the samples increased with fiber content; however, the failure strain reduced with fiber content. Furthermore, upon annealing, the tensile modulus increased but the failure strain of the composites decreased. XRD analysis revealed that heat treatment altered the crystalline nature of the composites. Finally, we evaluated the cytocompatibility of the prepared composites to assess their suitability for various biomedical applications.

Effect of Storage Conditions on the Quality of Arbequina Extra Virgin Olive Oil and the Impact on the Composition of Flavor-related Compounds (Phenols and Volatiles)
Leeanny Caipo, Ana Sandoval, Betsabet Sepúlveda, Edwar Fuentes, Rodrigo Valenzuela, Adam Metherel, Nalda Romero
Subject: Chemistry, Analytical Chemistry
Keywords: olive oil; quality; storage conditions; phenols; volatile compounds; E-2-Nonenal
Abstract: Commercialization of extra virgin olive oil (EVOO) requires a best before date recommended at up to 24 months after bottling, stored under specific conditions.
Thus, it is expected that the product retains its chemical properties and preserves its 'extra virgin' category. However, inadequate storage conditions could alter the properties of EVOO. In this study, Arbequina EVOO was exposed to five storage conditions for up to one year to study the effects on the quality of the oil and the compounds responsible for flavor. Every 15 or 30 days, samples from each storage condition were analyzed to determine physicochemical parameters, the profiles of phenols and volatile compounds, α-tocopherol, and antioxidant capacity. Principal component analysis was utilized to better elucidate the relationships between composition of EVOOs and the storage conditions. EVOOs stored at -23 and 23 °C in darkness and 23 °C with light differed from the oils stored at 30 and 40 °C in darkness. The former were associated with a higher quantity of non-oxidized phenolic compounds and the latter with higher elenolic acid, oxidized oleuropein, and ligstroside derivatives, which also increased with storage time. E-2-Nonenal (detected at trace levels in fresh oil) was selected as a marker of the degradation of Arbequina EVOO quality over time, with significant linear regressions identified for the storage conditions at 30 and 40 °C. Therefore, early oxidation in EVOO could be monitored by measuring E-2-Nonenal levels.

Molecular Characterization and Genetic Diversity of Clade E-Human Head Lice from Guinea
Alissa Hammoud, Meriem Louni, Mamadou Cellou Baldé, Abdoul habib Beavogui, Philippe Gautret, Raoult Didier, Florence Fenollar, Dorothée Missé, Oleg Mediannikov
Subject: Biology, Entomology
Keywords: Head lice, haplogroup E, PHUM540560 gene, Acinetobacter haemolyticus, Acinetobacter spp., Guinea.
Pediculus humanus capitis, the head louse, is an obligate blood-sucking ectoparasite that occurs in six divergent mitochondrial haplogroups (A, D, B, F, C and E), each exhibiting a particular geographic distribution.
A few years ago, several studies reported the presence of different pathogenic agents in head lice specimens from different clades collected worldwide. These findings suggest that the head louse could be a vector for dangerous diseases and therefore a serious public health problem. Herein, we aimed to study the mitochondrial genetic diversity and the PHUM540560 gene polymorphism profile of head lice collected in Guinea, as well as to screen for the pathogens present in these lice. In 2018, a total of 155 head lice were collected from 49 individuals at the Medical Centers of the rural (Maférinyah village) and urban (Kindia city) areas in Guinea. All head lice were subjected to genetic analysis and screened for the presence of several pathogens using molecular tools. The results showed that all head lice belonged to haplogroup C or E according to the duplex qPCR, which detects both clades. Standard PCR and sequencing revealed that all specimens belonged to haplogroup E, comprising 8 haplotypes, of which 6 were identified for the first time in this study. The study of the PHUM540560 gene polymorphisms in our Guinean head lice revealed that 7/40 (17.5%) of our tested samples exhibited three different polymorphism profiles compared to the clade A-head lice PHUM540560 gene profile, while the remaining 33/40 (82.5%) specimens showed the same PHUM540560 gene polymorphism profile as the previously reported clade A-body lice. Molecular investigation of the targeted pathogens revealed only the presence of Acinetobacter species, in 9% of our samples, using real-time PCR. Sequencing results highlighted the presence of several Acinetobacter species, including Acinetobacter baumannii (14.3%), Acinetobacter nosocomialis (14.3%), Acinetobacter variabilis (14.3%), Acinetobacter haemolyticus (7.2%), and Acinetobacter towneri (7.2%). Furthermore, a candidate new species of Acinetobacter sp. (7.2%) was detected. Positive specimens were collected from 24.5% of individuals in Maférinyah.
We also investigated the carbapenem-resistance profile of A. baumannii; none of our specimens were positive for the resistance genes blaOXA-21, blaOXA-24, and blaOXA-58. To the best of our knowledge, our study is the first to report the existence of the Guinean haplogroup E and its PHUM540560 gene polymorphism profile, as well as the presence of Acinetobacter species in head lice collected from Guinea.
Ground penetrating radar (GPR) is a shallow, high-resolution geophysical method that uses high-frequency, pulsed, electromagnetic waves to image the subsurface. A GPR unit transmits electromagnetic energy into the ground which is reflected, refracted, or scattered back to the surface depending on the features it encounters (such as changes in geologic media or buried objects). Typically GPR is limited to depths of approximately 10 meters, but in highly resistive subsurface materials, such as salt or ice, depths of 100s of meters may be possible (Everett, 2013). High-frequency antennas (200-400 MHz range) can achieve resolutions of a few centimeters at shallow depths, while low-frequency antennas (50 MHz or less) may have a resolution of approximately one meter at greater depths (Everett, 2013). As with most geophysical techniques, the results are non-unique and should be compared with direct physical evidence, such as trench or boring data (Everett, 2013). Integration of GPR data with other surface geophysical methods, such as seismic, resistivity, or electromagnetic methods, reduces uncertainty in site characterization. GPR is commonly used for environmental, engineering, archeological, and other high-resolution shallow investigations.
It can be used to map subsurface features such as depth to bedrock, depth to the water table, depth and thickness of soil and sediment strata (including under freshwater bodies), buried stream channels, and the location of cavities and fractures in bedrock. Other applications include locating objects such as pipes, drums, tanks, cables, buried waste, or buried utilities; mapping landfill and trench boundaries; and mapping contaminated groundwater (Benson et al., 1984; U.S. EPA, 1993).

Figure 1. An electromagnetic pulse is emitted from a transmitter antenna. When the wave encounters a layer or object with different electrical properties, part of the energy is reflected back toward the surface where a receiver antenna records the signal (ASTM, 2019).

Theory of Operation
GPR uses a transmitter antenna to send high-frequency, pulsed, electromagnetic waves (typically from 10 MHz to 1,000 MHz; commercially available units can go up to 7,000 MHz [ASTM, 2019]) into the subsurface to acquire information. The wave spreads out and travels downward until it hits a buried object or boundary (e.g., a different geologic unit) with different electromagnetic properties (Figure 1). Part of the wave energy is reflected or scattered back to the surface, while part of the energy continues to travel downward. The wave is reflected back to the surface to a receiver antenna that records the amplitude of the reflected energy and the arrival time of the wave on a digital storage device (Benson et al., 1984; U.S. EPA, 1993). Electromagnetic waves travel at a velocity that is determined primarily by the electrical permittivity of the material. The velocity of the electromagnetic wave is different between materials with different electrical permittivity, and a signal passed through two materials with different permittivities over the same distance will arrive at different times.
The time that it takes for the wave to travel from the transmitter antenna to the receiving antenna is called the transit time (measured in nanoseconds, where 1 ns = 10^-9 s) and is proportional to the depth of the buried object or target layer. Since the velocity of an electromagnetic wave in air is 0.3 m/ns, the travel time for an electromagnetic wave in air is approximately 3.3 ns/m traveled. The electromagnetic wave velocity in a non-magnetic medium is proportional to the inverse square root of the permittivity of the material (Everett, 2013):
$$ V=C / \sqrt{\epsilon_r} $$
where V = velocity of the wave in the material (m/ns), C = the speed of light in a vacuum (3 x 10^8 m/s), and \( {\epsilon_r} \) = relative dielectric permittivity.
Since the permittivity of earth materials is always greater than the permittivity of air (Table 1), the travel time of a wave in a material other than air is always greater than 3.3 ns/m. The depth of the object or target layer can be calculated from this velocity by the equation (Benson et al., 1984):
$$ D=\frac{C T}{2 \sqrt{\epsilon_r}}=\frac{V_m T}{2} $$
where Vm = velocity of the material (m/ns), \( {\epsilon_r} \) = relative electrical permittivity (dielectric constant), and T = two-way travel time in nanoseconds.

Table 1. Electromagnetic Properties of Various Earth Materials

Material       Relative Permittivity   Wave Velocity (m/ns)   Conductivity (mS/m)
Air            1                       0.3                    0
Fresh water    80                      0.033                  0.5
Sea water      80                      0.01                   3,000
Sand (dry)     3-5                     0.15                   0.01
Sand (wet)     20-30                   0.06                   0.1-1
Silts          5-30                    0.07                   1-100
Clays          5-40                    0.06                   2-1,000
Ice            3-4                     0.16                   0.01
Granite        4-6                     0.13                   0.01-1
Limestone      4-8                     0.12                   0.5-2

Adapted from Table 4-3 of USACE, 1995.

Figure 2. An example of a GPR system with the antenna separate from the electronics and computer controls. The system can be towed by hand or mounted on a cart (Lucius et al., 2006).
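The two relations above can be sketched numerically. This is a minimal illustration with helper names of my own (not from the guide), using the Table 1 values for dry sand:

```python
# Sketch of the GPR velocity/depth relations: V = C / sqrt(eps_r) and
# D = V * T / 2, with C expressed in m/ns so times stay in nanoseconds.

C_M_PER_NS = 0.3  # speed of light in a vacuum, m/ns

def wave_velocity(eps_r):
    """Velocity of an EM wave in a non-magnetic medium (m/ns)."""
    return C_M_PER_NS / eps_r ** 0.5

def reflector_depth(two_way_time_ns, eps_r):
    """Depth of a reflector (m) from its two-way travel time (ns)."""
    return wave_velocity(eps_r) * two_way_time_ns / 2.0

# Dry sand (eps_r ~ 4, Table 1) gives V = 0.15 m/ns, so a reflection
# arriving 40 ns after the transmitted pulse sits about 3 m deep.
v_sand = wave_velocity(4)
depth = reflector_depth(40, 4)
```

The division by 2 accounts for the round trip: the recorded transit time covers the path down to the reflector and back up to the receiver antenna.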
GPR equipment for measuring subsurface conditions normally consists of a radar control unit that also records data, one or two antennas for transmitting and receiving, a power source, cables, and data storage and display devices (Figure 2). One to two people can manage the equipment and survey (Lucius et al., 2006). The transmitter antenna converts electrical currents into electromagnetic waves that propagate into the material. The receiver antenna captures the reflected electromagnetic waves and converts them into current. The control unit records the reflected wave data. Antennas come in various sizes, with larger sizes having lower frequencies. Lower frequencies are used to detect layers or objects at greater depths (15-20 meters); however, the spatial resolution is lower (0.5-1.5 meters). Mid-sized antennas can reach depths of 3 to 6 meters. The depth of investigation depends on the electrical conductivity and dielectric permittivity of the subsurface. High electrical conductivity (for example, highly conductive clays) or dielectric permittivity attenuates electromagnetic waves. If it is assumed that the desired spatial resolution is approximately 25% of the target depth, Table 2 can be used as a guide for selecting a frequency to use in the survey (Annan, 2001, as cited by Robinson et al., 2013). At frequencies close to 100 MHz, a resolution of 1 meter can be achieved.

Table 2. Frequency Values Guideline

Depth (m)   Center Frequency (MHz)*
0.5         1,000

*If the desired spatial resolution is 25% of the target depth.
Adapted from Annan, 2001, as cited by Table 3 of Robinson et al., 2013.

GPR systems are digitally controlled, and data are usually digitally recorded for post-survey processing and display. The digital control and display generally consist of a microprocessor, memory, and a mass storage medium to store the field measurements.
A small micro-computer and standard operating system often are used to control the measurement process, store the data, and serve as a user interface.

Figure 3. A generalized schematic showing the common offset configuration for reflective profiling (top) and idealized wave arrivals (bottom). The separation distance (s) between the transmitter and receiver is kept constant as the GPR unit is moved from station location to station location collecting measurements (Baker et al., 2007).

Figure 4. A generalized schematic showing the common midpoint configuration for the reflective profiling mode (top) and idealized wave arrivals (bottom). The transmitter (T) and receiver (R) are moved symmetrically away from a fixed midpoint to collect sequential measurements at the station location (Baker et al., 2007).

GPR data can be collected by deploying the equipment at the ground surface, in a borehole, on a boat or platform in low-conductivity water, or from a plane or drone. There are several configurations of transmitter and receiver antennas that may be used for surface or borehole data collection. The most common configuration is referred to as the Reflection Profiling Method and includes common offset and common midpoint configurations. In the common offset configuration (Figure 3), radar waves are transmitted, received, and recorded each time the transmitting and receiving antenna pair moves a fixed distance. In the common midpoint configuration (Figure 4), the transmitter and receiver antenna must be capable of being separated; data are collected at increasing distances around a midpoint. In general, the common offset configuration of the transmitter and receiver antenna is used when collecting GPR data by boat/platform or by plane or drone.

Figure 5. Example configurations of receivers and antennas in borehole GPR data collection (Johnson and Joesten, 2005).

Borehole applications have several possible configurations of transmitting and receiving antennas.
One configuration includes a transmitter and receiver separated by a set distance that is lowered into a single borehole (Figure 5a). In cross-borehole configurations, the transmitting and receiving antennas can be configured for either Zero-Offset Profiling (ZOP) or Multiple-Offset Gathers. In ZOP (Figure 5b), both the receiver and transmitter antennas are lowered to equal, predetermined depths before a measurement is made, and the process is repeated over the depth of interest. In Multiple-Offset Gathers (Figure 5c), the transmitter is held at a predetermined depth in one borehole while the receiver(s) is lowered in regular steps down the other borehole(s). After the receiver(s) collects data over the depth of interest, the transmitter is lowered to the next interval, and the process is repeated until the transmitter reaches the depth of interest. The readings are then manipulated to provide a detailed 2-D depiction of the subsurface between the boreholes (Annan, 2005; Kayen, 2000).

Surface Mode
For surface applications of GPR, the transmitting and receiving antennas are separated by a set distance and can be mounted either on a handheld device (Figure 6), on a cart (Figure 7a) or sled towed by hand (Figure 7b), or on a vehicle (Figure 7c), and moved along the ground in a sampling grid or along a transect.

Figure 6. GPR data are collected with handheld equipment. The lead individual carries the electronic equipment while the second person carries the receiving antennas (USGS).

Figure 7. GPR equipment with: (a) antenna and electronics mounted between the cart's wheels with monitor and computer controls mounted on the handle (Lucius et al., 2006); (b) antenna mounted on a sled and towed by hand (USGS); and (c) towed behind a truck along a transect (USGS).

Borehole Mode
As discussed above and depicted in Figure 5, borehole GPR equipment can be deployed either in a single borehole or cross-hole with various configurations of transmitter and receiver antenna.
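The ZOP geometry lends itself to a very simple reduction: with the transmitter and receiver at the same depth in boreholes a known distance apart, a straight-ray velocity at each depth step is just separation divided by travel time. A hypothetical sketch (the function name and the example travel times are illustrative, not from the source):

```python
# Zero-offset profiling (ZOP) reduction: one straight-ray velocity estimate
# per depth step, assuming the ray travels directly between the boreholes.

def zop_velocities(separation_m, travel_times_ns):
    """Straight-ray velocity (m/ns) for each measured travel time (ns)."""
    return [separation_m / t for t in travel_times_ns]

# Boreholes 5 m apart; three depth steps with one-way travel times in ns.
# 33.3 ns -> ~0.15 m/ns (dry sand), 83.3 ns -> ~0.06 m/ns (wet sand or clay).
velocities = zop_velocities(5.0, [33.3, 50.0, 83.3])
```

Comparing the resulting velocity profile against Table 1 gives a first-pass interpretation of the material between the boreholes at each depth.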
Figure 8 shows the cross-hole mode of deployment where a transmitter is deployed in one well, and the receiver is deployed in a second well.

Figure 8. Cross-hole collection of GPR data; a receiver is deployed down one well and a transmitter is deployed down an adjacent well (USGS).

Aerial and Watercraft Mode
GPR instruments can also be mounted on aircraft or drones (Figure 9) or deployed from watercraft (Figure 10).

Figure 9. A helicopter (a) and drone (b) equipped with GPR instruments (USGS).

Figure 10. Connecting power and data cables on a GPR system to set up the equipment to collect data from a canoe and kayak (USGS).

Data Display and Interpretation

Figure 11. A generalized schematic of (a) a one-dimensional trace collected at a single station location, and (b) a time-distance wiggle trace from multiple station locations along a survey transect (Daniels, 2000).

The objective of GPR data presentation is to display the processed data in an image that approximates the subsurface, including anomalies in their proper spatial positions. The most common display of GPR data, referred to as a trace, shows amplitude versus the two-way travel time (ns). A single GPR trace consists of the transmitted pulse followed by pulses that are reflected from objects or layers (Figure 11a). Several traces from the same location are typically stacked and averaged to provide better resolution of weaker reflections. When traces from along the survey transect are placed side by side, they create a GPR cross section or time-distance record that depicts a pseudo-image of the subsurface (Figure 11b). If the permittivities of the subsurface media are known, the two-way time of travel can be converted to depth using the equation in Theory of Operation (Daniels, 2000). Figure 12.
Examples of 2-D GPR data displays include (a) wiggle traces composed of individual one-dimensional traces; (b) gray-scale scans, in which values have been assigned to the range of amplitudes; and (c) color scans, in which a color scale is assigned to the range of amplitude. The change in the scan highlighted by the dashed blue line could be indicative of subsurface structures such as a buried trench or a buried stream channel. The vertical axis is the two-way time of travel in ns, and the horizontal axis is the surface position in meters (Daniels, 2000).

Displays of surface GPR data include: (1) one-dimensional (1-D) trace (single trace); (2) two-dimensional (2-D) cross section; and (3) 3-D display. A 2-D cross section, or wiggle trace, is composed of multiple adjacent 1-D traces (Figure 12a). The two-way travel time is plotted on the vertical axis, and the surface position is on the horizontal axis. When many traces are measured along a survey transect, the wiggle trace may be unclear. In these instances, the data are converted to a 2-D scan display by assigning a color or gray-scale to the amplitude (Figures 12b and 12c). Borehole data can also be displayed as a 2-D cross section (Daniels, 2000). Data are usually recorded along profile lines in a continuous recording system, or at discrete points. Multiple 2-D line scans can then be combined to build a 3-D display of the subsurface (Figures 13 and 14). The 3-D blocks can be viewed from any angle as a solid block or as block slices (Daniels, 2000). The accurate location of each trace is critical to producing accurate 3-D displays.

Figure 13. An example of building a 3-D display from a series of 2-D cross sections (Daniels, 2000).

Figure 14. A depiction of (a) block view display of 3-D GPR data, and (b) dissecting the 3-D block into slices (Daniels, 2000).
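Turning individual traces into a gray-scale scan is conceptually simple: stack the 1-D traces side by side into a 2-D array and map the amplitude range onto gray levels. The sketch below assumes this minimal approach; the function name and the synthetic traces are illustrative, not from the source:

```python
import numpy as np

def traces_to_grayscale(traces):
    """Stack equal-length 1-D amplitude traces (one per station) into a 2-D
    section and linearly rescale amplitudes to 0-255 gray levels.
    Rows correspond to two-way travel time samples, columns to surface position."""
    section = np.column_stack(traces)
    lo, hi = section.min(), section.max()
    return np.round(255.0 * (section - lo) / (hi - lo)).astype(np.uint8)

# Two synthetic 4-sample traces: a strong reflector and a weaker one.
t1 = np.array([0.0, 1.0, -1.0, 0.0])
t2 = np.array([0.0, 0.5, -0.5, 0.0])
img = traces_to_grayscale([t1, t2])  # shape (4, 2), values in 0..255
```

A real processing chain would first stack repeated traces per station and apply gain corrections, but the display step reduces to this kind of amplitude-to-gray mapping.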
Data display is an integral part of data interpretation. In general, targets of interest are easier to identify and isolate on 3-D data sets than on conventional 2-D profile lines. Simplifying the image by eliminating noise is the most important factor for interpretation. Image simplification may be achieved by:
- Carefully assigning the amplitude-color ranges.
- Displaying only one polarity of the GPR signal.
- Decreasing the size of the data set that is displayed as the complexity of the target increases.
- Carefully selecting the viewing angle.
Further image simplification in cases involving very complex (or multiple) targets may also be achieved by displaying only the peak values (maximum and minimum values) for each trace.

The performance of the GPR method depends upon the site-specific surface and subsurface conditions. Performance specifications include the parameters listed in Table 3.

Table 3. Performance Specifications for GPR

Parameter                 Units                           Typical Range
Frequency                 Megahertz (MHz)                 High-frequency antennas: 500-1,000 MHz; low-frequency antennas: 10-125 MHz
Electrical Conductivity   milliSiemens per meter (mS/m)   0-1,000*
Relative Permittivity     Dimensionless                   0-80*
Dynamic Range             Decibels (dB)                   90-130
Depth of Investigation    Meters                          1-10s
Resolution                Meters                          Low-frequency antennas: ~1 meter; high-frequency antennas: ~0.1 meters

*See Table 1 for a range of conductivity values and relative permittivity values for various materials.
(Benson et al., 1984, and USACE, 1995)

Frequency
The frequency of the antenna used affects the resolution of the GPR profile and the attenuation of the radar signal. Resolutions of just a few centimeters can be obtained by using a higher frequency antenna (500-1,000 MHz). However, higher frequencies provide less penetration (depth) with higher resolution. Lower frequencies (10-125 MHz) provide greater penetration depth with less resolution; objects or features smaller than one meter are unable to be defined.
Electrical Conductivity
The electrical conductivity of the subsurface materials is a factor in the attenuation of the radar signal emitted by the GPR antenna. Materials with higher conductivities will result in greater attenuation of the signal.

Relative Permittivity
Relative permittivity, or a material's dielectric constant, is based on how easily a material becomes polarized by the imposition of an electric field. It is a dimensionless number. Values range from one for air to 80 for both fresh water and sea water. Most GPR reflections are due to changes in the relative permittivity of materials (subsurface layers or buried objects). The greater the change in permittivity, the more signal is reflected. In addition to having a sufficient electromagnetic property contrast, the boundary between the two materials needs to be sharp. Areas with subsurface contamination often have very different permittivities than uncontaminated areas. GPR has been used to map highly conductive contaminated groundwater plumes (Porsani et al., 2004; Pomposiello et al., 2004). Also, studies have shown that weathered fuel releases create a "halo" of conductive soil and groundwater around them that is detectable by GPR (Sauck et al., 1998; Atekwana et al., 2002; Bradford, 2003). Non-aqueous phase, non-polar organic contaminants, such as fuels and chlorinated solvents, generally have very low permittivities. In theory, these should provide a good contrast, and studies have shown that GPR can track their movement in the subsurface during a controlled release (Sneddon et al., 2000; Brewster et al., 1995); however, in practice, differentiating relatively thin layers of free product from other reflectors, where the release area is unknown, has not been successful. Studies suggest that GPR can aid remediation monitoring by tracking changes in subsurface conditions (Lane et al., 2004; Paterson, 1997; Bradford, 2004).
Dynamic Range
The dynamic range or performance figure of a specific GPR system represents the total attenuation loss during the two-way transit of the radar signal that still allows reception of the reflected signal. Attenuation losses above the dynamic range will cause the signal to become undetectable. Attenuation is defined by the equation:
$$ \alpha=1.69 \sigma / \epsilon^{\frac{1}{2}} $$
where \( \alpha \) = attenuation in decibels/m (dB/m), \( \sigma \) = electric conductivity in mS/m, and \( \epsilon \) = dielectric constant (dimensionless).
For a material such as a clay that has an electrical conductivity of 100 mS/m and some water content (\( \epsilon \) = 20), the attenuation would be \( \alpha \) = 1.69(100 mS/m)/20^0.5 ≈ 38 dB/m. If the GPR system has a dynamic range of 100 dB, then the radar signal would be fully attenuated after about 2.6 meters of travel.

Depth of Investigation
The principal factor limiting the depth of GPR investigation is attenuation of electromagnetic waves in subsurface materials. Depth of penetration is commonly less than 10 meters in most soil and rock, although penetration in conductive clays (e.g., smectites) and in materials having conductive pore fluids may be less than one meter (Benson et al., 1984). On the other hand, depth of penetration can be more than 30 meters in water-saturated sands, while depths of up to 100s of meters in highly resistive materials, such as ice, may be possible (Everett, 2013).

Resolution
GPR provides the highest lateral and vertical resolution of any surface geophysical method. Various frequency antennas (10 to 1,000 MHz, though commercially available units can go up to 7,000 MHz [ASTM, 2019]) can be selected to collect appropriate data to meet project needs. Lower frequency provides greater penetration with less resolution. Higher frequencies provide less penetration with higher resolution.
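The clay example above generalizes into a quick range check: divide the system's dynamic range by the attenuation rate to estimate the total two-way path before the signal drops below the noise floor. A minimal sketch with assumed helper names:

```python
# Attenuation relation from the text: alpha = 1.69 * sigma / sqrt(eps),
# with sigma in mS/m and alpha in dB/m.

def attenuation_db_per_m(sigma_ms_per_m, eps_r):
    """Approximate attenuation rate (dB/m) of the radar signal."""
    return 1.69 * sigma_ms_per_m / eps_r ** 0.5

def max_two_way_path_m(dynamic_range_db, sigma_ms_per_m, eps_r):
    """Total two-way travel distance (m) before losses exceed the dynamic range.
    Note: the achievable target depth is roughly half of this round-trip path."""
    return dynamic_range_db / attenuation_db_per_m(sigma_ms_per_m, eps_r)

# Wet clay example from the text: sigma = 100 mS/m, eps = 20.
alpha = attenuation_db_per_m(100, 20)        # ~38 dB/m
path = max_two_way_path_m(100, 100, 20)      # ~2.6 m of total two-way travel
```

Rerunning the same estimate with the dry-sand values from Table 1 (conductivity around 0.01 mS/m) shows why penetration in resistive materials can reach tens of meters while conductive clays stop the signal within a meter or two.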
Resolution of a layer or anomaly a few centimeters thick can be obtained with high-frequency (1 GHz) antennas at shallow depths, while lower frequency antennas (100 MHz) may have a resolution of approximately one-meter thickness at greater depths. Horizontal resolution is determined by the distance between station measurements and/or the sample rate, and the towing speed of the antenna.

Additionally, the following factors should be considered when conducting a GPR survey:

Interferences - GPR is sensitive to "noise" or unwanted reflections or scattering caused by various geologic and anthropogenic factors. Geologic noise can be caused by boulders, animal burrows, tree roots, and other inhomogeneities that cause unwanted reflections or scattering. Anthropogenic noise can be caused by nearby vehicles, buildings, fences, power lines, and trees. Shielded antennas can limit some of these unwanted reflections. Electromagnetic transmissions from cellular telephones, two-way radios, televisions, and radio and microwave transmitters may also cause noise on GPR records (Benson et al., 1984).

Calibration - The manufacturer's recommendations should be followed for the calibration and standardization of GPR equipment. An operational check should be conducted before each project and before starting fieldwork each day. A routine check of equipment should be made on a periodic basis and after any problem encountered with the equipment.

Quality Control - Quality control can be applied to the acquisition, processing, and interpretation phases of the survey. Good quality control requires following standard procedures (e.g., those provided in ASTM Standard Guide D6432-19) and appropriate documentation.

Precision - Precision is defined as the repeatability of measurements. Factors affecting precision include the location of the antennas, tow speed, coupling of the antennas to the ground surface, variations in soil conditions, and the ability and care involved in choosing reflections.
Repeatability of GPR measurements can be 100% if soil conditions remain the same (e.g., soil moisture) (ASTM, 2019).

Accuracy - The accuracy of depth-to-target determinations is dependent on how precisely travel time is related to actual depth units. The velocity (two-way travel time per unit distance) of the specific subsurface layers at the site must be determined. Table 1 lists velocities for natural materials ranging from six to 40 nanoseconds per meter (ns/m). In addition to choosing appropriate travel times, accuracy is also affected by equipment positioning differences and by proper attention to processing, interpretation, and site-specific conditions.

Advantages of using GPR include (Benson et al., 1984; Wightman et al., 2003):
- GPR measurements are non-intrusive and relatively easy to collect.
- GPR provides the highest lateral and vertical resolution of any surface geophysical method.
- Antennas may be pulled by hand or with a vehicle at 0.8 to 8 kph, or more.
- GPR data can often be interpreted in the field without data processing.
- Graphic displays of GPR data often resemble geologic cross sections.
- When GPR data are collected on closely spaced (less than one meter) lines, these data can be used to generate 3-D views that significantly improve interpretation of subsurface conditions.
- GPR can be used on paved or unpaved surfaces and in boreholes with various casings.

Limitations of using GPR in site investigations include (Benson et al., 1984; Lucius et al., 2006):
- The presence of trees or vegetation may affect the ability to collect measurements in a line or grid for cross-section or 3-D views.
- The depth of penetration may be limited by the presence of conductive clays or high-conductivity pore fluid.
- Interpretation of GPR data requires a trained operator.
- GPR can be susceptible to electromagnetic noise.

The cost of GPR systems varies widely depending on the complexity of the systems. Most systems fall in the $15,000 to $50,000 (USD) range.
GPR systems can be rented for about $1,000 (USD) per week plus a $300 (USD) mobilization charge. GPR surveys can be conducted by contractors with costs ranging from $1,000 to $2,000 (USD) per day, depending on data interpretation needs and whether a report is required.

GPR For Earth and Environmental Applications: Case Studies From India
Sonkamble, S. and S. Chandra, Journal of Applied Geophysics, Volume 193, October 2021.
A GPR survey using a Terra Sirch-3000 single-channel control GPR meter with 2D data collection was performed for civil engineering, environmental, archeological, geological features, aquifer contamination, and sea water intrusion studies. The scanned images were verified with excavations, electrical resistivity tomography, and soil and water chemistry, which complemented the GPR anomalies. Results helped to: 1) identify caving in a trenchless MS pipe; 2) discover an ancient temple; 3) decipher a subsurface intrusive body; 4) detect clandestine pipes; 5) demarcate fresh groundwater zones within industrial clusters; and 6) identify sea water intrusion in a coastal area.

One-off Geophysical Detection of Chlorinated DNAPL During Remediation of an Industrial Site: A Case Study
Fiorentine, E., et al., AIMS Geosciences, Vol. 7(1), pp. 1-21, January 2021.
A geophysical survey was performed on an industrial site to find the precise location of chlorinated DNAPL for treatment of the saturated zone. As the excavation neared the saturated zone, geophysical measurements were conducted at the bottom of the pit. Whereas electrical resistivity tomography measurements provided little information, GPR drew the remediation operations toward an area that preliminary point measurements had not identified as a possible source location.

Hydrogeologic Barriers to the Infiltration of Treated Wastewater at the Joint Base McGuire-Dix-Lakehurst Land Application Site, Burlington County, New Jersey
Fiore, A., U.S. Geological Survey Scientific Investigations Report 2016-5065, 83 pp., 2016.
USGS, in cooperation with the U.S. Department of Defense (DoD), investigated the potential hydrogeologic conditions preventing infiltration in two of 12 infiltration basins by testing the geophysical, lithological, and hydraulic characteristics of the aquifer material underlying the site. Ground penetrating radar surveys and additional water levels measured in piezometer wells adjacent to the infiltration basins indicated a lack of connectivity between the ponded basin water and the regional water table, and demonstrated that perched conditions were not present in native formation materials outside the inoperable basins. Therefore, the near-surface low-permeability clay is likely preventing infiltration from the basin surface and is the cause of the ineffectiveness of the two basins for wastewater land application operations.

Hydrostratigraphic Analysis of the MADE Site With Full-Resolution GPR and Direct-push Hydraulic Profiling
Dogan, M., et al., Geophysical Research Letters, Vol. 38, L06405, March 22, 2011.
Full-resolution 3D GPR data were combined with high-resolution hydraulic conductivity (K) data from vertical direct-push (DP) profiles to characterize a portion of the highly heterogeneous MAcro Dispersion Experiment site. Statistical evaluation of DP data indicated non-normal distributions that have much higher similarity within each GPR facies than between facies. The analysis of GPR and DP data provides high-resolution estimates of the 3D geometry of hydrostratigraphic zones, which can then be populated with stochastic K fields.

Surface-Geophysical Investigation of a Formerly Used Defense Site, Machiasport, Maine, February 2003
White, E., et al., U.S. Geological Survey Scientific Investigations Report 2004-5099, 60 pp., 2005.
USGS and Argonne National Laboratory used surface-geophysical methods, including GPR and seismic refraction tomography, to characterize the lithology and structure of the bedrock at the site and to identify highly fractured areas that may provide pathways for groundwater flow and chlorinated solvent transport to offsite domestic water supply wells. Interpretation of the GPR data indicates that depth to the weathered bedrock surface is approximately 0.5 to 3 meters. Reflections from within the bedrock are visible throughout all GPR profiles, and zones of scattered electromagnetic energy may correlate to zones of highly fractured bedrock. Integrated interpretation of the results from GPR and seismic refraction tomography was used to locate boreholes along the surface-geophysical profiles. The U.S. Army Corps of Engineers will use an integrated analysis of information obtained from the surface- and borehole-geophysical surveys and test drilling to develop a conceptual site model of groundwater flow and solute transport.

Application of Cross-borehole Radar to Monitor Field-scale Vegetable Oil Injection Experiments for Biostimulation
Lane, J., et al., Symposium on the Application of Geophysics to Engineering and Environmental Problems Proceedings, pp. 429-448, 2004.
Cross-borehole radar methods were used to monitor a field-scale biostimulation pilot project at the Anoka County Riverfront Park, located downgradient of the Naval Industrial Reserve Ordnance Plant. USGS collected cross-borehole radar data in five site visits over 1.5 years. This paper presents level-run (zero-offset profile) and time-lapse radar tomography data collected in multiple planes.
Comparison of pre- and post-injection data sets provided valuable insights into the spatial and temporal distribution of both emulsified vegetable oil and the extent of groundwater with altered chemistry resulting from injections—information important for understanding microbial degradation of chlorinated hydrocarbons at the site.

References

American Society for Testing and Materials (ASTM), 2019. ASTM D6432-19 Standard Guide for Using the Surface Ground Penetrating Radar Method for Subsurface Investigation. ASTM International, West Conshohocken, PA.
Annan, A.P., 2001. Ground Penetrating Radar Workshop Notes. Sensors and Software, Inc.: Mississauga.
Annan, A.P., 2005. GPR Methods for Hydrogeological Studies. In: Rubin, Y., and S. Hubbard (eds.), Hydrogeophysics. Water Science and Technology Library, vol. 50. Springer, Dordrecht, pp. 185-213.
Atekwana, E., et al., 2002. Geophysical Investigation of Vadose Zone Conductivity Anomalies at a Hydrocarbon Contaminated Site: Implications for the Assessment of Intrinsic Bioremediation. Journal of Environmental & Engineering Geophysics, vol. 7, no. 3, pp. 103-110.
Baker, G., et al., 2007. An Introduction to Ground Penetrating Radar (GPR). Geological Society of America Special Paper 432: Stratigraphic Analyses Using GPR, pp. 1-18. January.
Benson, R., et al., 1984. Geophysical Techniques for Sensing Buried Wastes and Waste Migration. EPA-600/7-84-064. 256 pp. June.
Bradford, J.H., 2004. 3D Multi-Offset, Multi-Polarization Acquisition and Processing of GPR Data: A Controlled DNAPL Spill Experiment. SAGEEP 2004 Proceedings, Symposium on the Application of Geophysics to Engineering and Environmental Problems, Colorado Springs, CO, Environmental and Engineering Geophysical Society, pp. 514-527.
Bradford, J.H., 2003. GPR Offset-Dependent Reflectivity Analysis for Characterization of a High-Conductivity LNAPL Plume. SAGEEP 2003 Symposium on the Application of Geophysics to Environmental and Engineering Problems, San Antonio, TX, Environmental and Engineering Geophysical Society, pp. 238-252.
Brewster, M., et al., 1995. Observed Migration of a Controlled DNAPL Release by Geophysical Methods. Ground Water, vol. 33, no. 6, pp. 977-987.
Daniels, J., 2000. Ground Penetrating Radar Fundamentals. U.S. EPA Region V. 21 pp. November.
Everett, M., 2013. Near-Surface Applied Geophysics. Cambridge University Press, 441 pp. April.
Greenhouse, J., et al., 1998. Reference Notes: Applications of Geophysics in Environmental Investigations. Environmental and Engineering Geophysical Society.
Johnson, C., and P. Joesten, 2005. Analysis of Borehole-Radar Reflection Data from Machiasport, Maine, December 2003. U.S. Geological Survey Scientific Investigations Report 2005-5087, 44 pp.
Kayen, R., et al., 2000. Non-Destructive Measurement of Soil Liquefaction Density Change by Crosshole Radar Tomography, Treasure Island, California. Geo-Denver 2000 Conference. July.
Lane, Jr., J.W., et al., 2004. Application of Cross-Borehole Radar to Monitor Field-scale Vegetable Oil Injection Experiments for Biostimulation. Symposium on the Application of Geophysics to Engineering and Environmental Problems (SAGEEP), 22-26 February 2004, Colorado Springs, Colorado, Proceedings of the Environmental and Engineering Geophysical Society, 20 pp.
Lucius, J., et al., 2006. An Introduction to Using Surface Geophysics to Characterize Sand and Gravel Deposits. U.S. Geological Survey Open-File Report 2006-1257, 51 pp.
Paterson, Norman, 1997. Remote Mapping of Mine Wastes. In: Proceedings of Exploration '97, Fourth Decennial International Conference on Mineral Exploration, ed. A.G. Gubbins, pp. 905-916. Toronto: Prospectors and Developers Association of Canada.
Pomposiello, C., et al., 2004. Resistivity Imaging and Ground Penetrating Radar Survey at Gualeguaychú Landfill, Entre Ríos Province, Argentina: Evidence of a Contamination Plume. IAGA WG 1.2 on Electromagnetic Induction in the Earth, Proceedings of the 17th Workshop, Hyderabad, India.
Porsani, J.L., et al., 2004. The Use of GPR and VES in Delineating a Contamination Plume in a Landfill Site: A Case Study in SE Brazil. Journal of Applied Geophysics, vol. 55, no. 3-4, pp. 199-209.
Robinson, M., et al., 2013. Ground Penetrating Radar. Geomorphological Techniques. British Society for Geomorphology Remote Sensing Workshop, pp. 1-26. March.
Sauck, W.A., et al., 1998. High Conductivities Associated with an LNAPL Plume Imaged by Integrated Geophysical Techniques. Journal of Environmental and Engineering Geophysics, vol. 2, no. 3, pp. 203-212.
Sneddon, K.W., et al., 2000. Determining and Mapping DNAPL Saturation Values from Noninvasive GPR Measurements. In: Proceedings of SAGEEP 2000, 21-25 February 2000, Arlington, VA, M.H. Powers, A-B. Ibrahim, and L. Cramer, eds., EEGS, Wheat Ridge, CO, pp. 293-302.
U.S. Army Corps of Engineers (USACE), 1995. Geophysical Exploration for Engineering and Environmental Investigations. EM 1110-1-1802. 208 pp. August.
U.S. Environmental Protection Agency (U.S. EPA), 1993. Use of Airborne, Surface and Geophysical Techniques at Contaminated Sites: A Reference Guide. EPA/625/R-92/007. 304 pp. September.
U.S. Environmental Protection Agency (U.S. EPA), 1993. Subsurface Characterization and Monitoring Techniques, A Desk Reference Guide, Volume 1: Solids and Ground Water, Appendices A and B. EPA/625/R-93/003a. 498 pp. May.
Wightman, W., et al., 2003. Application of Geophysical Methods to Highway Related Problems. 774 pp. September.
Woods Hole Oceanographic Institute, 2022. Ground Penetrating Radar. Accessed August 12, 2022.

Permittivity describes the ability of a material to store electric energy by separating opposite polarity charges in space. The relative dielectric permittivity is the ratio of the permittivity of a material to that of free space. ↩
High-conductivity materials limit the depth of investigation by GPR.
Sea water has a higher conductivity than fresh water because of its salt content, which disperses the radio energy (Woods Hole Oceanographic Institute, 2022). ↩
The depth of the object or target layer can be calculated from the measured travel time using this velocity (Benson et al., 1984). ↩
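Putting the Accuracy discussion and the footnoted depth relationship together: since the text expresses velocity as two-way travel time per unit distance (ns/m), depth follows by simple division. A minimal sketch (the travel time and velocity values below are hypothetical, not from the text):

```python
def depth_m(two_way_time_ns: float, velocity_ns_per_m: float) -> float:
    """Depth of a reflector given the measured two-way travel time and the
    material 'velocity' expressed, as in the text, as two-way travel time
    per unit distance (Table 1 lists roughly 6 to 40 ns/m)."""
    return two_way_time_ns / velocity_ns_per_m

# Hypothetical example: an 80 ns two-way return in a material at 10 ns/m.
print(depth_m(80.0, 10.0))  # 8.0 m
```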
Radical symbol

In mathematics, the radical symbol, radical sign, root symbol, radix, or surd is a symbol for the square root or higher-order root of a number. The square root of a number x is written as ${\sqrt {x}},$ while the nth root of x is written as ${\sqrt[{n}]{x}}.$ It is also used for other meanings in more advanced mathematics, such as the radical of an ideal. In linguistics, the symbol is used to denote a root word.

Principal square root

Each positive real number has two square roots, one positive and the other negative. The square root symbol refers to the principal square root, which is the positive one. The two square roots of a negative number are both imaginary numbers, and the square root symbol refers to the principal square root, the one with a positive imaginary part. For the definition of the principal square root of other complex numbers, see Square root#Principal square root of a complex number.

Origin

The origin of the root symbol √ is largely speculative. Some sources imply that the symbol was first used by Arab mathematicians. One of those mathematicians was Abū al-Hasan ibn Alī al-Qalasādī (1421–1486). Legend has it that it was taken from the Arabic letter "ج" (ǧīm), which is the first letter in the Arabic word "جذر" (jadhir, meaning "root").[1] However, Leonhard Euler[2] believed it originated from the letter "r", the first letter of the Latin word "radix" (meaning "root"), referring to the same mathematical operation. The symbol was first seen in print without the vinculum (the horizontal "bar" over the numbers inside the radical symbol) in the year 1525 in Die Coss by Christoff Rudolff, a German mathematician.
In 1637 Descartes was the first to unite the German radical sign √ with the vinculum to create the radical symbol in common use today.[3]

Encoding

The Unicode and HTML character codes for the radical symbols are:[4][5]

Square root (√): Unicode U+221A; XML &#8730; or &#x221A;; URL %E2%88%9A; HTML &radic; or &Sqrt;
Cube root (∛): Unicode U+221B; XML &#8731; or &#x221B;; URL %E2%88%9B
Fourth root (∜): Unicode U+221C; XML &#8732; or &#x221C;; URL %E2%88%9C

However, these characters differ in appearance from most mathematical typesetting by omitting the overline connected to the radical symbol, which surrounds the argument of the square root function. The OpenType math table allows adding this overline following the radical symbol. Legacy encodings of the square root character U+221A include:
• 0xC3 in Mac OS Roman and Mac OS Cyrillic
• 0xFB (Alt+251) in Code page 437 and Code page 866 (but not Code page 850) on DOS and the Windows console
• 0xD6 in the Symbol font encoding[6]
• 02-69 (7-bit 0x2265, SJIS 0x81E3, EUC 0xA2E5) in Japanese JIS X 0208[7]
• 01-78 (EUC/UHC 0xA1EE) in Korean Wansung code[8]
• 01-44 (EUC 0xA1CC) in Mainland Chinese GB 2312 or GBK[9]
• Traditional Chinese: 0xA1D4 in Big5[10][11] or 1-2235 (kuten 01-02-21, EUC 0xA2B5 or 0x8EA1A2B5) in CNS 11643[11][12]
The Symbol font displays the character without any vinculum whatsoever; the overline may be a separate character at 0x60.[13] The JIS,[14] Wansung[15] and CNS 11643[11][16] code charts include a short overline attached to the radical symbol, whereas the GB 2312[17] and GB 18030 charts do not.[18] Additionally a "Radical Symbol Bottom" (U+23B7, ⎷) is available in the Miscellaneous Technical block.[19] This was used in contexts where box-drawing characters are used, such as in the technical character set of DEC terminals, to join up with box drawing characters on the line above to create the vinculum.[20] In LaTeX the square root symbol may be generated by the \sqrt macro,[21] and the square root symbol without the overline may be generated by the
\surd macro. References 1. "Language Log: Ab surd". Retrieved 22 June 2012. 2. Leonhard Euler (1755). Institutiones calculi differentialis (in Latin). 3. Cajori, Florian (2012) [1928], A History of Mathematical Notations, vol. I, Dover, p. 208, ISBN 978-0-486-67766-8 4. Unicode Consortium (2022-09-16). "Mathematical Operators" (PDF). The Unicode Standard (15.0 ed.). Retrieved 2023-07-16. 5. Web Hypertext Application Technology Working Group (2023-07-14). "Named Character References". HTML Living Standard. Retrieved 2023-07-16. 6. Apple Computer (2005-04-05) [1995-04-15]. Map (external version) from Mac OS Symbol character set to Unicode 4.0 and later. Unicode Consortium. SYMBOL.TXT. 7. Unicode Consortium (2015-12-02) [1994-03-08]. JIS X 0208 (1990) to Unicode. JIS0208.TXT. 8. Unicode Consortium (2011-10-14) [1995-07-24]. Unified Hangeul(KSC5601-1992) to Unicode table. KSC5601.TXT. 9. IBM (2002). "windows-936-2000". International Components for Unicode. 10. Unicode Consortium (2015-12-02) [1994-02-11]. BIG5 to Unicode table (complete). BIG5.TXT. 11. "[√] 1-2235". Word Information. National Development Council. 12. IBM (2014). "euc-tw-2014". International Components for Unicode. 13. IBM. Code Page 01038 (PDF). Archived from the original (PDF) on 2015-07-08. 14. ISO/IEC JTC 1/SC 2 (1992-07-13). Japanese Graphic Character Set for Information Interchange (PDF). ITSCJ/IPSJ. ISO-IR-168. 15. Korea Bureau of Standards (1988-10-01). Korean Graphic Character Set for Information Interchange (PDF). ITSCJ/IPSJ. ISO-IR-149. 16. ECMA (1994). Chinese Standard Interchange Code (CSIC) - Set 1 (PDF). ITSCJ/IPSJ. ISO-IR-171. 17. China Association for Standardization (1980). Coded Chinese Graphic Character Set for Information Interchange (PDF). ITSCJ/IPSJ. ISO-IR-58. 18. Standardization Administration of China (2005). Information Technology—Chinese coded character set. p. 8. GB 18030-2005. 19. Unicode Consortium (2022-09-16). "Miscellaneous Technical" (PDF). 
The Unicode Standard (15.0 ed.). Retrieved 2023-07-16. 20. Williams, Paul Flo (2002). "DEC Technical Character Set (TCS)". VT100.net. Retrieved 2023-07-16. 21. Braams, Johannes; et al. (2023-06-01). "The LATEX 2ε Sources" (PDF) (2023-06-01 Patch Level 1 ed.). § ltmath.dtx: Math Environments. Retrieved 2023-07-16.
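As an aside, the code points tabulated in the Encoding section above can be verified programmatically; a small Python sketch (not part of the original article):

```python
import unicodedata

# Radical symbols discussed above, keyed by their official Unicode names.
radicals = {
    "\u221A": "SQUARE ROOT",
    "\u221B": "CUBE ROOT",
    "\u221C": "FOURTH ROOT",
    "\u23B7": "RADICAL SYMBOL BOTTOM",
}
for ch, expected_name in radicals.items():
    # unicodedata.name() returns the character's name from the Unicode database.
    assert unicodedata.name(ch) == expected_name
    print(f"U+{ord(ch):04X}  {ch}  {expected_name}")
```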
EmT: Locating empty territories of homology group generators in a dataset

Xin Xu and Jessi Cisewski-Kehe
Department of Statistics and Data Science, Yale University, New Haven, CT 06511, USA
* Corresponding author: Xin Xu

Foundations of Data Science, June 2019, 1(2): 227-247. doi: 10.3934/fods.2019010

Persistent homology is a tool within topological data analysis to detect different dimensional holes in a dataset. The boundaries of the empty territories (i.e., holes) are not well-defined and each has multiple representations. The proposed method, Empty Territory (EmT), provides representations of different dimensional holes with a specified level of complexity of the territory boundary. EmT is designed for the setting where persistent homology uses a Vietoris-Rips complex filtration, and works as a post-analysis to refine the hole representation of the persistent homology algorithm. In particular, EmT uses alpha shapes to obtain a special class of representations that captures the empty territories with a complexity determined by the size of the alpha balls. With a fixed complexity, EmT returns the representation that contains the most points within the special class of representations. This method is limited to finding 1D holes in 2D data and 2D holes in 3D data, and is illustrated on simulation datasets of a homogeneous Poisson point process in 2D and a uniform sampling in 3D. Furthermore, the method is applied to a 2D cell tower location geography dataset and a 3D Sloan Digital Sky Survey (SDSS) galaxy dataset, where it works well in capturing the empty territories.

Keywords: Alpha shapes, astrostatistics, data segmentation, persistent homology, topological data analysis.
Mathematics Subject Classification: Primary: 62H30, 55U99; Secondary: 62-07.
Citation: Xin Xu, Jessi Cisewski-Kehe. EmT: Locating empty territories of homology group generators in a dataset.
Foundations of Data Science, 2019, 1(2): 227-247. doi: 10.3934/fods.2019010
Figure 1. The three loops are equivalent in the sense that they all encircle the same hole. However, it can be desirable to find a representation that captures the empty territory, with no data point inside. The first two loops contain data points inside them, while the third one is empty.
EmT is a method for finding the third type of representation at a specified level of complexity.

Figure 2. For a VR complex, if there are pairwise intersections between $ k $ points, the corresponding $ k $-simplex is added to the simplicial complex. (a) Three circles with radius $ \frac{\delta}{2} $ that intersect pairwise, where $ a, b, c $ are within a distance of $ \delta $ from each other. The $ VR(X, \delta) $ includes the three points $ a, b, c $, the edges $ \{ab, bc, ca\} $, and the triangle $ \{abc\} $. (b) The resulting $ VR(X, \delta) $.

Figure 3. (a) The same point cloud as in Fig. 1 is used to generate the persistence diagram of (b). The farther a point is from the diagonal, the longer it persists in the filtration. There are 40 $ H_0 $ generators and one prominent $ H_1 $ generator, which is consistent with the point cloud.

Figure 4. (a) The green circles occupy the area around the data points. The remaining area is the resulting alpha hull, outlined by the red curves. By straightening the arcs, it becomes an alpha shape, shown in blue lines. (b) A concave hull based on the alpha shape (blue polygon). (c) An inner shell (blue polygon) based on the same alpha shape as (a).

Figure 5. (a), (b) and (c) are three different ways of occupying the area from the interior of the dataset with $ \alpha $ equal to 0.8. (d) An $ R_{\alpha, i} $ with $ \alpha $ equal to 0.6.

Figure 6. The same dataset from previous examples is used here to illustrate EmT. (a) Convex hull of the dataset in green lines. The red circles are the set $ S_X $ and the green crosses are the set $ S_{X'} $. (b) The alpha hull in red lines and the alpha shape in blue lines. Green circles are $ \alpha $ balls and green points are $ C_{\alpha} $. (c) Blue lines are $ CH_{\alpha} $ and red pluses are circle centers that are inside $ CH_{\alpha} $.
(d) The comparison between the EmT output $ \widetilde{IS_{\alpha}} $ as blue diamonds and the original representation point set $ S_X $ returned by persistent homology as red circles.

Figure 7. The same outer loop is used in the four examples displayed in (a), (b), (c), and (d), but with different points inside. The black points are data points, the red circles are the raw representations from the persistent homology algorithm, the blue triangles are representations of the EmT algorithm, and in (c) the cyan curves are the corresponding alpha hull. In (a), the raw representation of persistent homology is not able to detect the inner points, while the EmT representation includes the inner points. In (b), both the raw representation and EmT have the inner points in their representations. In (c), while the raw representation contains the larger loop with the inner points, the EmT representation includes two separate loops, as indicated by the corresponding alpha hulls. In (d), due to the amount of scattered points inside the larger loop, the raw representation is not able to identify it; therefore the EmT representation is not able to find the larger loop either, since EmT takes the raw representation as input.

Figure 8. A dataset from the homogeneous Poisson point process with intensity $ 300 $ in $ [0, 1]\times[0, 1] $. (a) Persistence diagram of the dataset. (b) The eight $ H_1 $ generators with the largest persistences reported by persistent homology in different colors. (c) The representations of the eight $ H_1 $ generators obtained by the EmT approach, where the boundaries of the empty regions are better captured.

Figure 9. (a) A dataset generated from the uniform distribution in $ [0, 1]\times[0, 1]\times[0, 1] $; points inside the red regions are deleted from the dataset. (b) The persistence diagram of the dataset. (c) The raw representations from the persistent homology algorithm (blue and green points).
The representations are displayed in blue and green shading, where the red points are observations inside the shaded volumes. (d) The representations from the EmT approach (blue and green points). The representations are displayed in blue and green shading, and there is no point inside the shading volumes Figure 10. (a) Cell tower locations in Minnesota where each of the black point indicates a cell tower. (b) The loop overlaps with the Red Lake Indian Reservation. (c) The persistence diagram of the dataset. (d), (e) Representations of eight $ H_1 $ generators with the largest persistences from persistent homology and the EmT approach, respectively Figure 11. (a) A Voronoi foam simulation. The eight large red points are the void seeds that generate eight void regions. The green crosses highlight one of the eight void regions. (b) Persistence diagram of the original dataset with 1200 points Figure 12. Persistence diagrams of the centers of k-means clustering for different k's. The k-means are made on the dataset shown in Fig. 11a Figure 13. (a) Cosmic voids representations ($ H_2 $ generators) reported by the persistent homology algorithm. (b) Cosmic voids representations from the EmT approach. As a qualitative comparison, the representations of EmT trace the empty volumes, while the raw representations reported by persistent homology do not define well the empty regions. (c) The raw representation of a particular cosmic void is displayed in the blue shading and the red points are galaxy inside the volume. (d) The EmT representation of a particular cosmic is displayed in the blue shading and there is no galaxy inside. The axis units are $ Mpc/h $, which megaparsec divided by a parameter $ h $ (which accounts for the uncertainty about the expansion rate of the universe). One parsec is approximately 3.26 lightyears Table 1. Summary table indicating which of the eight void regions is detected for different k values; 1 = detected and 0 = not detected. 
The corresponding labeled void regions are shown in Fig. 11a.

  void region:  1  2  3  4  5  6  7  8
  original      1  0  1  1  1  1  1  1
  k=60          1  0  0  1  1  1  1  0
  k=120         1  0  1  1  1  1  1  0
Xin Xu, Jessi Cisewski-Kehe
Research | Open Access | Published: 25 February 2019

Mohamed Iadh Ayari, Zead Mustafa & Mohammed Mahmoud Jaradat

Fixed Point Theory and Applications, volume 2019, Article number: 7 (2019)

The primary objective of this paper is the study of the generalization of some results given by Basha (Numer. Funct. Anal. Optim. 31:569–576, 2010). We present a new theorem on the existence and uniqueness of best proximity points for proximal β-quasi-contractive mappings for non-self-mappings $S:M\rightarrow N$ and $T:N\rightarrow M$. Furthermore, as a consequence, we give a new result on the existence and uniqueness of a common fixed point of two self-mappings.

Introduction

In 1969, Fan [2] proposed a best proximity point result for non-self continuous mappings $T:A\longrightarrow X$, where A is a non-empty compact convex subset of a Hausdorff locally convex topological vector space X. He showed that there exists $a\in A$ such that $d(a,Ta)=d(Ta,A)$. Many extensions of Fan's theorem were established in the literature, such as the work of Reich [3], Sehgal and Singh [4] and Prolla [5]. In 2010, Basha [1] introduced the concept of a best proximity point of a non-self-mapping. Furthermore, he introduced an extension of the Banach contraction principle in the form of a best proximity theorem. Later on, several best proximity point results were derived (see, e.g., [6,7,8,9,10,11,12,13,14,15,16,17,18,19]). Best proximity point theorems for non-self set-valued mappings have been obtained by Jleli and Samet [20], in the context of a proximal orbital completeness condition, which is weaker than the compactness condition. The aim of this article is to generalize the results of Basha [21] by introducing proximal β-quasi-contractive mappings, which involve suitable comparison functions. As a consequence of our theorem, we obtain the result of Basha [21] and an analogous result on proximal quasi-contractions, which were first introduced by Jleli and Samet in [20].
Preliminaries and definitions

Let $(M,N)$ be a pair of non-empty subsets of a metric space $(X,d)$. The following notations will be used throughout this paper:

$d(M,N):=\inf\{d(m,n):m\in M, n\in N\}$;

$d(x,N):=\inf\{d(x,n):n\in N\}$.

Definition 2.1 ([1])

Let $T:M\rightarrow N$ be a non-self-mapping. An element $a_{\ast}\in M$ is said to be a best proximity point of T if $d(a_{\ast},Ta_{\ast})=d(M,N)$. Note that in the case of a self-mapping, a best proximity point is an ordinary fixed point, see [22, 23].

Definition 2.2 ([21])

Given non-self-mappings $S:M\rightarrow N$ and $T:N\rightarrow M$, the pair $(S,T)$ is said to form a proximal cyclic contraction if there exists a non-negative number $k<1$ such that
$$ d(u,Sa)=d(M,N)\quad\mbox{and}\quad d(v,Tb)=d(M,N)\Longrightarrow d(u,v)\leq kd(a,b)+(1-k)d(M,N) $$
for all $u,a\in M$ and $v,b\in N$.

Definition 2.3

A non-self-mapping $S: M\rightarrow N$ is said to be a proximal contraction of the first kind if there exists a non-negative number $\alpha<1$ such that
$$ d(u_{1},Sa_{1})=d(M,N) \quad\mbox{and}\quad d(u_{2},Sa_{2})=d(M,N) \Longrightarrow d(u_{1},u_{2})\le\alpha d(a_{1},a_{2}) $$
for all $u_{1},u_{2},a_{1},a_{2} \in M$.

Definition 2.4

Let $\beta\in(0,+\infty)$. A β-comparison function is a map $\varphi:[0,+\infty)\rightarrow{}[0,+\infty)$ satisfying the following properties:

$(P_{1})$: φ is nondecreasing.

$(P_{2})$: $\lim_{n\rightarrow\infty}\varphi_{\beta}^{n}(t)=0$ for all $t>0$, where $\varphi_{\beta}^{n}$ denotes the nth iterate of $\varphi_{\beta}$ and $\varphi_{\beta}(t)=\varphi(\beta t)$.

$(P_{3})$: There exists $s\in(0,+\infty)$ such that $\sum_{n=1}^{\infty}\varphi_{\beta}^{n}(s)<\infty$.

$(P_{4})$: $(\mathrm{id}-\varphi_{\beta})\circ\varphi_{\beta}(t) \leq\varphi_{\beta}\circ(\mathrm{id}-\varphi_{\beta})(t)$ for all $t \geq 0$, where $\mathrm{id}: [0,\infty) \longrightarrow[0,\infty)$ is the identity function.

Throughout this work, the set of all functions φ satisfying $(P_{1})$, $(P_{2})$ and $(P_{3})$ will be denoted by $\varPhi_{\beta}$.
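As a concrete illustration of the definition of a β-comparison function (this example is ours, not from the paper), the linear map $\varphi(t)=qt$ with $q\beta<1$ satisfies $(P_{1})$–$(P_{3})$, since $\varphi_{\beta}^{n}(t)=(q\beta)^{n}t$. A short numeric sketch, with arbitrary illustrative values of q and β:

```python
# Numeric sketch: phi(t) = q*t with q*beta < 1 is a beta-comparison function.
# The values q = 1/7 and beta = 2 are illustrative choices only.
q, beta = 1 / 7, 2.0

def phi_beta(t):
    # phi_beta(t) = phi(beta * t) = q * beta * t
    return q * (beta * t)

def iterate(f, t, n):
    # n-th iterate f^n(t)
    for _ in range(n):
        t = f(t)
    return t

# (P1): phi (hence phi_beta) is nondecreasing on a sample grid
grid = [0.0, 0.5, 1.0, 2.0, 10.0]
assert all(phi_beta(s) <= phi_beta(t) for s, t in zip(grid, grid[1:]))

# (P2): iterates of phi_beta tend to 0, since phi_beta^n(t) = (q*beta)^n * t
assert iterate(phi_beta, 5.0, 50) < 1e-20

# (P3): the series sum_n phi_beta^n(s) is geometric, hence finite
s = 1.0
partial = sum(iterate(phi_beta, s, n) for n in range(1, 200))
assert abs(partial - (q * beta) / (1 - q * beta)) < 1e-12
```

Lemma 2.1 is also visible in this example: $\varphi_{\beta}(t)=\frac{2}{7}t<t$ for every $t>0$.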
Remark 2.1

Let $\alpha,\beta\in(0,+\infty)$. If $\alpha<\beta$, then $\varPhi_{\beta}\subset\varPhi_{\alpha}$.

We recall the following useful lemma concerning the comparison functions in $\varPhi_{\beta}$.

Lemma 2.1

Let $\beta\in(0,+\infty)$ and $\varphi\in \varPhi_{\beta}$. Then

(1) $\varphi_{\beta}$ is nondecreasing;

(2) $\varphi_{\beta}(t) < t$ for all $t > 0$;

(3) $\sum_{n=1}^{\infty}\varphi_{\beta}^{n}(t) < \infty$ for all $t > 0$.

Definition 2.5 ([20])

A non-self-mapping $T:M\rightarrow N$ is said to be a proximal quasi-contraction if there exists a number $q\in{}[0,1)$ such that
$$ d(u,v)\leq q\max\bigl\{ d(a,b),d(a,u),d(b,v),d(a,v),d(b,u)\bigr\} $$
whenever $a,b,u,v\in M$ satisfy the conditions $d(u,Ta)=d(M,N)$ and $d(v,Tb)=d(M,N)$.

Main results and theorems

Now, we start this section by introducing the following concept.

Definition 3.1

Let $\beta\in(0,+\infty)$. A non-self-mapping $T:M\rightarrow N$ is said to be a proximal β-quasi-contraction if and only if there exist $\varphi\in\varPhi_{\beta}$ and positive numbers $\alpha_{0},\ldots,\alpha_{4}$ such that
$$ d(u,v)\leq\varphi\bigl(\max \bigl\{ \alpha_{0}d(a,b),\alpha_{1}d(a,u),\alpha_{2}d(b,v),\alpha_{3}d(a,v), \alpha_{4}d(b,u) \bigr\} \bigr) \tag{1} $$
for all $a,b,u,v\in M$ satisfying $d(u,Ta)=d(M,N)$ and $d(v,Tb)=d(M,N)$.

Let $(M,N)$ be a pair of non-empty subsets of a metric space $(X,d)$. The following notations will be used throughout this paper:

$M_{0}:=\{u\in M:\text{ there exists }v\in N\text{ with }d(u,v)=d(M,N)\}$;

$N_{0}:=\{v\in N:\text{ there exists }u\in M\text{ with }d(u,v)=d(M,N)\}$.

Our main result is given by the following best proximity point theorem.

Theorem 3.1

Let $(M,N)$ be a pair of non-empty closed subsets of a complete metric space $(X,d)$ such that $M_{0}$ and $N_{0}$ are non-empty.
Let $S:M\longrightarrow N$ and $T:N\longrightarrow M$ be two mappings satisfying the following conditions:

$(C_{1})$: $S(M_{0})\subset N_{0}$ and $T(N_{0})\subset M_{0}$;

$(C_{2})$: there exist $\beta_{1}, \beta_{2}\geq\max\{\alpha_{0},\alpha_{1},\alpha_{2},\alpha_{3}, 2\alpha_{4}\}$ such that S is a proximal $\beta_{1}$-quasi-contraction mapping (say, $\psi\in\varPhi_{\beta_{1}}$) and T is a proximal $\beta_{2}$-quasi-contraction mapping (say, $\phi\in\varPhi_{\beta_{2}}$);

$(C_{3})$: the pair $(S,T)$ forms a proximal cyclic contraction;

$(C_{4})$: moreover, one of the following two assertions holds:

(i) ψ and ϕ are continuous;

(ii) $\beta_{1},\beta_{2}>\max\{\alpha_{2},\alpha_{3}\}$.

Then S has a unique best proximity point $a_{\ast}\in M$ and T has a unique best proximity point $b_{\ast}\in N$. Also these best proximity points satisfy $d(a_{\ast},b_{\ast})=d(M,N)$.

Proof

Since $M_{0}$ is a non-empty set, $M_{0}$ contains at least one element, say $a_{0}\in M_{0}$. Using the first hypothesis of the theorem, there exists $a_{1}\in M_{0}$ such that $d(a_{1},Sa_{0})=d(M,N)$. Again, since $S(M_{0})\subset N_{0}$, there exists $a_{2}\in M_{0}$ such that $d(a_{2},Sa_{1})=d(M,N)$. Continuing this process in a similar fashion, we find $a_{n+1}\in M_{0}$ such that $d(a_{n+1},Sa_{n})=d(M,N)$.
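Purely as a schematic illustration (not part of the proof), this construction of the sequence $\{a_{n}\}$ can be mimicked on discretized sets; the concrete choices of M, N and S below are those of Example 3.1 later in the paper:

```python
# Schematic sketch of the proximal iteration: given a_n, choose
# a_{n+1} in M with d(a_{n+1}, S(a_n)) = d(M, N).
M = [i / 100 for i in range(101)]      # discretization of M = [0, 1]
N = [2 + i / 100 for i in range(101)]  # discretization of N = [2, 3]
d = lambda x, y: abs(x - y)
S = lambda x: 3 - x                    # non-self map M -> N (Example 3.1)

dMN = min(d(m, n) for m in M for n in N)   # here d(M, N) = 1

def next_term(a):
    """Return some a' in M attaining d(a', S(a)) = d(M, N)."""
    for m in M:
        if d(m, S(a)) == dMN:
            return m
    raise ValueError("no admissible successor in the discretization")

a = 1.0              # a_0 must lie in M_0; here M_0 = {1}
for _ in range(3):
    a = next_term(a)

# The iteration stabilizes at the best proximity point a* = 1 of S.
assert a == 1.0 and d(a, S(a)) == dMN
```

In this degenerate instance $M_{0}=\{1\}$ is a singleton, so the sequence is constant; in general the successor $a_{n+1}$ need not be unique.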
Since S is a proximal $\beta_{1}$-quasi-contraction mapping for $\psi\in\varPhi_{\beta_{1}}$ and since
$$ d(a_{n+1},Sa_{n})=d(a_{n},Sa_{n-1})=d(M,N), $$
by Definition 3.1 we have
$$\begin{aligned} d(a_{n+1},a_{n}) &\leq \psi\bigl(\max\bigl\{\alpha_{0}d(a_{n},a_{n-1}),\alpha_{1}d(a_{n},a_{n+1}),\alpha_{2}d(a_{n},a_{n-1}),\alpha_{4}d(a_{n+1},a_{n-1})\bigr\}\bigr) \\ &\leq \psi\bigl(\max\bigl\{\alpha_{0}d(a_{n},a_{n-1}),\alpha_{1}d(a_{n},a_{n+1}),\alpha_{2}d(a_{n},a_{n-1}),\alpha_{4}d(a_{n-1},a_{n})+\alpha_{4}d(a_{n},a_{n+1})\bigr\}\bigr) \\ &\leq \psi\bigl(\max\bigl\{\alpha_{0}d(a_{n},a_{n-1}),\alpha_{1}d(a_{n},a_{n+1}),\alpha_{2}d(a_{n},a_{n-1}),2\alpha_{4}\max\bigl\{d(a_{n-1},a_{n}),d(a_{n},a_{n+1})\bigr\}\bigr\}\bigr) \\ &\leq \psi\bigl(\beta_{1}\max\bigl\{d(a_{n},a_{n-1}),d(a_{n},a_{n+1})\bigr\}\bigr) \\ &= \psi_{\beta_{1}}\bigl(\max\bigl\{d(a_{n},a_{n-1}),d(a_{n},a_{n+1})\bigr\}\bigr). \end{aligned} \tag{2}$$
Now, if $\max\{d(a_{n},a_{n-1}), d(a_{n},a_{n+1})\}= d(a_{n},a_{n+1})$, then by Lemma 2.1 the above inequality becomes
$$d(a_{n+1},a_{n})\leq\psi_{\beta_{1}}\bigl(d(a_{n+1},a_{n})\bigr)< d(a_{n+1},a_{n}), $$
which is a contradiction. Thus $\max\{d(a_{n},a_{n-1}), d(a_{n},a_{n+1})\}= d(a_{n},a_{n-1})$, and the above inequality (2) becomes
$$ d(a_{n+1},a_{n})\leq \psi_{\beta_{1}}\bigl(d(a_{n-1},a_{n})\bigr). $$
By applying induction on n, the above inequality gives
$$ d(a_{n+1},a_{n})\leq \psi_{\beta_{1}}^{n}\bigl(d(a_{0},a_{1})\bigr)\quad \forall n\geq1. \tag{3} $$
Now, from the triangle inequality and Eq. (3), for positive integers $n< m$, we get
$$ d(a_{n},a_{m})\leq\sum_{k=n}^{m-1}d(a_{k},a_{k+1}) \leq \sum_{k=n}^{m-1}\psi_{\beta_{1}}^{k}\bigl(d(a_{1},a_{0})\bigr)\leq \sum_{k=1}^{\infty}\psi_{\beta_{1}}^{k}\bigl(d(a_{1},a_{0})\bigr)< \infty. $$
Hence, for every $\epsilon>0$ there exists $N>0$ such that
$$ d(a_{n},a_{m})\leq\sum_{k=n}^{m-1}d(a_{k},a_{k+1})< \epsilon \quad\text{for all }m>n>N. $$
Therefore, $d(a_{n},a_{m})<\epsilon$ for all $m>n>N$; that is, $\{a_{n}\}$ is a Cauchy sequence in M. But M is a closed subset of the complete metric space X, so $\{a_{n}\}$ converges to some element $a_{\ast}\in M$.
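The tail estimate behind this Cauchy property can be illustrated numerically. For the hypothetical choice $\psi(t)=qt$ one has $\psi_{\beta_{1}}^{k}(t)=r^{k}t$ with $r=q\beta_{1}<1$, so the bound on $d(a_{n},a_{m})$ is a geometric tail (all values below are arbitrary illustrative choices, not from the paper):

```python
# Numeric sketch of the tail bound
#   d(a_n, a_m) <= sum_{k=n}^{m-1} psi_beta1^k(d(a_0, a_1)),
# for the hypothetical psi(t) = q*t, where psi_beta1^k(t) = r**k * t.
q, beta1, d01 = 1 / 7, 2.0, 1.0   # illustrative values only
r = q * beta1                     # r = 2/7 < 1

def tail(n, m):
    # upper bound for d(a_n, a_m) from the displayed inequality
    return sum(r**k * d01 for k in range(n, m))

# The tails shrink below any epsilon once n is large enough,
# which is exactly the Cauchy criterion used in the proof.
eps = 1e-6
n = 15
assert all(tail(n, m) < eps for m in range(n + 1, n + 50))
```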
Since $T(N_{0})\subset M_{0}$, by using a similar argument as above, there exists a sequence $\{b_{n}\}\subset N_{0}$ such that $d(b_{n+1},Tb_{n})=d(M,N)$ for each n. Since T is a proximal $\beta_{2}$-quasi-contraction mapping (say $\phi\in\varPhi_{\beta_{2}}$) and since $d(b_{n+1},Tb_{n})=d(b_{n},Tb_{n-1})=d(M,N)$, we deduce from Definition 3.1 that
$$\begin{aligned} d(b_{n+1},b_{n}) &\leq \phi\bigl(\max\bigl\{\alpha_{0}d(b_{n},b_{n-1}),\alpha_{1}d(b_{n},b_{n+1}),\alpha_{2}d(b_{n},b_{n-1}),\alpha_{4}d(b_{n-1},b_{n+1})\bigr\}\bigr) \\ &\leq \phi\bigl(\max\bigl\{\alpha_{0}d(b_{n},b_{n-1}),\alpha_{1}d(b_{n},b_{n+1}),\alpha_{2}d(b_{n},b_{n-1}),\alpha_{4}d(b_{n-1},b_{n})+\alpha_{4}d(b_{n},b_{n+1})\bigr\}\bigr) \\ &\leq \phi\bigl(\max\bigl\{\alpha_{0}d(b_{n},b_{n-1}),\alpha_{1}d(b_{n},b_{n+1}),\alpha_{2}d(b_{n},b_{n-1}),2\alpha_{4}\max\bigl\{d(b_{n-1},b_{n}),d(b_{n},b_{n+1})\bigr\}\bigr\}\bigr) \\ &\leq \phi\bigl(\beta_{2}\max\bigl\{d(b_{n},b_{n-1}),d(b_{n},b_{n+1})\bigr\}\bigr) \\ &= \phi_{\beta_{2}}\bigl(\max\bigl\{d(b_{n},b_{n-1}),d(b_{n},b_{n+1})\bigr\}\bigr). \end{aligned}$$
Using a similar argument as in the case of $\{a_{n}\}$, one can show that $\{b_{n}\}$ is a Cauchy sequence in the closed subset N of the complete space X. Thus $\{b_{n}\}$ converges to some $b_{\ast}\in N$.

Now we shall show that $a_{\ast}$ and $b_{\ast}$ are best proximity points of S and T, respectively. As the pair $(S,T)$ forms a proximal cyclic contraction, it follows that
$$ d(a_{n+1},b_{n+1})\leq kd(a_{n},b_{n})+(1-k)d(M,N). \tag{4} $$
Taking the limit as $n\longrightarrow+\infty$ in Eq. (4), we get $d(a_{\ast},b_{\ast})\leq kd(a_{\ast},b_{\ast})+(1-k)d(M,N)$, and so $(1-k)d(a_{\ast},b_{\ast})\leq (1-k)d(M,N)$. This implies
$$ d(a_{\ast},b_{\ast})\leq d(M,N). \tag{5} $$
Using the fact that $d(M,N)\leq d(a_{\ast},b_{\ast})$ and (5), we get $d(a_{\ast},b_{\ast})=d(M,N)$. Therefore, we conclude that $a_{\ast}\in M_{0}$ and $b_{\ast}\in N_{0}$. On the one hand, since $S(M_{0})\subset N_{0}$ and $T(N_{0})\subset M_{0}$, there exist $u\in M$ and $v\in N$ such that
$$ d(u,Sa_{\ast})=d(v,Tb_{\ast})=d(M,N). \tag{6} $$
On the other hand, by (1), (6) and the hypothesis of the theorem that S is a proximal $\beta_{1}$-quasi-contraction mapping, we deduce that
$$ d(a_{n+1},u) \leq\psi\bigl(\max\bigl\{ \alpha_{0}d(a_{n},a_{\ast}), \alpha_{1}d(a_{n},a_{n+1}),\alpha_{2}d(a_{\ast},u), \alpha_{3}d(a_{n},u),\alpha_{4}d(a_{\ast},a_{n+1})\bigr\}\bigr). \tag{7} $$
For simplicity, we denote
$$\rho=d(a_{\ast},u)\quad\text{and}\quad A_{n}=\max\bigl\{ \alpha_{0}d(a_{n},a_{\ast}),\alpha_{1}d(a_{n},a_{n+1}),\alpha_{2}d(a_{\ast},u),\alpha_{3}d(a_{n},u),\alpha_{4}d(a_{\ast},a_{n+1})\bigr\}, $$
so that
$$ \lim_{n\longrightarrow+\infty}A_{n}=\max\{\alpha_{2},\alpha_{3}\}\rho. \tag{8} $$
Now, we show by contradiction that $\rho=0$. Suppose that $\rho>0$. First, we consider the case where assertion (i) of $(C_{4})$ is satisfied, that is, ψ is continuous. Then, taking the limit as $n\rightarrow\infty$ in (7) and using (8) and Lemma 2.1, we obtain
$$ \rho\leq\psi\bigl(\max\{\alpha_{2},\alpha_{3}\}\rho\bigr) \leq\psi(\beta_{1}\rho)=\psi_{\beta_{1}}(\rho) < \rho, $$
which is a contradiction. Now, we assume the case where assertion (ii) of $(C_{4})$ is satisfied, that is, $\beta_{1}>\max\{\alpha_{2},\alpha_{3}\}$. Then there exist $\epsilon>0$ and an integer $N>0$ such that, for all $n>N$, we have
$$ A_{n}< \bigl(\max\{\alpha_{2},\alpha_{3}\}+\epsilon\bigr)\rho\quad\text{and}\quad \beta_{1}>\max\{\alpha_{2},\alpha_{3}\}+\epsilon. $$
Therefore, the inequality (7) turns into the following inequality:
$$\begin{aligned} d(a_{n+1},u) &\leq\psi(A_{n}) \\ &\leq\psi\bigl(\bigl(\max\{\alpha_{2},\alpha_{3}\}+\epsilon\bigr)\rho\bigr)=\psi_{\beta_{1}}\biggl(\frac{\max\{\alpha_{2},\alpha_{3}\}+\epsilon}{\beta_{1}}\rho\biggr). \end{aligned}$$
Since $\psi\in\varPhi_{\beta_{1}}$, by Lemma 2.1 we have
$$ d(a_{n+1},u)< \frac{\max\{\alpha_{2},\alpha_{3}\}+\epsilon}{\beta_{1}}\rho< \rho.
$$
By letting $n\rightarrow\infty$, the above inequality yields
$$ \rho\leq\frac{\max\{\alpha_{2},\alpha_{3}\}+\epsilon}{\beta_{1}}\rho < \rho, $$
which is a contradiction as well. Thus, in both cases we get $0=\rho=d(a_{\ast},u)$, which means that $u=a_{\ast}$, and so from Eq. (6) we get $d(a_{\ast},Sa_{\ast})=d(M,N)$. That is, $a_{\ast}$ is a best proximity point of S. Similarly, by repeating the above argument word by word after replacing u by v, S by T, $\beta_{1}$ by $\beta_{2}$ and ψ by ϕ, we get that $v=b_{\ast}$, and hence by (6) $b_{\ast}$ is a best proximity point of the non-self-mapping T.

Now, we shall prove that the obtained best proximity point $a_{\ast}$ of S is unique. Assume to the contrary that there exists $x\in M$ such that $d(x,Sx)=d(M,N)$ and $x\neq a_{\ast}$. Since S is a proximal $\beta_{1}$-quasi-contractive mapping, we obtain
$$\begin{aligned} d(a_{\ast},x)& \leq\psi\bigl(\max \bigl\{ \alpha_{0}d(a_{\ast},x), \alpha_{1}d(x,x),\alpha_{2}d(a_{\ast},a_{\ast}), \alpha_{3}d(a_{\ast},x),\alpha_{4}d(a_{\ast},x) \bigr\} \bigr) \\ & \leq\psi \bigl(\max\{\alpha_{0},\alpha_{3},\alpha_{4}\} d(a_{\ast},x) \bigr) \\ & \leq\psi\bigl(\beta_{1}d(a_{\ast},x) \bigr)= \psi_{\beta_{1}} \bigl(d(a_{\ast},x) \bigr) \\ & < d(a_{\ast},x), \end{aligned}$$
which is a contradiction. Similarly, using the same argument and the fact that T is a proximal $\beta_{2}$-quasi-contractive mapping, we see that the best proximity point $b_{\ast}$ of T is unique. □

Remark

In Theorem 3.1, by taking $\alpha_{0}=\alpha_{1}=\alpha_{2}=\alpha_{3}=0$, $\alpha_{4}=1$, $\beta_{1}=\beta_{2}=1$ and $\psi(t)=\phi(t)=qt$, which is a continuous function and belongs to $\varPhi_{1}$, we obtain Corollary 3.3 in [21].

Corollary 3.1

Let $(M,N)$ be a pair of non-empty closed subsets of a complete metric space $(X,d)$ such that $M_{0}$ and $N_{0}$ are non-empty.
Let $S:M\longrightarrow N$ and $T:N\longrightarrow M$ be mappings satisfying the following conditions:

$(d_{1})$: $S(M_{0})\subset N_{0}$ and $T(N_{0})\subset M_{0}$;

$(d_{2})$: S and T are proximal quasi-contractions;

$(d_{3})$: the pair $(S,T)$ forms a proximal cyclic contraction.

Then S has a unique best proximity point $a_{\ast}\in M$ such that $d(a_{\ast},Sa_{\ast})=d(M,N)$ and T has a unique best proximity point $b_{\ast}\in N$ such that $d(b_{\ast},Tb_{\ast})=d(M,N)$. Also, these best proximity points satisfy $d(a_{\ast},b_{\ast})=d(M,N)$.

Proof

The result follows immediately from Theorem 3.1 by taking $\alpha_{0} = \alpha_{1} =\alpha_{2} =\alpha_{3} = 1$ and $\alpha_{4} = \frac{1}{2}$, $\beta_{1}=\beta_{2} = 1$ and $\psi(t)=\phi(t)=qt$. □

The following definition, which was introduced in [24], is needed to derive a fixed point result as a consequence of our main theorem.

Definition 3.2 ([24])

Let X be a non-empty set. A mapping $T:X\longrightarrow X$ is called β-quasi-contractive if there exist $\beta>0$ and $\varphi\in\varPhi_{\beta}$ such that
$$ d(Ta,Tb)\leq\varphi\bigl(H_{T}(a,b)\bigr), $$
where
$$ H_{T}(a,b)=\max\bigl\{ \alpha_{0}d(a,b), \alpha_{1}d(a,Ta),\alpha_{2}d(b,Tb),\alpha_{3}d(a,Tb), \alpha_{4}d(b,Ta)\bigr\}, $$
with $\alpha_{i}\geq0$ for $i=0,1,2,3,4$.

Corollary 3.2

Let $(X,d)$ be a complete metric space. Let $S,T:X\longrightarrow X$ be two self-mappings satisfying the following conditions:

$(E_{1})$: S is $\beta_{1}$-quasi-contractive (say, $\psi\in\varPhi_{\beta_{1}}$) and T is $\beta_{2}$-quasi-contractive (say, $\phi\in\varPhi_{\beta_{2}}$);

$(E_{2})$: for all $a,b\in X$, $d(Sa,Tb)\leq kd(a,b)$ for some $k\in(0,1)$;

$(E_{3})$: moreover, one of the following assertions holds:

(i) ψ and ϕ are continuous;

(ii) $\beta_{1},\beta_{2}>\max\{\alpha_{2},\alpha_{3}\}$.

Then S and T have a unique common fixed point.

Proof

This result follows from Theorem 3.1 by taking $M=N=X$ and noticing that the hypotheses $(E_{1})$, $(E_{2})$ and $(E_{3})$ of the corollary coincide with the first, second and third conditions of Theorem 3.1.
□

Example 3.1

Let $X=\mathbb{R}$ with the metric $d(x,y)=|x-y|$; then $(X,d)$ is a complete metric space. Let $M=[0,1]$ and $N=[2,3]$. Also, let $S:M\longrightarrow N$ and $T:N\longrightarrow M$ be defined by $S(x)=3-x$ and $T(y)=3-y$. Then it is easy to see that $d(M,N)=1$, $M_{0}=\{1\}$ and $N_{0}=\{2\}$. Thus, $S(M_{0}) = S(\{1\}) = \{2\} = N_{0}$ and $T(N_{0}) = T(\{2\}) = \{1\} = M_{0}$.

Now we show that the pair $(S,T)$ forms a proximal cyclic contraction. Indeed, $d(u,Sa) = d(M,N) = 1$ implies that $u=a=1 \in M$, and $d(v,Tb) = d(M,N) = 1$ implies that $v=b=2 \in N$. Since $d(u,Sa)=d(1,S(1))= d(1,2)=1=d(M,N)$ and $d(v,Tb)=d(2,T(2))= d(2,1)=1=d(M,N)$, we have
$$\begin{aligned} 1&= d(u,v) = d(1,2) \\ &\leq k\bigl(d(1,2)\bigr) + (1-k) d(M,N) \\ &= k + (1-k) = 1. \end{aligned}$$
So $(S,T)$ forms a proximal cyclic contraction for any $0\leq k<1$.

Now we shall show that S is a proximal $\beta_{1}$-quasi-contraction mapping with $\psi(t)=\frac{1}{7}t$, $\beta_{1}=2$, $\alpha_{i}=\frac{1}{6}$ for $i=0,1,2,3$ and $\alpha_{4} = \frac{1}{100}$. Note that $\psi(t)= \frac{1}{7}t \in\varPhi_{2}$ since $\psi_{\beta_{1}}(t)= \psi_{2}(t)= \frac{2}{7}t$. As above, the only $a,b,u,v\in M$ such that $d(u,Sa)=d(M,N)=1=d(v,Sb)$ are $a=b=u=v =1 \in M$. But
$$\begin{aligned} 0&=d(u,v) =d(1,1) \\ &\leq\frac{1}{7}\max\biggl\{ \frac{1}{6}d(a,b), \frac{1}{6}d(a,u),\frac{1}{6}d(b,v), \frac{1}{6}d(a,v),\frac{1}{100}d(b,u)\biggr\} \\ &= \psi\bigl(\max\{0,0,0,0,0\}\bigr) \\ &= 0. \end{aligned}$$
So S is a proximal $\beta_{1}$-quasi-contraction mapping. We deduce, using our Theorem 3.1, that S has a unique best proximity point, which is $a_{\ast} =1$ in this example.

Similarly, by using the same argument as above, we can show that T is a proximal $\beta_{2}$-quasi-contraction mapping with $\phi(t)=\frac{1}{8}t$, $\beta_{2}=3$, $\alpha_{i}=\frac{1}{6}$ for $i=0,1,2,3$ and $\alpha_{4} = \frac{1}{100}$.
Note that $\phi(t)= \frac{1}{8}t \in\varPhi_{3}$ since $\phi_{\beta_{2}}(t)= \phi_{3}(t)= \frac{3}{8}t$. As above, the only $a,b,u,v\in N$ such that $d(u,Ta)=d(M,N)=1=d(v,Tb)$ are $a=b=u=v =2 \in N$. But
$$\begin{aligned} 0&=d(u,v) =d(2,2) \\ &\leq\frac{1}{8}\max\biggl\{ \frac{1}{6}d(a,b), \frac{1}{6}d(a,u),\frac{1}{6}d(b,v),\frac{1}{6}d(a,v), \frac{1}{100}d(b,u)\biggr\} \\ &= \phi\bigl(\max\{0,0,0,0,0\}\bigr) \\ &= 0. \end{aligned}$$
So T is a proximal $\beta_{2}$-quasi-contraction mapping. We deduce, using Theorem 3.1, that T has a unique best proximity point, which is $b_{\ast} =2$. Finally, $\psi(t)$ and $\phi(t)$ are continuous mappings, and $\beta_{1}, \beta_{2} > \max_{0\leq i \leq3}\{\alpha_{i}\}$. Therefore
$$ d(a_{\ast},b_{\ast})=d(1,2)=1=d(M,N). $$

Conclusion

Improvements to some best proximity point theorems are proposed. In particular, the result due to Basha [21] for proximal contractions of the first kind is generalized. Furthermore, we propose a similar result on the existence and uniqueness of best proximity points of proximal quasi-contractions introduced by Jleli and Samet in [20]. This has been achieved by introducing β-quasi-contractions involving β-comparison functions introduced in [24].

References

1. Basha, S.S.: Extensions of Banach's contraction principle. Numer. Funct. Anal. Optim. 31, 569–576 (2010)
2. Fan, K.: Extension of two fixed point theorems of F. E. Browder. Math. Z. 112, 234–240 (1969)
3. Reich, S.: Approximate selections, best approximations, fixed points and invariant sets. J. Math. Anal. Appl. 62, 104–113 (1978)
4. Sehgal, V.M., Singh, S.P.: A generalization to multifunctions of Fan's best approximation theorem. Proc. Am. Math. Soc. 102, 534–537 (1988)
5. Prolla, J.B.: Fixed point theorems for set valued mappings and existence of best approximations. Numer. Funct. Anal. Optim. 5, 449–455 (1983)
6. Basha, S.S.: Best proximity point theorems generalizing the contraction principle. J.
Nonlinear Anal. Optim., Theory Appl. 74, 5844–5850 (2011)
7. Basha, S.S.: Best proximity point theorems: an exploration of a common solution to approximation and optimization problems. Appl. Math. Comput. 218, 9773–9780 (2012)
8. Basha, S.S., Sahrazad, N.: Best proximity point theorems for generalized proximal contractions. Fixed Point Theory Appl. 2012, 42 (2012)
9. Sadiq Basha, S., Veeramani, P.: Best approximations and best proximity pairs. Acta Sci. Math. 63, 289–300 (1997)
10. Sadiq Basha, S., Veeramani, P., Pai, D.V.: Best proximity pair theorems. Indian J. Pure Appl. Math. 32, 1237–1246 (2001)
11. Sadiq Basha, S., Veeramani, P.: Best proximity pair theorems for multifunctions with open fibres. J. Approx. Theory 103, 119–129 (2000)
12. Raj, V.S.: A best proximity point theorem for weakly contractive non-self-mappings. Nonlinear Anal. 74, 4804–4808 (2011)
13. Karapinar, E.: Best proximity points of Kannan type-cyclic weak phi-contractions in ordered metric spaces. An. Ştiinţ. Univ. 'Ovidius' Constanţa 20, 51–64 (2012)
14. Vetro, C.: Best proximity points: convergence and existence theorems for P-cyclic-mappings. Nonlinear Anal. 73, 2283–2291 (2010)
15. Samet, B., Vetro, C., Vetro, P.: Fixed point theorems for α-ψ-contractive type mappings. Nonlinear Anal. 75, 2154–2165 (2012)
16. Jleli, M., Karapinar, E., Samet, B.: Best proximity points for generalized α-ψ proximal contractive type mappings. J. Appl. Math. 2013, Article ID 534127 (2013)
17. Aydi, H., Felhi, A.: On best proximity points for various α-proximal contractions on metric-like spaces. J. Nonlinear Sci. Appl. 9(8), 5202–5218 (2016)
18. Aydi, H., Felhi, A., Karapinar, E.: On common best proximity points for generalized α-ψ-proximal contractions. J. Nonlinear Sci. Appl. 9(5), 2658–2670 (2016)
19. Ayari, M.I.: Best proximity point theorems for generalized α-β-proximal quasi-contractive mappings. Fixed Point Theory Appl. 2017, 16 (2017)
20. Jleli, M., Samet, B.: An optimisation problem involving proximal quasi-contraction mappings. Fixed Point Theory Appl.
2014, 141 (2014)
21. Basha, S.S.: Best proximity point theorems generalizing the contraction principle. Nonlinear Anal. 74, 5844–5850 (2011)
22. Shatanawi, W., Mustafa, Z., Tahat, N.: Some coincidence point theorems for nonlinear contraction in ordered metric spaces. Fixed Point Theory Appl. 2011, 68 (2011)
23. Shatanawi, W., Postolache, M., Mustafa, Z., Tahat, N.: Some theorems for Boyd–Wong type contractions in ordered metric spaces. Abstr. Appl. Anal. 2012, Article ID 359054 (2012). https://doi.org/10.1155/2012/359054
24. Ayari, M.I., Berzig, M., Kedim, I.: Coincidence and common fixed point results for β-quasi contractive mappings on metric spaces endowed with binary relation. Math. Sci. 10(3), 105–114 (2016)

Availability of data and materials: Please contact the authors for data requests.

Author information:
Mohamed Iadh Ayari — Institut National des Sciences Appliquées et de Technologie de Tunis, Carthage University, Tunis, Tunisia
Zead Mustafa, Mohammed Mahmoud Jaradat — Department of Mathematics, Statistics and Physics, Qatar University, Doha, Qatar

Authors' contributions: The authors contributed equally to the preparation of the paper. The authors read and approved the final manuscript.

Correspondence to Mohamed Iadh Ayari.

Keywords: Best proximity points; Proximal β-quasi-contractive mappings on metric spaces and proximal cyclic contraction
What's in a face? The role of facial features in ratings of dominance, threat, and stereotypicality

Heather Kleider-Offutt (ORCID: orcid.org/0000-0001-8002-1079), Ashley M. Meacham, Lee Branum-Martin & Megan Capodanno

Faces judged as stereotypically Black are perceived negatively relative to less stereotypical faces. In this experiment, artificial faces were constructed to examine the effects of nose width, lip fullness, and skin reflectance, as well as to study the relations among perceived dominance, threat, and Black stereotypicality. Using a multilevel structural equation model to isolate contributions of the facial features and the participant demographics, results showed that stereotypicality was related to wide nose, darker reflectance, and to a lesser extent full lips; threat was associated with wide nose, thin lips, and low reflectance; dominance was mainly related to nose width. Facial features explained variance among faces, suggesting that face-type bias in this sample was related to specific face features rather than particular characteristics of the participant. People's perceptions of relations across these traits may underpin some of the sociocultural disparities in treatment of certain individuals by the legal system.

Significance statement

Faces judged as stereotypically Black (i.e., Afrocentric) are perceived negatively relative to less stereotypical faces, and this face-type bias influences a variety of real-world outcomes including employment and legal decisions. Dominance is a first-impression trait that is cued by facial structure and is associated with threat and criminality. In this experiment, we investigated whether facial features that are perceived as dominant and threatening may be consistent with stereotypically Black features and thereby explain some of the biased treatment of people who have this face-type.
Artificial faces were constructed to manipulate facial features to study the relations among perceived dominance, threat, and Black stereotypicality. People were shown faces with different combinations and variations of facial features typically associated with stereotypicality: nose width, lip fullness, and variations in skin tone (here manipulated as reflectance: shadowing and texture). After presentation, people judged how well each face represented the three factors of interest (traits). Results showed that stereotypicality was related to wide nose and darker reflectance and to a lesser extent full lips; threat was associated with wide nose, thin lips, and low reflectance; dominance was mainly related to nose width. People were influenced by the facial features when making trait judgments, while the demographics of the perceiver (race, age, gender) did not change how the faces were judged. These results suggest that the extent to which people perceive dominance, threat, and stereotypicality as related may underpin some of the sociocultural disparities in treatment of certain individuals in an applied context.

The "Barbie Bandits", two attractive teenage girls who robbed banks in Georgia (Joseph, 2009), likely were successful during their heists because they surprised bank tellers with their atypical appearance. Jeremy Meeks, the "Sexy mugshot guy" (Rayne, 2016), who was arrested for robbery and assault, gained notoriety and a modeling contract as a result of good looks, despite his criminal activity. People judge faces quickly, making first impression judgments in as little as 100 ms (Bar et al., 2006; Willis & Todorov, 2006). Speeded judgments are often biased and based on little or no information about actual behavior (Oosterhof & Todorov, 2008).
Instead, people form impressions of one another and assume character traits based in part on facial structure and the extent to which facial cues support preconceived expectations for behavior (Blair et al., 2004a, 2004b; Dotsch & Todorov, 2012; Kleider-Offutt et al., 2017a, 2017b). Face judgment research finds commonalities in facial structure that lead to judgments of dominance, trustworthiness, and a variety of other trait-based assumptions (for a review, see Oosterhof & Todorov, 2008; Zebrowitz et al., 2011). These judgments may play a role in how people are perceived and may relate to important applied decisions, such as political elections (Todorov et al., 2005), military rank (Mazur et al., 1984; Mueller & Mazur, 1998), and court system outcomes relating to sentence severity and guilty verdicts (Blair et al., 2004a, 2004b; Kleider-Offutt et al., 2017a, 2017b; Porter et al., 2010). These face trait judgments occur for race- and gender-ambiguous faces, suggesting that susceptibility to biased assessment may be ubiquitous (Ito et al., 2011; Kaminska et al., 2020). However, in scientific research and the news media, Black faces specifically garner biased judgment (Dixon, 2017; Dixon & Azocar, 2007; Kleider-Offutt, 2019; Kleider-Offutt et al., 2017a, 2017b). The focus of the current study is to identify facial features associated with assumed behavioral traits that underpin biased judgments of Black individuals. Black men, specifically, are vulnerable to face-type bias and assumed criminality due to associations with the Black man criminal stereotype (Kleider et al., 2012; Kleider-Offutt, 2019; Kleider-Offutt et al., 2018; Knuycky et al., 2014). Black men with stereotypically Black features are often judged more negatively and more criminal in real-world and laboratory settings than are their counterparts who possess more atypical features (Blair et al., 2004a, 2004b; Kleider et al., 2012).
In addition, men with more stereotypical features are more likely to be misidentified (Flowe & Humphries, 2011; Kleider-Offutt et al., 2017a, 2017b) and given more punitive sentences (Eberhardt et al., 2006) than are Black men judged as possessing fewer stereotypical features in criminal cases. For example, Black men who were misidentified as the perpetrator in a crime, incarcerated, and later exonerated based on DNA evidence (i.e., factually innocent) were judged by an independent sample of people as being more stereotypically Black than were Black exonerees who were falsely incarcerated for reasons other than eyewitness identification error (Kleider-Offutt et al., 2017a, 2017b). These findings suggest a bias to associate certain face-types with negative (e.g., criminal) actions (Kleider-Offutt et al., 2017a, 2017b). Discussions around what drives this bias suggest that stereotypically Black features may activate negative racial stereotypes that can result in associations with fear (Golkar et al., 2015; Olsson et al., 2005). A body of research is focused on identifying what aspects of a Black face lead to negative associations for White participants. Some studies find that darker skin tone is what drives the effect (Maddox & Gray, 2002). Alternatively, some research suggests that facial features and skin tone are used together (Deregowski et al., 1975; Livingston & Brewer, 2002), while others argue that they are used independently to inform these negative associations (for a review, see Hagiwara et al., 2012; Stepanova & Strube, 2009). Although this is important work that aims to better understand what features cue negative responses, these studies did not test the specific features, or combination of features, that compose a stereotypical Black face—which is the next step in understanding why some within-race faces are judged especially harshly. One study did test specific features to determine prototypicality for several race groups. Strom et al.
(2012) tested how facial metrics (e.g., face width, feature size) and skin tone influenced judgments of prototypicality across Black, White, and Korean faces. Results for Black faces showed that facial metrics had the biggest influence on White perceivers' prototypicality ratings, while skin tone was consistently impactful for Black and Korean perceivers. Black face prototypicality was not specifically identified by metrics; however, relative to White faces, Black faces were rated as having a wider nose, thicker lips, and a wider jawline (Strom et al., 2012). Aside from this study, the bulk of the research that attributes behavioral associations to Black face-types generally suggests that stereotypicality includes some combination of a wider nose, fuller lips, and darker skin (e.g., Blair, 2006; Blair et al., 2004a, 2004b). Thus, testing and identifying what features specifically define a stereotypically Black face will inform what cues associations to criminality and negative judgments. People have stereotypes about what makes a criminal face (MacLin & Herrera, 2006; MacLin & MacLin, 2004): they have long, shaggy, dark hair; tattoos; beady eyes; pockmarks; and scars. Faces rated high in criminality may also be identified from police lineups on appearance alone (Flowe & Humphries, 2011), and such a response is associated with criminal face-type bias. Similarly, participants making speeded first impression judgments of convict faces revealed that criminality was determined immediately and was related to judgments of low trustworthiness and high dominance (Klatt et al., 2016). These studies focused on Caucasian faces, but similar biases occur for Black faces (e.g., Kleider et al., 2012). How people form these judgments so quickly is a point of discussion. One idea is that people infer personality traits from the similarity of a person's facial features to emotional expressions (i.e., the Emotion Overgeneralization hypothesis; Zebrowitz, 2004).
Emotionally neutral faces that look angry are perceived high in dominance, while neutral faces that appear happy are perceived as trustworthy. To test the influence of these traits on criminality, Flowe and Humphries (2011) had participants rate cropped faces of actors and inmates (cropped so that no clothing or background information was available) on criminality, anger, dominance, trustworthiness, and maturity (i.e., baby-facedness). Results showed that, regardless of face group, both male and female faces that were judged high in criminality were also judged as high in dominance and low in trustworthiness, with angry faces being perceived as the most dominant. This suggests that a possible cue to determining that a face is threatening (i.e., associated with fear) and also criminal is the extent to which the face looks dominant. This relationship is borne out by face trait models that show that the more dominant a face is perceived, the more threatening it is judged; and these impressions of threat are closely tied to criminal appearance (Funk et al., 2017). To investigate the relationship between facial cues and trait assessments, Oosterhof and Todorov (2008) hypothesized a framework for face evaluation. They used a data-driven approach, based on principal components analysis of 2D facial images, wherein people made judgments of face traits and then determined which facial features mapped onto which traits. Through this computational modeling approach, they could model social perception of faces tied to facial structure that influenced a specific judgment, such as dominance or trustworthiness. Using this approach, they could modify the structure of new faces to increase or decrease how trustworthy or dominant they looked. These models have been examined in several studies (Oosterhof & Todorov, 2008; Todorov et al., 2013; Walker & Vetter, 2009), suggesting that spontaneous trait inferences made based on facial appearance are derived from valence and dominance.
In Todorov et al.'s (2008, 2011, 2013) model of face evaluation, valence is a cue to whether a person should be approached or avoided, while dominance cues the likelihood of a person inflicting physical harm. Features of faces associated with happiness and anger (i.e., valence) are overgeneralized to determine whether a person is trustworthy and should be approached or avoided. Facial features that appear dominant (e.g., looking more masculine or mature) are used to evaluate physical strength. From an evolutionary standpoint, these findings suggest that these cues are adaptive for determining who to approach and who to avoid. In support of this idea, Todorov et al. (2013) found that assessments of threat derived from facial appearance are negatively associated with perceptions of trustworthiness and positively associated with perceptions of dominance. In a similar vein, Hehman et al. (2017) investigated the contribution of dominance, trustworthiness, and youthful-attractiveness on face judgments, focusing on the different contributions of the perceiver and the stimuli. They found that trait-based factors representing character (e.g., dominance) are driven more by the perceiver than are factors based on appearance (e.g., attractiveness). The authors explained how cross-classified regression can estimate the amount of variance due to faces, raters, and error, and that trait impressions are derived from several sources. What makes a face dominant, trustworthy, and threatening is well established; what is less clear is what features, or combinations of features, make a face stereotypically Black, and how those features may relate to these other traits. Could it be that features that are consistently rated as dominant are consistent with features that are rated as stereotypically Black, and therefore threatening? The current study will take the next step in addressing this gap in the literature.
We plan to evaluate whether specific facial features, or combinations of features, considered stereotypically Black are also associated with dominance and threat. We hypothesize that Black stereotypicality, dominance, and threat will be positively related traits. To test this expectation, we will focus on three main aims: (1) to examine how lip width, nose width, and skin reflectance correspond to ratings of dominance, threat, and stereotypicality; (2) to examine the extent to which rater characteristics may affect face ratings; (3) to evaluate the extent to which ratings of dominance, threat, and stereotypicality are related to each other after accounting for the effects of facial features and rater demographics. Together these results will help to determine whether some of the bias found in judgments of more versus less stereotypically Black faces is underpinned by feature judgments that are afforded to all faces with these features. In addition, the participant sample used in this study is primarily Black women, while much of the research to date on face-type bias focuses on a White sample. Assessing trait judgments in a sample of people who are the target of the biased judgments will aid in understanding not only the cultural implications of face-type bias but the ubiquitous nature of such judgments. Moreover, this work addresses the need for face perception research to extend beyond primarily White samples, as the fluidity of face judgments may be based on context and the racial group that one identifies with (Willadsen-Jensen & Ito, 2008).

Participants were 341 Georgia State University (GSU) undergraduate psychology students. All of the students participated for course credit and self-identified their age (range = 18–51+ years old; 89.2% 18–21 years old), gender (260 female, 74 male, 5 non-binary, 2 prefer not to respond), and race (126 Black, 64 White, 71 Asian, 47 Hispanic/Latinx, 1 Native American, 28 Bi/Multi-racial, 4 other).
All participants provided informed consent. FaceGen Modeller software (Singular Inversions, Toronto, Canada) was used to generate an average, baseline face (i.e., no feature manipulations) that was subsequently altered on different feature dimensions to create our core stimulus set. Stimulus faces were computer-generated (Fig. 1) to afford complete control over feature manipulations. Additionally, faces were presented without hair or specific skin tone (i.e., faces were racially ambiguous), such that each face was initially generated as a 'European' face in FaceGen Modeller and further adjusted to appear slightly darker in complexion utilizing the software, to isolate responses to the manipulated features as much as possible. Faces were presented in full color to participants.

Fig. 1 Sample stimuli. Top row from left to right: Baseline face (average nose and average lips); average nose and thin lips; average nose and full lips. Middle row from left to right: Thin nose and average lips; thin nose and thin lips; thin nose and full lips. Bottom row from left to right: Wide nose and average lips; wide nose and thin lips; wide nose and full lips. Each column from left to right: no reflectance, medium reflectance, high reflectance. Participants viewed full color images of the faces.

Building from the average, baseline face, each successive stimulus face was manipulated to contain a specific level of nose width (wide, average, thin), lip fullness (full, average, thin), and/or reflectance (skin texture and brightness; none, medium, or high). Nose and lip features, specifically, were adjusted using the built-in sliding scale controls in FaceGen Modeller. Furthermore, each level of each feature (e.g., thin nose, full lips, etc.) was scaled to the same value for each face with that specific feature.
To achieve varying levels of reflectance, we altered the contrast of the photographs (i.e., no contrast [no reflectance], 50% contrast [medium reflectance], 100% contrast [high reflectance]). It is important to note that reflectance is not meant to cue race in this paradigm, but rather we are interested in whether manipulations of skin texture and brightness, which have previously been shown to signal dominance and threat, interact with nose and lip manipulations to influence judgments of perceived stereotypically Black faces. In total, nine faces were created with different combinations of nose width and lip fullness, and each of these nine faces was further manipulated for each level of reflectance. These three features, with three levels each, yielded a set of 27 distinct stimulus faces in total. While the stimulus set is relatively small, we have maintained maximal control over the unique faces, which allowed us to assess the individual and combined influence of each feature on our outcomes of interest. Furthermore, pre-ratings of the stimuli were not collected since the goal of the study was to obtain information regarding first impressions of specifically manipulated facial characteristics (nose, lips, and reflectance).

In a computer laboratory with seven partitioned workspaces, each participant was randomly presented with the 27 unique facial models sequentially at the center of their computer screen. Before the presentation of each stimulus face, a fixation cross appeared in the center of the screen for 500 ms. The fixation cross was then replaced by a stimulus face for an additional 500 ms. Although prior literature has shown that individuals can form a first impression in as little as 38 ms (Oosterhof & Todorov, 2008), initial test subjects were given 100 ms to view a face. However, participants expressed stress and discomfort concerning the speed of presentation time.
Thus, the stimulus presentation time was increased to 500 ms to reduce the likelihood of a potential stress response among participants, while also maintaining the desired speeded nature of the task. Following the presentation of each face, participants provided judgments on a variety of randomized perceived inherent traits of the face and behavioral attributes (e.g., dominance, stereotypicality, threat). The full list of traits and application-based questions that were assessed, including those not used in the current report, can be found in "Appendix 1". The response scale ranged from 1 (not at all) to 7 (extremely) for each trait judgment. Participants had unlimited time to make their response via keypress (1–7). No two participants saw the facial stimuli presented in the exact same order, nor did participants make behavioral/applied judgments in the same order for each face. Given that both the facial stimuli and associated judgments were fully randomized for each participant, we did not expect any carryover effects. It is important to note that each of the 27 stimulus faces was shown for a total of 14 consecutive trials in which respondents would rate the face on eight traits and then respond to six applied judgment questions. These trait ratings and judgment questions are listed in "Appendix 2". In these trials, a face would appear for 500 ms, then a judgment question, then the same face would appear for another 500 ms followed by a different judgment question, and so on until that specific stimulus had been rated on 14 different trait and judgment questions. As an initial complex multivariate model, we present in this paper an analysis of the three traits of stereotypicality, dominance, and threat. After completing the face rating task, participants completed the Symbolic Racism 2000 Scale (Henry & Sears, 2002; not included in the following models) and a brief demographics questionnaire.
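The stimulus design and trial structure described above can be sketched in code. This is an illustrative reconstruction, not the authors' actual experiment script; the question labels are placeholders standing in for the eight traits and six applied questions listed in the appendices.

```python
from itertools import product
import random

# Three facial features, three levels each, as described in the Methods.
NOSE = ["thin", "average", "wide"]
LIPS = ["thin", "average", "full"]
REFLECTANCE = ["none", "medium", "high"]

# The full factorial crossing yields the 27 distinct stimulus faces.
faces = [
    {"nose": n, "lips": l, "reflectance": r}
    for n, l, r in product(NOSE, LIPS, REFLECTANCE)
]
assert len(faces) == 27

# Each face is rated on 8 traits and 6 applied judgment questions,
# i.e. 14 consecutive trials per face (labels here are placeholders).
QUESTIONS = [f"trait_{i}" for i in range(1, 9)] + [f"applied_{i}" for i in range(1, 7)]

def build_session(seed=0):
    """Randomize face order and, within each face, question order."""
    rng = random.Random(seed)
    order = faces[:]
    rng.shuffle(order)
    trials = []
    for face in order:
        qs = QUESTIONS[:]
        rng.shuffle(qs)
        trials.extend((face, q) for q in qs)
    return trials

trials = build_session()
assert len(trials) == 27 * 14  # 378 rating trials per participant
```

Because each participant's session is built with its own random seed, no two participants see the faces or questions in the same order, matching the randomization described above.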
The model was a joint set of three multilevel regressions: responses to 27 faces within 341 raters, where each rating (dominance, threat, or stereotypicality) was predicted by facial features and rater characteristics. The general form of this model of face f by rater r can be conceptually represented by: $$Rating_{fr} = Nose_{f} + Lip_{f} + Reflectance_{f} + Race_{r} + Age_{r} + Gender_{r} + e_{r}$$ where Rating_{fr} represents the response for that trait (stereotypicality, dominance, or threat); Nose_{f} represents the level of nose width (thin, average, or wide) for that face; Lip_{f} represents the level of lip fullness (thin, average, full) for that face; Reflectance_{f} represents skin texture and brightness (low, moderate, high) for that face, plus all two-way interactions for these three features (not shown); Race_{r} represents the race of the rater (Black, White, Hispanic/Latinx, Asian, Biracial, or other); and e_{r} is random error. The full representation of these variables and how they were coded is presented in "Appendix 2". This regression was fit jointly for the three traits: dominance, threat, and stereotypicality (i.e., as a simultaneous structural equation model of three rating outcomes). All models were fit in Mplus version 8.1 (Muthén & Muthén, 2017), treating the 7-point ratings as continuous (findings were highly similar when we treated the ratings as categorical, so we chose to report here the simpler, continuous score model). The demographics of the 341 participants were modeled as dummy variables, such that White (19%), Asian (21%), Hispanic/Latinx (14%), Biracial (8%), other race (2%), non-female (male, non-binary, and prefer not to respond) participants (24%), and those older than 21 (11%) were compared to participants who identified as Black, women, and no older than 21 years old.
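As a concrete illustration of the dummy coding just described, the sketch below builds the six main-effect indicators and twelve two-way-interaction indicators for a single face and computes a model-implied rating. The coefficient values are invented purely for illustration; they are not the fitted Mplus estimates, and the rater-level terms are omitted.

```python
import numpy as np

def feature_dummies(nose, lips, reflectance):
    """Dummy-code one face; 'average' nose/lips and 'medium' reflectance
    are the reference levels, matching the coding scheme in the text."""
    n = {"thin": nose == "thin", "wide": nose == "wide"}
    l = {"thin": lips == "thin", "full": lips == "full"}
    r = {"none": reflectance == "none", "high": reflectance == "high"}
    main = [n["thin"], n["wide"], l["thin"], l["full"], r["none"], r["high"]]
    # 12 two-way interaction terms: nose x lips, nose x reflectance, lips x reflectance.
    inter = (
        [ni and li for ni in n.values() for li in l.values()]
        + [ni and ri for ni in n.values() for ri in r.values()]
        + [li and ri for li in l.values() for ri in r.values()]
    )
    return np.array(main + inter, dtype=float)  # 18 columns in total

# Hypothetical coefficients for illustration only (NOT the fitted values).
intercept = 4.0
beta = np.zeros(18)
beta[1] = 0.5  # e.g., a wide nose raises the predicted rating

def predicted_rating(nose, lips, reflectance):
    return intercept + feature_dummies(nose, lips, reflectance) @ beta

# The reference face (average nose, average lips, medium reflectance)
# has all dummies equal to zero, so its prediction is just the intercept.
assert predicted_rating("average", "average", "medium") == intercept
assert predicted_rating("wide", "average", "medium") == 4.5
```

This coding makes the intercept directly interpretable as the predicted rating for the all-reference face, which is also how the dotted mean lines in the figures are anchored.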
Because ratings were nested within faces and within raters (i.e., were cross-classified; Hehman et al., 2017), we initially fit a trivariate model of the three outcome ratings nested within faces and raters. However, in this cross-classified model, the face level was nearly fully explained, with near-zero residual variances—an understandable finding because we modeled all 27 feature patterns which were designed into the study. We therefore fit the same data to a two-level model of ratings in raters, and the model fit was essentially the same (cross-classified DIC = on 96 parameters; two-level DIC = on 90 parameters). Moreover, we graphed the model-based predictions and found no strong substantive differences. We therefore present the technically simpler two-level results. In addition, we wished to evaluate the pattern of all possible effects and to discourage dichotomous yes/no thinking for individual effects (especially in the presence of interactions). We therefore focus on the overall model-implied effects in the graphs, presented in the appendices, which are expected to be invariant under different coding schemes. Indeed, we wish to discourage unrealistic, overly narrow reliance on p-values for individual effects because our model is attempting to capture the design of all 3 features, each with 3 levels. We therefore rely on the graphs of the model-implied effects, rather than estimates of individual parameters.

Question 1: what features predict dominance, threat, and Black stereotypicality?

Effects of facial features

Facial features were modeled as contrasts of two extremes, each around an average: noses were thin, average, or wide; lips were thin, average, or full; reflectance was none, medium, or high. These features were modeled as dummy variables for thin and for wide/full (versus average) noses and lips, and for none/high versus medium reflectance (see "Appendix 2" for equations).
This coding scheme allowed us to directly model every condition in the experiment, and without making assumptions of linearity or equal intervals between low, average, and high conditions of the facial features. In addition, all two-way interactions of these dummy variables were also modeled. Because these six main effects and 12 two-way interactions can be cumbersome to display and difficult to interpret, we present graphs for the model-implied effects on each of the three outcomes. The parameter estimates for the predictions by facial features are presented in "Appendix 3" Table 2. Figure 2 presents the model-estimated ratings for dominance. The vertical axis is the predicted rating on the 7-point scale. The horizontal axis represents nose width: thin, average, and wide. Each panel represents one level of lip fullness: thin, average, and full. Within each panel, there is a separate line for reflectance: none (light gray), medium (gray), and high (black). The dotted horizontal line represents the model-predicted average (intercept).

Fig. 2 Model-predicted dominance ratings. The vertical axis is the predicted dominance score, with a dotted line for the model-implied mean. The horizontal axis represents the levels of nose width. The three lines represent degrees of reflectance (none, medium, high). Each panel represents the degree of lip fullness.

The steep upward slopes in Fig. 2 show an appreciable effect for nose width, suggesting that wider noses were seen as more dominant while thinner noses were seen as less dominant. The effect of nose width ranged up to around half a unit on the 7-point scale. The other lines did not differ much from each other, and all lines in the three panels fell within half a unit of four, suggesting only small effects of lip fullness and reflectance on judgments of dominance. Figure 3 shows the estimated ratings for threat. In all three panels, there is an upward trend for wide noses, but negligible differences for thin versus average noses.
This suggests that wider noses were generally seen as more threatening. The line for no reflectance (light gray) sits high, while there was little distinction between medium (gray) and high (black) reflectance. Thus, the absence of reflectance (i.e., none) was associated with increased judgments of threat. Thin lips (left panel) were generally more threatening than average and full lips (middle and right panels, respectively).

Fig. 3 Model-predicted threat ratings. The vertical axis is the predicted threat score, with a black, dotted line for the model-implied mean. The horizontal axis represents the levels of nose width. The three lines represent degrees of reflectance (none, medium, high). Each panel represents the degree of lip fullness.

Figure 4 shows the estimated total effects for stereotypicality. The steep upward slope across all three panels suggests that wider noses were seen as more stereotypical of Black faces, while thinner noses were seen as less stereotypical. In a similar fashion, there is a moderate amount of separation between high (black), medium (gray), and low (light gray) reflectance, with high reflectance positioned higher on the scale and low reflectance positioned lower. Therefore, higher reflectance was associated with increased judgments of stereotypicality (higher lines overall), whereas lower reflectance was associated with decreased judgments of stereotypicality (lower lines). Full lips (right panel) were slightly more stereotypical (higher lines) than average and thin lips (middle and left panels, respectively), although this distinction is somewhat diminished (i.e., modulated) by the combinations of nose width and reflectance.

Fig. 4 Model-predicted stereotypicality ratings. The vertical axis is the predicted stereotypicality score, with a black, dotted line for the model-implied mean. The horizontal axis represents the levels of nose width. The three lines represent degrees of reflectance (none, medium, high).
Each panel represents the degree of lip fullness.

Question 2: do participant demographics predict judgments of dominance, threat, and Black stereotypicality?

Effects of rater characteristics

The parameter estimates for the rater level of the full, conditional model are presented in "Appendix 3" Table 3. The three predictors were older age (> 21 years old), race (White, Asian, Hispanic/Latinx, Biracial, or other, each coded as its own dummy variable), and non-female gender (91.4% male, 6.2% non-binary, 2.4% prefer not to respond). These predictors were in reference to younger adults, Black participants, and women, respectively. Age had no substantial effects on any of the three outcomes of interest. Gender had a small effect on judgments of dominance, such that non-female participants judged faces as less dominant (β = − 0.14). Additionally, participant race had an effect on judgments of Black stereotypicality, such that White (β = − 0.31), Asian (β = − 0.21), and Hispanic/Latinx (β = − 0.20) participants judged faces as less stereotypical. Race also had a small effect on perceptions of threat, such that Asian participants rated faces as more threatening (β = 0.16).

Question 3: what is the relationship between dominance, threat, and Black stereotypicality after controlling for the effects of participant demographics?

Trait correlations among raters

Table 1 shows the correlations among the three outcomes after accounting for the effects of rater demographics (i.e., rater-level random effects). The model-implied means (intercepts) and standard deviations of the ratings are shown at the bottom of the table. The three traits were all positively correlated with one another (r = 0.34–0.55). This finding indicates that despite rater demographics and facial features, ratings across threat, dominance, and stereotypicality were positively related.
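The kind of rater-level residual correlation reported here can be illustrated with a small numerical sketch. The residuals below are synthetic, built around a shared "personal bias" component so that the three traits correlate; they merely stand in for the rater-level random effects the fitted model actually estimates.

```python
import numpy as np

rng = np.random.default_rng(42)
n_raters = 341  # sample size from the study

# Synthetic rater-level residuals for dominance, threat, and
# stereotypicality, each sharing a common "personal bias" component
# plus independent noise (so the true correlation is positive).
shared = rng.normal(size=n_raters)
residuals = np.column_stack(
    [shared + rng.normal(size=n_raters) for _ in range(3)]
)

# Correlations among the three traits after fixed effects are removed
# (here, trivially, by construction): a 3 x 3 correlation matrix.
corr = np.corrcoef(residuals, rowvar=False)

assert corr.shape == (3, 3)
assert np.allclose(np.diag(corr), 1.0)
# Off-diagonal entries come out positive, echoing the reported pattern.
assert (corr[np.triu_indices(3, k=1)] > 0).all()
```

In the actual analysis these correlations come from the rater-level random effects of the structural equation model rather than from raw residuals, but the interpretation is the same: positive off-diagonal entries indicate that raters who judge faces high on one trait also tend to judge them high on the others.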
Table 1 Model-based residual correlations of traits among raters

Inequality in the way people are judged and ultimately treated in a variety of contexts, including the legal system, may begin with biased first impressions based on facial features. Previous research links first impression judgments of certain facial features/structure to perceptions of dominance and indicators of threat (Toscano et al., 2016). Moreover, facial features that are perceived as being stereotypically Black are touted as a harbinger for biased judgments related to criminality (e.g., Kleider et al., 2012; Kleider-Offutt et al., 2017a, 2017b). However, perceptions of faces likely result from a combination of the features of the faces as well as differences among the perceivers. The current study investigated what specific facial features were associated with judgments of Black stereotypicality and whether these features were also perceived as dominant and threatening. This research may provide initial information to better understand within-race variability in treatment and why some Black individuals are perceived as dominant and/or threatening without performing any overt actions to indicate negative behavior.

Overview of findings

Face judgment is complex, involving faces and raters (Hehman et al., 2017). Ignoring such differences due to faces and raters may be misleading with regard to relationships across traits. For example, zero-order correlations ("Appendix 3" Table 4) would suggest that dominance is positively related to both threat (r = 0.34) and stereotypicality (r = 0.20), but that ratings of threat and stereotypicality are essentially unrelated. However, our model shows that all three traits were positively related (r = 0.34–0.55) after controlling for facial features and rater differences. Also, our results suggest that participant demographics do little to explain the relationship between these traits.
This suggests that the relationship across these three traits is largely driven by facial features and not driven by the specific perceiver demographics (i.e., race, age, gender) assessed in this study. It may be that the relationships among these traits are due in part to ubiquitous facial structure cues or due to features of the perceivers not tested here. Previous applied research on Black face-type bias describes stereotypically Black features as a combination of nose and lip width and skin tone (here reflectance); thus, this research focused on only those features. Importantly, we found that nose width, lip fullness, and reflectance had complex effects that differed by the trait being rated. A wide nose, thin lips, and the absence of reflectance were associated with higher ratings of threat. A wide nose and higher reflectance were associated with increased judgments of stereotypicality. A wide nose was the only feature substantially related to higher ratings of dominance. This suggests that a stereotypical Black face includes a wide nose and high skin reflectance, but the only feature that is consistent with dominance is a wide nose. Black stereotypicality, dominance, and threat were related to faces with a wide nose. This potentially suggests that nose width is a cue indicative of a Black face but may simultaneously cue dominance and threat. Moreover, the finding that higher skin reflectance was related to Black stereotypicality but not dominance or threat is inconsistent with other literature. If higher reflectance is a shading or texture of skin, it makes sense that this would be tied to Black stereotypicality in line with previous research (e.g., Livingston & Brewer, 2002; Maddox & Gray, 2002). However, Todorov et al. (2013) found that in addition to dominance, threat is also cued by higher reflectance, while we found the opposite.
In line with this idea, the three traits were positively correlated among raters (r = 0.34–0.55), suggesting that moderate to strong consistency (personal biases not explained by demographic differences) may influence trait judgments to a fair extent. Overall, this would suggest that even after controlling for facial features and demographics, participants agree, to a moderate to strong degree, that stereotypically Black faces are dominant and threatening. Together, these results suggest that a stereotypical face-type is a combination of a wide nose and higher reflectance and, to a lesser extent, full lips. Thus, a face is not likely to be judged as stereotypical based on full lips alone. This refines and validates previous work noting that a stereotypically Black face is some combination of a wide nose, full lips, and darker skin (e.g., Blair, 2006; Blair et al., 2004a, 2004b). The current study shows that among a diverse population of mostly non-White people, a stereotypical Black face is cued by a wide nose and higher reflectance. In addition, the relations among trait dominance, threat, and stereotypicality suggest that a wide nose, consistent for all three traits, may play a role in some Black people being judged as dominant and threatening. Compared to a person who is less stereotypically Black, with lighter reflectance and a relatively narrow nose, a stereotypically Black person is more likely to be judged as dominant and threatening, and potentially perceived negatively, by people making quick judgments. This work also suggests that for people who are not White, as in our sample, Black stereotypicality is related to threat and dominance (r = 0.34 each). Although demographic differences did not substantially influence our outcomes, we suggest that the racial makeup of our sample may be why some of our results diverge from previous work regarding reflectance, threat, and dominance (Todorov et al., 2013).
From an applied standpoint, face-type bias related to Black stereotypicality may lead to judgments of dominance, which in some circumstances is positive (e.g., boxer, military personnel) and in other circumstances less advantageous, leading to negative judgments. Together, these findings suggest, potentially, that when people see a stereotypically Black face, it may cue assessments of dominance and threat, which are consistent traits related to criminality. Thus, it may be that some aspects of the facial features tested here underpin the criminal face-type bias reported in previous research. These feature-cued effects upon ratings are important because, without contextual information, people are left to rely on hasty first impression cues to predict traits or behavior, and perceivers are likely to rate these different traits fairly similarly. While we used the demographic information available in our model, our sample of raters was primarily young Black females, and likely does not allow powerful tests of differences in ratings due to age, gender, and race. A more diverse sample would be informative and could yield not merely more generalizability, but interesting tests of differences in perceptions. However, it is noteworthy that our sample diverges from much of the previous research focused on face-type bias, which has tested trait assumptions within majority White samples (e.g., Blair, 2006; Blair et al., 2004a, 2004b; Eberhardt et al., 2006; Hagiwara et al., 2012). Although our study may be limited in generalizability, using a sample of people who may be the target of Black face-type bias is especially important. The findings here suggest that even people who are part of a minoritized group, and may themselves have encountered racial bias, are still prone to judging features representative of their racial group as dominant and threatening in some circumstances, lending support to the ubiquitous nature of biased racial judgments.
In addition, we intentionally used a small set of features on artificial faces. The facial features in the current study were specifically designed and controlled to test features considered to be stereotypically Black and/or dominant in previous applied studies. More variation on more features with more faces could also provide more information about effects upon perceptions. The current sociocultural climate suggests that there is a need for people to be more cognizant of how they perceive and interact with individuals from different groups. First impressions based on facial features can lead to face-type bias and can serve as a vehicle to perpetuate faulty expectations of behavior. Throughout the legal system, people are assessed from the time of first interview (e.g., when stopped on the street or pulled over in their vehicle) to trial and sentencing. An awareness of race-based biases in face judgment could be disseminated throughout the legal system as training for law enforcement and triers of fact, as well as become part of jury instruction to community members who serve as jurors. An awareness of biased tendencies will not stop people from having a bias but may slow knee-jerk decisions that are made prior to considering facts and evidence. Most misidentified men who were exonerated based on DNA evidence are Black (The Innocence Project, 2021), which suggests biased expectations are at work. Knowing that some Black individuals are judged as dominant and possibly threatening based on their facial structure should encourage citizens, law enforcement, and the legal system generally, to pause before making judgments that could have long-term impact.

Upon request, data are available to reviewers and can be uploaded to an appropriate website.

The Symbolic Racism 2000 Scale was collected to assess participants' views toward Black Americans. On average, participants did not have negative views (M = 13.6), and the scale was excluded from further analysis.
The measure had good reliability (α = .71).

References

Bar, M., Neta, M., & Linz, H. (2006). Very first impressions. Emotion, 6(2), 269–278. https://doi.org/10.1037/1528-3542.6.2.269
Blair, I. V. (2006). The efficient use of race and Afrocentric features in inverted faces. Social Cognition, 24(5), 563–579. https://doi.org/10.1521/soco.2006.24.5.563
Blair, I. V., Judd, C. M., & Chapleau, K. M. (2004a). The influence of Afrocentric facial features in criminal sentencing. Psychological Science, 15(10), 674–679. https://doi.org/10.1111/j.0956-7976.2004.00739.x
Blair, I. V., Judd, C. M., & Fallman, J. L. (2004b). The automaticity of race and Afrocentric facial features in social judgments. Journal of Personality and Social Psychology, 87(6), 763–778. https://doi.org/10.1037/0022-3514.87.6.763
Deregowski, J. B., Ellis, H. D., & Shepherd, J. W. (1975). Descriptions of White and Black faces by White and Black subjects. International Journal of Psychology, 10, 119–123.
Dixon, T. L. (2017). Good guys are still always in white? Positive change and continued misrepresentation of race and crime on local television news. Communication Research, 44(6), 775–792. https://doi.org/10.1177/0093650215579223
Dixon, T. L., & Azocar, C. L. (2007). Priming crime and activating Blackness: Understanding the psychological impact of the overrepresentation of Blacks as lawbreakers on television news. Journal of Communication, 57(2), 229–253. https://doi.org/10.1111/j.1460-2466.2007.00341.x
Dotsch, R., & Todorov, A. (2012). Reverse correlating social face perception. Social Psychological and Personality Science, 3(5), 562–571. https://doi.org/10.1177/1948550611430272
Eberhardt, J. L., Davies, P. G., Purdie-Vaughns, V. J., & Johnson, S. L. (2006). Looking deathworthy: Perceived stereotypicality of Black defendants predicts capital-sentencing outcomes. Psychological Science, 17(5), 383–386. https://doi.org/10.1111/j.1467-9280.2006.01716.x
Flowe, H. D., & Humphries, J. E. (2011). An examination of criminal face bias in a random sample of police lineups. Applied Cognitive Psychology, 25(2), 265–273. https://doi.org/10.1002/acp.1673
Funk, F., Walker, M., & Todorov, A. (2017). Modelling perceptions of criminality and remorse from faces using a data-driven computational approach. Cognition and Emotion, 31(7), 1431–1443. https://doi.org/10.1080/02699931.2016.1227305
Golkar, A., Björnstjerna, M., & Olsson, A. (2015). Learned fear to social out-group members are determined by ethnicity and prior exposure. Frontiers in Psychology, 6, 1–6. https://doi.org/10.3389/fpsyg.2015.00123
Hagiwara, N., Kashy, D. A., & Cesario, J. (2012). The independent effects of skin tone and facial features on Whites' affective reactions to Blacks. Journal of Experimental Social Psychology, 48(4), 892–898. https://doi.org/10.1016/j.jesp.2012.02.001
Hehman, E., Sutherland, C. A. M., Flake, J. K., & Slepian, M. L. (2017). The unique contributions of perceiver and target characteristics in person perception. Journal of Personality and Social Psychology, 113(4), 513–529. https://doi.org/10.1037/pspa0000090
Henry, P. J., & Sears, D. O. (2002). The Symbolic Racism 2000 Scale. Political Psychology, 23, 253–283. https://doi.org/10.1111/0162-895X.00281
Ito, T. A., Willadsen-Jensen, E. C., Kaye, J., & Park, B. (2011). Contextual variation in automatic evaluative bias to racially ambiguous faces. Journal of Experimental Social Psychology, 47, 818–823. https://doi.org/10.1016/j.jesp.2011.02.016
Joseph, E. (2009, February 9). The fall of the 'Barbie Bandits'. ABC News. https://abcnews.go.com/Primetime/story?id=3352813&page=1
Kaminska, O. K., Magnuski, M., Olszanowski, M., Gola, M., Brzezicka, A., & Winkielman, P. (2020). Ambiguous at the second sight: Mixed facial expressions trigger late electrophysiological responses linked to lower social impressions. Cognitive, Affective, & Behavioral Neuroscience, 20, 441–454. https://doi.org/10.3758/s13415-020-00778-5
Klatt, T., Maltby, J., Humphries, J. E., Smailes, H. L., Ryder, H., Phelps, M., & Flowe, H. D. (2016). Looking bad: Inferring criminality after 100 milliseconds. Applied Psychology in Criminal Justice, 12(2), 114–125.
Kleider, H. M., Cavrak, S. E., & Knuycky, L. R. (2012). Looking like a criminal: Stereotypical Black facial features promote face source memory error. Memory & Cognition, 40(8), 1200–1213. https://doi.org/10.3758/s13421-012-0229-x
Kleider-Offutt, H. M. (2019). Afraid of one afraid of all: When threat associations spread across face-types. Journal of General Psychology, 146(1), 93–110. https://doi.org/10.1080/00221309.2018.1540397
Kleider-Offutt, H. M., Bond, A. D., & Hegerty, S. E. A. (2017a). Black stereotypical features: When a face type can get you in trouble. Current Directions in Psychological Science, 26(1), 28–33. https://doi.org/10.1177/0963721416667916
Kleider-Offutt, H. M., Bond, A. D., Williams, S. E., & Bohil, C. J. (2018). When a face type is perceived as threatening: Using general recognition theory to understand biased categorization of Afrocentric faces. Memory & Cognition, 46(5), 716–728. https://doi.org/10.3758/s13421-018-0801-0
Kleider-Offutt, H. M., Knuycky, L. R., Clevinger, A. M., & Capodanno, M. M. (2017b). Wrongful convictions and prototypical Black features: Can a face-type facilitate misidentifications? Legal and Criminological Psychology, 22(2), 350–358. https://doi.org/10.1111/lcrp.12105
Knuycky, L. R., Kleider, H. M., & Cavrak, S. E. (2014). Line-up misidentifications: When being "prototypically Black" is perceived as criminal. Applied Cognitive Psychology, 28(1), 39–46. https://doi.org/10.1002/acp.2954
Livingston, R. W., & Brewer, M. B. (2002). What are we really priming? Cue-based versus category-based processing of facial stimuli. Journal of Personality and Social Psychology, 82, 5–18. https://doi.org/10.1037/0022-3514.82.1.5
MacLin, M. K., & Herrera, V. (2006). The criminal stereotype. North American Journal of Psychology, 8(2), 197–208.
MacLin, O. H., & MacLin, M. K. (2004). The effect of criminality on face attractiveness, typicality, memorability and recognition. North American Journal of Psychology, 6(1), 145–154.
Maddox, K. B., & Gray, S. A. (2002). Cognitive representations of Black Americans: Reexploring the role of skin tone. Personality and Social Psychology Bulletin, 28, 250–259. https://doi.org/10.1177/0146167202282010
Mazur, A., Mazur, J., & Keating, C. (1984). Military rank attainment of a West Point class: Effects of cadets' physical features. American Journal of Sociology, 90(1), 125–150. https://doi.org/10.1086/228050
Mueller, U., & Mazur, A. (1998). Reproductive constraints on dominance competition in male Homo sapiens. Evolution and Human Behavior, 19(6), 387–396. https://doi.org/10.1016/S1090-5138(98)00032-4
Muthén, L. K., & Muthén, B. O. (1998–2017). Mplus user's guide (8th ed.). Muthén & Muthén.
Olsson, A., Ebert, J. P., Banaji, M. R., & Phelps, E. A. (2005). The role of social groups in the persistence of learned fear. Science, 309(5735), 785–787. https://doi.org/10.1126/science.1113551
Oosterhof, N. N., & Todorov, A. (2008). The functional basis of face evaluation. PNAS, 105(32), 11087–11092. https://doi.org/10.1073/pnas.0805664105
Porter, S., ten Brinke, L., & Gustaw, C. (2010). Dangerous decisions: The impact of first impressions of trustworthiness on the evaluation of legal evidence and defendant culpability. Psychology, Crime & Law, 16(6), 477–491. https://doi.org/10.1080/10683160902926141
Rayne, N. (2016, March 9). 'Hot Convict' Jeremy Meeks released from prison: And he's coming home to a modeling contract. People. https://people.com/crime/hot-convict-jeremy-meeks-released-from-prison/
Stepanova, E. V., & Strube, M. J. (2009). Making of a face: Role of facial physiognomy, skin tone, and color presentation mode in evaluations of racial typicality. The Journal of Social Psychology, 149, 66–81. https://doi.org/10.3200/SOCP.149.1.66-81
Strom, M. A., Zebrowitz, L. A., Zhang, S., Bronstad, P. M., & Lee, H. K. (2012). Skin and bones: The contribution of skin tone and facial structure to racial prototypicality ratings. PLoS ONE. https://doi.org/10.1371/journal.pone.0041193
Todorov, A. (2011). Evaluating faces on social dimensions. In A. Todorov, S. T. Fiske, & D. A. Prentice (Eds.), Social neuroscience: Toward understanding the underpinnings of the social mind (pp. 54–76). Oxford University Press. https://doi.org/10.1093/acprof:oso/9780195316872.003.0004
Todorov, A., Baron, S. G., & Oosterhof, N. N. (2008). Evaluating face trustworthiness: A model-based approach. Social Cognitive and Affective Neuroscience, 3(2), 119–127. https://doi.org/10.1093/scan/nsn009
Todorov, A., Dotsch, R., Porter, J. M., Oosterhof, N. N., & Falvello, V. B. (2013). Validation of data-driven computational models of social perception of faces. Emotion, 13(4), 724–738. https://doi.org/10.1037/a0032335
Todorov, A., Mandisodza, A. N., Goren, A., & Hall, C. C. (2005). Inferences of competence from faces predict election outcomes. Science, 308(5728), 1623–1626. https://doi.org/10.1126/science.1110589
Toscano, H., Schubert, T. W., Dotsch, R., Falvello, V., & Todorov, A. (2016). Physical strength as a cue to dominance: A data-driven approach. Personality and Social Psychology Bulletin, 42(12), 1603–1616. https://doi.org/10.1177/0146167216666266
Walker, M., & Vetter, T. (2009). Portraits made to measure: Manipulating social judgments about individuals with a statistical face model. Journal of Vision. https://doi.org/10.1167/9.11.12
Willadsen-Jensen, E. C., & Ito, T. A. (2008). A foot in both worlds: Asian Americans' perceptions of Asian, White, and racially ambiguous faces. Group Processes & Intergroup Relations, 11(2), 182–200. https://doi.org/10.1177/1368430207088037
Willis, J., & Todorov, A. (2006). First impressions: Making up your mind after a 100-ms exposure to a face. Psychological Science, 17(7), 592–598. https://doi.org/10.1111/j.1467-9280.2006.01750.x
Zebrowitz, L. A. (2004). The origin of first impressions. Journal of Cultural and Evolutionary Psychology, 2(1–2), 93–108. https://doi.org/10.1556/JCEP.2.2004.1-2.6
Zebrowitz, L. A., Wadlinger, H. A., Luevano, V. X., White, B. M., Xing, C., & Zhang, Y. (2011). Animal analogies in first impressions of faces. Social Cognition, 29(4), 486–496. https://doi.org/10.1521/soco.2011.29.4.486

This project was not funded.

Department of Psychology, Georgia State University, Atlanta, GA, 30030, USA: Heather Kleider-Offutt, Ashley M. Meacham, Lee Branum-Martin & Megan Capodanno

HK-O conceived of the research idea and questions, contributed to the experimental design, and consulted on and reviewed the analysis. She wrote the introduction and discussion sections and edited the entire document. AM contributed to the experimental design, created stimuli, collected data, conducted the analysis, wrote the methods section, and edited the entire document. LB-M consulted on and contributed to the analysis, assisted with writing the results section, created the figures, and edited the entire document. MC assisted with designing the data collection platform, pulling and cleaning data from the software, assisting with the analysis, and editing the entire document. All authors read and approved the final manuscript.

Correspondence to Heather Kleider-Offutt.

Ethics approval and consent to participate: This study was approved by the Georgia State IRB and all participants consented to participate. All authors agree to have this manuscript published.
Full list of traits assessed (1 (not at all) to 7 (extremely)): How ________ was the previous face?
- Threatening
- Stereotypically Black
- Stereotypically White
- Criminal-looking
- Baby-faced

Full list of applied questions assessed (1 (not at all likely) to 7 (extremely likely)): How likely is it that you would ________?
- Sit next to this person on the bus?
- Share an Uber with this person?
- Talk to this person at a party?
- Trust this person with your money?
- Vote for this person in an election?
- Trust this person to deliver a valuable package?

Cross-classified model equations

The cross-classified model for $i$ ratings given by $j$ raters across $k$ faces upon the three traits $t$ was specified as:

$$\mathbf{Level\ 1\ (within)}:\; Y_{ijkt} = \beta_{1jt} + \beta_{2kt} + e_{ijkt}$$

$Y_{ijkt}$ is the rating $i$ given by person $j$ for face $k$ upon trait $t$. $\beta_{1jt}$ is the main effect of person $j$ upon trait $t$. $\beta_{2kt}$ is the main effect of face $k$ upon trait $t$. $e_{ijkt}$ is random error, distributed normally with free covariance across the three traits (i.e., each with mean zero and freely estimated variance and covariances).

$$\mathbf{Level}\ j\ \mathbf{(raters)}:\; \beta_{1jt} = \gamma_{10t} + \gamma_{11t} White_{j} + \gamma_{12t} Asian_{j} + \gamma_{13t} Hisp_{j} + \gamma_{14t} Old_{j} + \gamma_{15t} NonFemale_{j} + u_{1jt}$$

$\gamma_{10t}$ is the grand mean of trait $t$. $\gamma_{11t} White_{j}$ is the fixed effect of White (relative to Black raters) upon trait $t$. $\gamma_{12t} Asian_{j}$ is the fixed effect of Asian (relative to Black raters) upon trait $t$. $\gamma_{13t} Hisp_{j}$ is the fixed effect of Hispanic/Latinx (relative to Black raters) upon trait $t$. $\gamma_{14t} Old_{j}$ is the fixed effect of being older than 21 years (i.e., dummy coded) upon trait $t$. $\gamma_{15t} NonFemale_{j}$ is the fixed effect of being non-female (male, non-binary, prefer not to respond) upon trait $t$.
$u_{1jt}$ is random error for rater $j$, distributed normally with free covariance across the three traits (i.e., each with mean zero and freely estimated variance and covariances).

$$\begin{aligned} \mathbf{Level}\ k\ \mathbf{(faces)}:\; \beta_{2kt} & = \gamma_{21t} NT_{k} + \gamma_{22t} NW_{k} + \gamma_{23t} LT_{k} + \gamma_{24t} LF_{k} + \gamma_{25t} RN_{k} + \gamma_{26t} RH_{k} \\ & \quad + \gamma_{27t} NT_{k} \ast LT_{k} + \gamma_{28t} NT_{k} \ast LF_{k} + \gamma_{29t} NT_{k} \ast RN_{k} + \gamma_{30t} NT_{k} \ast RH_{k} \\ & \quad + \gamma_{31t} NW_{k} \ast LT_{k} + \gamma_{32t} NW_{k} \ast LF_{k} + \gamma_{33t} NW_{k} \ast RN_{k} + \gamma_{34t} NW_{k} \ast RH_{k} \\ & \quad + \gamma_{35t} LT_{k} \ast RN_{k} + \gamma_{36t} LT_{k} \ast RH_{k} + \gamma_{37t} LF_{k} \ast RN_{k} + \gamma_{38t} LF_{k} \ast RH_{k} + u_{2kt} \\ \end{aligned}$$

where each of the three facial features is dummy coded relative to their midpoint (zero): nose (thin, wide: NT, NW), lips (thin, full: LT, LF), and reflectance (no, high: RN, RH).

$\gamma_{21t}$ & $\gamma_{22t}$ are the fixed effects of thin and wide nose (relative to average nose) upon trait $t$. $\gamma_{23t}$ & $\gamma_{24t}$ are the fixed effects of thin and full lips (relative to average lips) upon trait $t$. $\gamma_{25t}$ & $\gamma_{26t}$ are the fixed effects of no and high reflectance (relative to average reflectance) upon trait $t$. $\gamma_{27t}$–$\gamma_{38t}$ are the fixed effects of all two-way interactions of respective facial features upon trait $t$. $u_{2kt}$ is random error for face $k$, distributed normally with free covariance across the three traits (i.e., each with mean zero and freely estimated variance and covariances).

Two-level model equations

Because most of the variance components for faces ($u_{2kt}$) in the cross-classified model estimated close to zero, we fit the model as a two-level model of $i$ ratings within $j$ raters for the three traits $t$.
Because this is a simple restriction of the cross-classified model (no variance components for $k$ faces), we keep the same notation but drop the $k$ subscripts:

$$\mathbf{Level\ 1\ (within)}:\; Y_{ijt} = \beta_{1jt} + e_{ijt}$$

$Y_{ijt}$ is the rating $i$ given by person $j$ upon trait $t$. $e_{ijt}$ is random error, distributed normally with free covariance across the three traits (i.e., each with mean zero and freely estimated variance and covariances).

$$\begin{aligned} \mathbf{Level\ 2\ (raters)}:\; \beta_{1jt} & = \gamma_{10t} + \gamma_{11t} White_{j} + \gamma_{12t} Asian_{j} + \gamma_{13t} Hisp_{j} + \gamma_{14t} Old_{j} + \gamma_{15t} NonFemale_{j} \\ & \quad + \gamma_{21t} NT + \gamma_{22t} NW + \gamma_{23t} LT + \gamma_{24t} LF + \gamma_{25t} RN + \gamma_{26t} RH \\ & \quad + \gamma_{27t} NT \ast LT + \gamma_{28t} NT \ast LF + \gamma_{29t} NT \ast RN + \gamma_{30t} NT \ast RH \\ & \quad + \gamma_{31t} NW \ast LT + \gamma_{32t} NW \ast LF + \gamma_{33t} NW \ast RN + \gamma_{34t} NW \ast RH \\ & \quad + \gamma_{35t} LT \ast RN + \gamma_{36t} LT \ast RH + \gamma_{37t} LF \ast RN + \gamma_{38t} LF \ast RH + u_{1jt} \\ \end{aligned}$$

where all parameters are the same as those provided for the cross-classified model; only the random effects for faces ($u_{2kt}$) are excluded (i.e., restricted to zero).

See Tables 2, 3 and 4.

Table 2 Parameter estimates for level 1 of the two-level model
Table 3 Parameter estimates for level 2 (raters) of the two-level model
Table 4 Descriptive statistics (zero-order correlations)

Kleider-Offutt, H., Meacham, A. M., Branum-Martin, L., et al. What's in a face? The role of facial features in ratings of dominance, threat, and stereotypicality. Cogn. Research 6, 53 (2021). https://doi.org/10.1186/s41235-021-00319-9

Accepted: 20 July 2021

Systemic Racism: Cognitive Consequences and Interventions
Worked Examples for Work

January 19, 2020 by Mini Physics

These worked examples will help to solidify your understanding towards the concept of work and work done.

Worked Example 1: Pushing A Box

A boy pushes a box across a rough horizontal floor. He exerts 5 N to move it by 2 m. What is the work done by the boy?

$$\begin{aligned} \text{Work done by boy} &= f \times d \\ &= 5 \, \text{N} \times 2 \, \text{m} \\ &= 10 \, \text{J} \end{aligned}$$

Worked Example 2: Pulling A Wagon

A boy pulls a 1 kg toy wagon along a smooth (i.e. no friction) horizontal floor over a distance of 5 m. If the speed of the wagon increases at a constant rate of $2 \, \text{m s}^{-2}$, what is the work done by the boy?

The net resultant force on the toy wagon can be given by Newton's Second Law: $f_{\text{resultant}} = ma$. Since the floor is smooth (no friction), the net resultant force on the toy wagon is just the force exerted by the boy. Hence,

$$\begin{aligned} \text{Net Resultant Force} &= \text{Force exerted by boy} \\ \text{Force exerted by boy} &= m \times a \\ &= 1 \, \text{kg} \times 2 \, \text{m s}^{-2} \\ &= 2 \, \text{N} \end{aligned}$$

The work done by the boy will just be:

$$\begin{aligned} \text{Work done by boy on wagon} &= \text{force} \times \text{dist. moved in direction of force} \\ &= 2 \, \text{N} \times 5 \, \text{m} \\ &= 10 \, \text{J} \end{aligned}$$

Worked Example 3: Raising A Box

A 5 kg box is raised 50 m from its original position. What is the gain in gravitational potential energy? Assume gravitational field strength = $10 \, \text{N kg}^{-1}$.
How much work is needed to raise the box to its new position? Note: Assume that no energy is lost to air resistance.

$$\begin{aligned} E_{p} &= mgh \\ &= 5 \, \text{kg} \times 10 \, \text{N kg}^{-1} \times 50 \, \text{m} \\ &= 2500 \, \text{J} \end{aligned}$$

Since no energy is lost to air resistance, the work needed to raise the box is equal to its gain in gravitational potential energy, i.e. $2500 \, \text{J}$.

Worked Example 4: Dropping A Ball

A ball with a mass of 500 g is dropped from a height of 10 m from the ground level (reference level). What is its initial gravitational potential energy? Determine its velocity just before it hits the ground.

Part a: Taking the ground level to be the reference level, the relevant formula for gravitational potential energy is:

$$\begin{aligned} E_{p} &= mgh \\ &= \left( 0.5 \, \text{kg} \right) \left( 10 \, \text{N kg}^{-1} \right) \left( 10 \, \text{m} \right) \\ &= 50 \, \text{J} \end{aligned}$$

Part b: From the Principle of Conservation of Energy, the initial gravitational potential energy will be converted to its final kinetic energy. (I.e. gain in kinetic energy = loss in gravitational potential energy.) For kinetic energy, the relevant formula is:

$$\begin{aligned} E_{k} &= \frac{1}{2} m v^{2} \\ 50 \, \text{J} &= \frac{1}{2} \left( 0.5 \, \text{kg} \right) v^{2} \\ v &= 14.1 \, \text{m s}^{-1} \end{aligned}$$

Worked Example 5: Dropping A Ball With The Effects of Air Resistance

A ball with a mass of 500 g is dropped from a height of 10 m from the ground level (reference level). There is an energy loss of $10 \, \text{J}$ due to air resistance. Determine its velocity just before it hits the ground.

You can obtain the gravitational potential energy of the ball from Worked Example 4, which is $50 \, \text{J}$. By the Principle of Conservation of Energy, the initial gravitational potential energy will be converted to final kinetic energy and energy lost due to air resistance.
Hence, we have:

$$\begin{aligned} E_{\text{initial}} &= E_{\text{final kinetic energy}} + \text{Energy Lost} \\ 50 \, \text{J} &= E_{\text{final kinetic energy}} + 10 \, \text{J} \\ E_{\text{final kinetic energy}} &= 50 \, \text{J}-10 \, \text{J} \\ &= 40 \, \text{J} \end{aligned}$$

Using the formula for kinetic energy, the velocity is:

$$\begin{aligned} E_{\text{final kinetic energy}} &= \frac{1}{2}mv^{2} \\ 40 \, \text{J} &= \frac{1}{2} \left( 0.5 \, \text{kg} \right) v^{2} \\ v &= 12.6 \, \text{m s}^{-1} \end{aligned}$$

Worked Example 6: The Ball Rebounds

A ball with a mass of 500 g is thrown vertically downwards from a height of 10 m with a velocity of $5.0 \, \text{m s}^{-1}$. It hits the ground and bounces. It then rises to a maximum height of 8.0 m. Determine the energy loss due to the bounce. Assume that no energy is lost due to air resistance.

From the Principle of Conservation of Energy, the initial kinetic energy and initial gravitational potential energy is equal to the final kinetic energy + final gravitational potential energy + energy lost due to the bounce. Let's put this into an equation form:

$$\begin{aligned} E_{\text{k, initial}} + E_{\text{p, initial}} &= E_{\text{k, final}} + E_{\text{p, final}} + \text{Energy lost} \\ \frac{1}{2}mv^{2} + mgh_{\text{initial}} &= 0 + mgh_{\text{final}} + \text{Energy lost} \\ \frac{1}{2}\left( 0.5 \right) \left( 5^{2} \right) + \left( 0.5 \right) \left( 10 \right) \left( 10 \right) &= \left( 0.5 \right) \left( 10 \right) \left( 8 \right) + \text{Energy lost} \\ 56.25 \, \text{J} &= 40 \, \text{J} + \text{Energy lost} \\ \text{Energy lost} &= 16.25 \, \text{J} \\ &= 16.3 \, \text{J} \, \left(3 \, \text{s.f.} \right) \end{aligned}$$

Worked Example 7: Work Done By Car Engine

The engine of a car exerts a constant force of 10 kN. The car is able to accelerate constantly from rest to a speed of $30 \, \text{m s}^{-1}$ in 10 seconds.
Determine the work done by the engine of the car and the kinetic energy of the car at the end of the 10 seconds.

From what you have learnt in Kinematics,

$$\begin{aligned} v &= u + at \\ a &= \frac{\left( 30-0 \right)}{10} \\ &= 3 \, \text{m s}^{-2} \end{aligned}$$

With the given engine force and calculated acceleration, we can find the mass of the car by employing the use of Newton's Second Law. (Note: The engine force is the net resultant force acting on the car. Can you see why?)

$$\begin{aligned} F &= ma \\ m &= \frac{F}{a} \\ &= \frac{10000}{3} \\ &= 3330 \, \text{kg} \end{aligned}$$

The distance travelled by the car in 10 seconds is given by:

$$\begin{aligned} d &= \frac{1}{2} \left( v + u \right) \times t \\ &= 150 \, \text{m} \end{aligned}$$

The work done by the engine of the car will be:

$$\begin{aligned} W &= F \times d \\ &= 10000 \, \text{N} \times 150 \, \text{m} \\ &= 1 \, 500 \, 000 \, \text{J} \\ &= 1.5 \times 10^{6} \, \text{J} \end{aligned}$$

The kinetic energy of the car at the end of 10 seconds will be given by:

$$\begin{aligned} E_{k} &= \frac{1}{2} mv^{2} \\ &= \frac{1}{2} \left( 3330 \, \text{kg} \right) \left( 30 \, \text{m s}^{-1} \right)^{2} \\ &= 1.5 \times 10^{6} \, \text{J} \end{aligned}$$

We notice that the work done by the engine of the car is the same as the kinetic energy of the car. This shows that all the work done by the engine of the car is converted into the kinetic energy of the car (which is true if there are no dissipative forces acting on the car).
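As a quick sanity check, the arithmetic in Worked Example 7 can be reproduced in a few lines of Python (a sketch; the values are the ones assumed in the example, and the variable names are ours):

```python
# Worked Example 7: constant engine force accelerating a car from rest.
F = 10_000.0              # engine force in N (net force, since friction is ignored)
u, v, t = 0.0, 30.0, 10.0  # initial speed, final speed (m/s), time (s)

a = (v - u) / t            # acceleration from v = u + at: 3 m/s^2
m = F / a                  # mass from Newton's second law: about 3330 kg
d = 0.5 * (u + v) * t      # distance covered: 150 m

work_done = F * d                   # work done by the engine
kinetic_energy = 0.5 * m * v**2     # kinetic energy after 10 s

# With no dissipative forces, all the work becomes kinetic energy,
# so both values come out at about 1.5e6 J.
print(work_done, kinetic_energy)
```

Running this confirms that the work done by the engine matches the final kinetic energy, as the worked example argues.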
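The energy bookkeeping in Worked Examples 4 to 6 can likewise be checked numerically (a sketch using the values assumed in those examples; variable names are ours):

```python
import math

m = 0.5   # ball mass in kg (500 g)
g = 10.0  # gravitational field strength in N/kg
h = 10.0  # drop height in m

# Worked Example 4: all potential energy becomes kinetic energy.
ep = m * g * h              # 50 J
v4 = math.sqrt(2 * ep / m)  # about 14.1 m/s

# Worked Example 5: 10 J is lost to air resistance on the way down.
ek5 = ep - 10.0              # 40 J
v5 = math.sqrt(2 * ek5 / m)  # about 12.6 m/s

# Worked Example 6: thrown down at 5 m/s, rebounds to a height of 8 m.
e_before = 0.5 * m * 5.0**2 + m * g * h  # 56.25 J
e_after = m * g * 8.0                    # 40 J at the top of the rebound
e_lost = e_before - e_after              # 16.25 J lost in the bounce

print(round(v4, 1), round(v5, 1), e_lost)
```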
\begin{document} \title{Global regularity for a slightly supercritical hyperdissipative Navier--Stokes system} \begin{abstract} We prove global existence of smooth solutions for a slightly supercritical hyperdissipative Navier--Stokes under the optimal condition on the correction to the dissipation. This proves a conjecture formulated by Tao \cite{Tao2009}. \end{abstract} \section{Introduction} Let $d\geq 3$ and consider the generalized Navier--Stokes system \begin{equation}\label{e:gnse} \begin{cases} \frac{\partial u}{\partial t} + (u\cdot\nabla)u + \nabla p + D_0^2 u = 0,\\ \nabla\cdot u = 0,\\ \int_{[0,2\pi]^d} u(t,x)\,dx = 0, \end{cases} \end{equation} on $[0,2\pi]^d$ with periodic boundary conditions, where $D_0$ is a Fourier multiplier with non--negative symbol $m$. The Navier--Stokes system is recovered when $m(k) = |k|$. If \begin{equation}\label{e:asstao1} m(k) \geq c\frac{|k|^{\frac{d+2}4}}{G(|k|)}, \end{equation} where $G:[0,\infty)\to[0,\infty)$ is a non--decreasing function such that \begin{gather} \int_1^\infty\frac{ds}{s G(s)^4} =\infty, \label{e:asstao2}\\ \frac{G(x)}{|x|^{\frac{d+2}4}} \text{ eventually non--increasing}, \label{e:asstao3} \end{gather} then in \cite{Tao2009} it is proved\footnote{The proof of the result of \cite{Tao2009} is given in $\mathbb{R}^d$, but it can be easily extended to the periodic setting, see \cite[Remark 2.1]{Tao2009}.} that \eqref{e:gnse} has a global smooth solution for every smooth initial condition. The result has been later extended to the two dimensional case by \cite{KatTap2012}. A heuristic argument developed in \cite{Tao2009} and based on the comparison between the speed of propagation of a (possible) blow--up and the rate of dissipation suggests that regularity should still hold under the weaker condition \begin{equation}\label{e:assbmr} \int_1^\infty\frac{ds}{s G(s)^2} =\infty. \end{equation} The main result of this paper, contained in the following theorem, is a complete proof of this conjecture. 
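Before stating the theorem, it is perhaps instructive to record a concrete example separating the two conditions: the choice
\[
G(s) = \sqrt{\log s} \quad\text{for } s\geq e,
\]
extended by the constant $1$ for $s<e$, satisfies \eqref{e:asstao3} and \eqref{e:assbmr} but not \eqref{e:asstao2}, since
\[
\int_e^\infty\frac{ds}{s\,G(s)^4} = \int_e^\infty\frac{ds}{s(\log s)^2} < \infty,
\qquad\text{while}\qquad
\int_e^\infty\frac{ds}{s\,G(s)^2} = \int_e^\infty\frac{ds}{s\log s} = \infty.
\]
Thus logarithmic corrections of this size are covered by the result below but are out of reach of condition \eqref{e:asstao2}.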
\begin{theorem}\label{t:main1} Let $d\geq 2$ and assume \eqref{e:asstao1}, \eqref{e:asstao3} and \eqref{e:assbmr} for a non--decreasing function $G:[0,\infty)\to[0,\infty)$. Then \eqref{e:gnse} has a global smooth solution for every smooth initial condition. \end{theorem} A simple version of this conjecture, reformulated on a toy model, was proved for the dyadic model in \cite{BarMorRom2014}. Actually, for that model one could prove regularity in the full supercritical regime, with $m(k)=|k|$, as was done in~\cite{BarMorRom2011}, but it was natural to develop there some of the main ideas on which this paper is also based. In fact here we prove that the equations for the velocity can be reduced to a suitable dyadic--like model, though with infinitely many interactions. A more sophisticated version of the arguments of \cite{BarMorRom2014} ensures regularity of this dyadic model and, in turn, of the solution of the problem \eqref{e:gnse} above. Our technique for proving Theorem~\ref{t:main1} is flexible enough to include an additional critical parameter. Consider the following generalized Leray $\alpha$--model, \begin{equation}\label{e:glanse} \begin{cases} \frac{\partial v}{\partial t} + (u\cdot\nabla)v + \nabla p + D_1 v = 0,\\ v = D_2 u,\\ \nabla\cdot v = 0,\\ \int_{[0,2\pi]^d} v(t,x)\,dx = \int_{[0,2\pi]^d} u(t,x)\,dx = 0, \end{cases} \end{equation} where $D_1$ and $D_2$ are Fourier multipliers with non--negative symbols $m_1$ and $m_2$. \begin{theorem}\label{t:main2} Let $d\geq2$, $\alpha,\beta\geq0$, and assume \[ m_1(k) \geq c\frac{|k|^\alpha}{g(|k|)}, \qquad m_2(k) \geq c|k|^\beta, \qquad \alpha+\beta\geq\frac{d+2}2, \] where $g:[0,\infty)\to[0,\infty)$ is a non--decreasing function such that $x^{-\alpha}g(x)$ is eventually non--increasing, and \begin{equation}\label{e:assbmrtrue} \int_1^\infty\frac{ds}{s g(s)} = \infty. \end{equation} Then \eqref{e:glanse} has a global smooth solution for every smooth initial condition.
\end{theorem} Under the assumptions of Theorem~\ref{t:main1}, if $\beta=0$, $\alpha=\frac{d+2}{2}$, $g(x) = G(x)^2$, $m_2(k)=1$, and $m_1(k)=m(k)^2$, then the assumptions of Theorem~\ref{t:main2} are met. Therefore Theorem~\ref{t:main1} follows immediately from Theorem~\ref{t:main2}, and it is sufficient to prove only the second result. Our results hold as well when the problems are considered in $\mathbb{R}^d$, since in our method large scales play no significant role (see Remark~\ref{r:erredi}). The model~\eqref{e:glanse} with $g\equiv1$ was introduced by Olson and Titi in~\cite{OlsTit2007}. They proposed the idea that a weaker non-linearity and a stronger viscous dissipation could work together to yield regularity. Their statement, however, uses the stronger hypothesis $\alpha+\frac{\beta}{2}\geq\frac{d+2}{2}$; this result was later logarithmically improved in~\cite{Yam2012} under condition~\eqref{e:asstao2}. Our results are also relevant in view of the analysis in \cite{Tao2014} (see Remark 5.2 therein), since they confirm that the condition \eqref{e:assbmrtrue} is optimal, when general non--linear terms with the same scaling are considered. The proof of the above theorem is based on two crucial ideas. The first idea is that smoothness of \eqref{e:glanse} can be reduced to the smoothness of a suitable shell model, obtained by averaging the energy of a solution of \eqref{e:glanse} over dyadic shells in Fourier space. We believe that this reduction may be interesting beyond the scope of this paper. The second idea is that the overall contribution of energy and dissipation over large shells satisfies a recursive inequality. Under condition \eqref{e:assbmrtrue} dissipation significantly damps the flow of energy towards small scales and ensures smoothness. This is a more sophisticated version of the result obtained in \cite{BarMorRom2014}, due to the larger number of interactions between shells. The paper is organized as follows.
In Section~\ref{s:reduction} we derive the \emph{shell approximation} of a solution of \eqref{e:glanse}. The recursive formula is obtained in Section~\ref{s:recursion}. In Section~\ref{s:cascade} we deduce exponential decay of the shell modes from the recursive formula. Appendix~\ref{s:localexuniq} contains, for the sake of completeness, a standard existence and uniqueness result. \section{From the generalized Fourier Navier--Stokes to the dyadic equation} \label{s:reduction} This section contains one of the crucial steps in our approach. We show that the proof of Theorem~\ref{t:main2} can be reduced to a proof of decay of solutions of a suitable shell model. For simplicity, and without loss of generality, from now on we assume that \[ m_1(k) = \frac{|k|^\alpha}{g(|k|)}, \qquad m_2(k) \geq|k|^\beta. \] \subsection{The shell approximation} The dynamics of our generalized version of the Navier--Stokes equations in Fourier variables reads \begin{equation}\label{e:fourier_ns} \begin{cases} \displaystyle v_k'=-\frac{|k|^\alpha}{g(|k|)}v_k-i\sum_{h\in\zds}\frac{\scalar{v_h,k}}{|h|^\beta}P_k(v_{k-h}),\\ \scalar{v_k,k}=0,\\ v_{-k}=\overline{v_k}, \end{cases} \qquad k\in\zds \end{equation} where $P_k(w):=w-\frac{\scalar{w,k}}{|k|^2}k$ and $v_0 = 0$. A solution is a family $(v_k)_{k\in\zds}$ where each $v_k=v_k(t)$ is a differentiable map from $[0,\infty)$ to $\mathbb{C}^d$ satisfying~\eqref{e:fourier_ns} for all times. As is common in Littlewood-Paley theory, let $\Phi:[0,\infty)\to[0,1]$ be a smooth function such that $\Phi\equiv1$ on $[0,1]$, $\Phi\equiv0$ on $[2,\infty)$ and $\Phi$ is strictly decreasing on $[1,2]$. For $x\geq0$, let $\psi(x):=\Phi(x)-\Phi(2x)$, so that $\psi$ is a smooth bump function supported on $(\frac12,2)$ satisfying \[ \sum_{n=0}^\infty\psi(x/2^n) =1-\Phi(2x) \equiv1 ,\qquad x\geq1. \] Notice that it is elementary to show that $\sqrt{\psi}$ is Lipschitz continuous. Let $\natu$ denote the set of non-negative integers.
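For readers who wish to experiment, here is a small numerical sanity check of the telescoping identity above. The specific construction of $\Phi$ from the smooth step $t\mapsto e^{-1/t}$ is our own choice of an admissible cutoff, not one fixed by the paper:

```python
import math

# One admissible cutoff Phi: smooth, equal to 1 on [0,1], 0 on [2,oo),
# strictly decreasing on [1,2], built from the standard step f(t) = exp(-1/t).
def f(t):
    return math.exp(-1.0 / t) if t > 0 else 0.0

def Phi(x):
    a, b = f(2.0 - x), f(x - 1.0)
    return a / (a + b)  # a + b > 0 for every x >= 0

def psi(x):
    return Phi(x) - Phi(2.0 * x)  # bump supported in (1/2, 2)

# psi(x/2^n) = Phi(x/2^n) - Phi(x/2^(n-1)) telescopes, so for x >= 1 the
# series sum_{n>=0} psi(x/2^n) equals 1 - Phi(2x) = 1.
for x in [1.0, 1.7, 4.0, 123.456, 1e5]:
    s = sum(psi(x / 2**n) for n in range(80))
    assert abs(s - 1.0) < 1e-9
```

Truncating at $n=80$ suffices here, since $\Phi(x/2^{80})=1$ for all the sample points; for $0\le x<1$ the same sum returns $1-\Phi(2x)$ instead, consistently with the displayed identity.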
For all $n\in\natu$ we introduce the radial maps $\psi_n:\mathbb{R}^d\to[0,1]$ defined by $\psi_n(x)=\psi(2^{-n}|x|)$. Notice that \[ \sum_{n\in\natu}\psi_n(x) \equiv1 ,\qquad x\in\zds. \] In Littlewood-Paley theory one typically defines $\psi_n$ for all $n\in\mathbb{Z}$, introduces objects like \[ P_n(x):=\sum_{k\in\mathbb{Z}^d}\psi_n(k)v_ke^{i\scalar{k,x}} \] and then proves that $u=\sum_nP_n$. Since these $P_n$ are not orthogonal\footnote{They are in fact \emph{almost orthogonal} in the sense that $\scalar{P_n,P_m}_{L^2}=0$ whenever $|m-n|\geq2$.} this does not give a nice decomposition of energy, as \[ \sum_{n\in\mathbb{Z}}\|P_n\|_{L^2}^2 \neq\sum_{k\in\mathbb{Z}^d}|v_k|^2 =\|u\|_{L^2}^2. \] Thus instead of $P_n(x)$ we introduce a sort of \emph{square-averaged} Littlewood-Paley decomposition. Let \begin{equation}\label{e:xn_def} X_n(t) :=\left(\sum_{k\in\mathbb{Z}^d}\psi_n(k)|v_k(t)|^2\right)^{1/2} ,\qquad n\in\natu,\quad t\geq0 \end{equation} Then clearly \[ \sum_{n\in\natu}X_n^2 =\sum_{k\in\mathbb{Z}^d}|v_k|^2 =\|u\|_{L^2}^2. \] \begin{remark} One major difference with respect to the usual Littlewood-Paley theory is that it is impossible to recover $v$ from these $X_n$ (as was possible with the components $P_n(x)$), since they are averaged both in the physical space and over one shell of the frequency space. \end{remark} We will denote by $H^\gamma$ the Hilbert--Sobolev space of periodic functions with differentiation index $\gamma$, namely \begin{equation}\label{e:sobolev} H^\gamma = \{v = (v_k)_{k\in\mathbb{Z}^d}: \sum (1+|k|^2)^\gamma|v_k|^2<\infty\}. \end{equation} \begin{definition} If~\eqref{e:xn_def} holds, we say that $X=(X_n(t))_{n\in\natu,t\geq0}$ is the \emph{shell approximation} of $v$. \end{definition} If $v\in H^\gamma$ and $X$ is its shell approximation, then \[ \sum_n 2^{2\gamma n}X_n^2 = \sum_k\Bigl(\sum_n 2^{2\gamma n}\psi_n(k)\Bigr)|v_k|^2 \approx\sum_k |k|^{2\gamma}|v_k|^2 = \|v\|_{H^\gamma}^2.
\] Hence, $v(t)\in C^\infty$ if and only if $\sup_n 2^{\gamma n}X_n<\infty$ for every $\gamma>0$. In view of Theorem~\ref{t:localexuniq} (see page \pageref{t:localexuniq}), Theorem~\ref{t:main2} follows if we can prove the following result. \begin{theorem}\label{t:main2bis} Under the same assumptions as Theorem~\ref{t:main2}, let $v(0)$ be smooth and periodic, and let $m\geq2+\frac{d}2$. If $v$ is a solution of \eqref{e:glanse} in $H^m$ on its maximal interval of existence $[0,T_\star)$, $X$ is its shell approximation and \[ \sup_{[0,T_\star)}\sum 2^{2mn}X_n^2 <\infty, \] then $T_\star=\infty$. \end{theorem} \subsection{The shell solution} We want to write a system of equations for the shell approximation of a solution of \eqref{e:glanse}. We give a more formal connection between \eqref{e:glanse} and its shell equation because we believe the notion will prove useful beyond the scope of the present work. Define the set $I$ as follows, \begin{equation}\label{e:def_I} I:=\left\{(l,m,n)\in\natu^3:\begin{array}l \text{the difference between the two largest}\\ \text{integers among $l$, $m$ and $n$ is at most 2} \end{array} \right\}. \end{equation} We are now ready to introduce the shell model ODE for the energy of each shell, equation~\eqref{e:shell_ns}. \begin{definition}[shell solution]\label{d:shell_solution} Let $X=(X_n)_{n\in\natu}$ be a sequence of real valued maps, $X_n:[0,\infty)\to\mathbb{R}$.
We say that $X$ is a \emph{shell solution} if there are two families of real valued maps $\chi=(\chi_n)_{n\in\natu}$ and $\phi=(\phi_{(l,m,n)})_{(l,m,n)\in I}$ such that \begin{equation}\label{e:shell_ns} \frac d{dt}X_n^2(t) =-\chi_n(t)X_n^2(t)+\sum_{\substack{l,m\in\natu\\(l,m,n)\in I}}\phi_{(l,m,n)}(t)X_l(t)X_m(t)X_n(t), \end{equation} for all $n\in\natu$ and $t>0$, where the sum above is understood as absolutely convergent, and $\chi,\phi$ satisfy the following properties, \begin{enumerate} \item the family $\phi$ is \emph{antisymmetric}, in the sense that \[ \phi_{(l,m,n)}(t)=-\phi_{(l,n,m)}(t), \qquad (l,m,n)\in I,\ t\geq0, \] \item there exist two positive constants $c_1$ and $c_2$ for which, \begin{equation}\label{e:bound_chi_and_phi_def_shell_sol} \chi_n(t)\geq c_1\frac{2^{\alpha n}}{g(2^{n+1})} \qquad\text{and}\qquad \left|\phi_{(l,m,n)}(t)\right|\leq c_22^{(\frac d2+1-\beta)\min\{l,m,n\}} \end{equation} for all $(l,m,n)\in I$ and $t\geq0$. \end{enumerate} \end{definition} \begin{remark} We will prove below that the shell approximation of a solution of \eqref{e:glanse} is a shell solution. It is easy to check that the dissipation term is local as expected, due to the way the shell components of a solution interact in the model's dynamics. As for the nonlinear term, it turns out that the set $I$ of the triples of indices $(l,m,n)$ for which there may be interaction between the shell components $l$, $m$ and $n$ is quite small. This is basically because in Fourier space three components may interact only if they are the sides of a triangle, and by the triangle inequality their lengths cannot lie in three shells far away from each other. \end{remark} \begin{remark} To ensure that the sum in \eqref{e:shell_ns} is absolutely convergent, it is sufficient to assume that the sequence $(X_n(t))_{n\in\natu}$ is square summable (this will be a consequence of the energy inequality, see Definition~\ref{d:energyineq}).
Indeed, if $n$ is not the smallest index, then the sum is extended to a finite number of indices. Otherwise $n=\min\{l,m,n\}$ and the bound \eqref{e:bound_chi_and_phi_def_shell_sol} on $\phi_{(l,m,n)}$ is uniform with respect to $l$ and $m$. \end{remark} \begin{remark}\label{r:cancellations_phi} The antisymmetric property is what makes the non--linearity of \eqref{e:shell_ns} \emph{formally} conservative. In fact using antisymmetry, a change of variable ($m'=n$ and $n'=m$) and the fact that $(l,m',n')\in I$ if and only if $(l,n',m')\in I$, one could formally write, \begin{multline*} -\sum_{\substack{l,m,n\in\natu\\(l,m,n)\in I}}\phi_{(l,m,n)}X_lX_mX_n =\sum_{\substack{l,m,n\in\natu\\(l,m,n)\in I}}\phi_{(l,n,m)}X_lX_mX_n\\ =\sum_{\substack{l,m',n'\in\natu\\(l,n',m')\in I}}\phi_{(l,m',n')}X_lX_{m'}X_{n'} =\sum_{\substack{l,m',n'\in\natu\\(l,m',n')\in I}}\phi_{(l,m',n')}X_lX_{m'}X_{n'} \end{multline*} If these sums are absolutely convergent, this would indeed prove that the expression itself is equal to zero. Since these are infinite sums, these computations are not rigorous unless we know, for instance, that $\sum_n 2^{2\gamma n}X_n^2<\infty$, with $\gamma\geq\frac13(\frac{d}2+1-\beta)$, as can be verified by an elementary computation. \end{remark} \subsection{The shell model as a shell approximation} The bounds on the coefficients given in Definition~\ref{d:shell_solution} are in the correct direction to prove regularity results (and hence Theorem~\ref{t:main2bis}). The following theorem, which is the main result of this section, shows that they capture the natural scaling of the shell interactions for the \emph{physical} solutions. \begin{theorem}\label{t:shell_ode} If $v$ is a solution of \eqref{e:glanse} on $[0,T]$ and $X$ is its shell approximation, then $X$ is a shell solution. \end{theorem} \begin{remark}\label{r:erredi} At this stage it is easy to realize that our main results hold also in $\mathbb{R}^d$ with minimal changes.
Indeed, when passing to the shell approximation, all large scales are considered together in the first element of the shell model. \end{remark} The proof of Theorem~\ref{t:shell_ode} can be found at the end of this section. It is based on Propositions~\ref{p:bound_chi}-\ref{p:bound_phi} below, which give the actual definitions of $\chi$ and $\phi$ and prove their properties. \begin{proposition}\label{p:bound_chi} Let $X$ be the shell approximation of a solution $v$. Define $\chi_n(t)$ for all $n\in\natu$ and $t\geq0$ as follows \begin{equation}\label{e:chi_def} \chi_n(t) :=\begin{cases}\displaystyle \frac2{X_n^2(t)}\sum_{k\in\zds}\psi_n(k)\frac{|k|^\alpha}{g(|k|)}|v_k(t)|^2, &\quad\text{if }X_n(t)\neq0\\[4ex] \displaystyle \frac{2^{\alpha n-\alpha+1}}{g(2^{n+1})}, &\quad\text{if }X_n(t)=0 \end{cases} \end{equation} Then \[ \chi_n(t) \geq\frac{2^{\alpha n-\alpha+1}}{g(2^{n+1})} ,\qquad n\in\natu,t\geq0 \] \end{proposition} \begin{proof} Fix $n\in\natu$ and $t\geq0$. The map $\psi_n$ is supported on $\{x\in\mathbb{R}^d:2^{n-1}<|x|<2^{n+1}\}$ and $g$ is non-decreasing, so \[ \sum_{k\in\zds}\psi_n(k)\frac{|k|^\alpha}{g(|k|)}\left|v_k(t)\right|^2 \geq\sum_{k\in\zds}\psi_n(k)\frac{2^{(n-1)\alpha}}{g(2^{n+1})}\left|v_k(t)\right|^2 =\frac{2^{(n-1)\alpha}}{g(2^{n+1})}X_n^2(t) \] where we used~\eqref{e:xn_def}. By~\eqref{e:chi_def} we get the claim. \end{proof} We finally turn our attention to the antisymmetry property and an upper bound for $\phi_{(l,m,n)}(t)$. The statement is as follows. \begin{proposition}\label{p:bound_phi} Let $X$ be the shell approximation of a solution $v$. Define $\phi_{(l,m,n)}(t)$ for all $l,m,n\in\natu$ and $t\geq0$ as \begin{multline}\label{e:phi_def} \phi_{(l,m,n)}(t) :=\frac2{X_l(t)X_m(t)X_n(t)}\cdot\\ \cdot\sum_{\substack{h,k\in\mathbb{Z}^d\\h\neq0}}\psi_l(h)\psi_m(k-h)\psi_n(k)\frac{\im\{\scalar{v_h(t),k}\scalar{v_{k-h}(t),v_k(t)}\}}{|h|^\beta}, \end{multline} (unless $X_l(t)X_m(t)X_n(t)=0$, in which case $\phi_{(l,m,n)}(t):=0$).
Then: \begin{enumerate} \item $\phi_{(l,m,n)}(t)=0$ for all $(l,m,n)\notin I$ and all $t\geq0$. \item $\phi_{(l,m,n)}(t)=-\phi_{(l,n,m)}(t)$ for all $l,m,n\in\natu$ and all $t\geq0$. \item For any $\beta\geq0$ there exists a constant $c_3>0$ depending only on $d$, $\beta$ and $\psi$ such that \begin{equation}\label{e:bound_phi} |\phi_{(l,m,n)}(t)| \leq c_3 2^{(\frac d2+1-\beta)\min\{l,m,n\}} ,\qquad (l,m,n)\in I,t\geq0. \end{equation} \end{enumerate} \end{proposition} For the proof we need a couple of lemmas. \begin{lemma}\label{l:sign_change_sum_k_of_phi} Suppose $v=(v_k)_{k\in\mathbb{Z}^d}$ is a complex field over $\mathbb{Z}^d$ such that for all $k\in\mathbb{Z}^d$, $\scalar{k,v_k}=0$ and $\overline{v_k}=v_{-k}$. Then for all $h\in\mathbb{Z}^d$, \begin{multline*} \sum_{k\in\mathbb{Z}^d}\psi_m(k-h)\psi_n(k)\im\{\scalar{v_h,k}\scalar{v_{k-h},v_k}\}\\ =-\sum_{k\in\mathbb{Z}^d}\psi_m(k)\psi_n(k-h)\im\{\scalar{v_h,k}\scalar{v_{k-h},v_k}\}. \end{multline*} \end{lemma} \begin{proof} Consider the left--hand side. By performing the change of variable $k'=h-k$ we obtain \begin{gather*} \psi_m(k-h)=\psi_m(-k')=\psi_m(k'),\\ \psi_n(k)=\psi_n(h-k')=\psi_n(k'-h),\\ \scalar{v_h,k}=\scalar{v_h,h-k'}=-\scalar{v_h,k'},\\ \scalar{v_{k-h},v_k}=\scalar{v_{-k'},v_{h-k'}}=\scalar{\overline{v_{k'}},\overline{v_{k'-h}}}=\scalar{v_{k'-h},v_{k'}}. \end{gather*} The sum for $k\in\mathbb{Z}^d$ is equivalent to the sum for $k'\in\mathbb{Z}^d$ and this concludes the proof. \end{proof} \begin{lemma}\label{l:cs_plus_cardinality_inequality} Let $v$ be a solution and $X$ its shell approximation. Then for all $a,b,c\in\natu$ and for all $t\geq0$, \begin{multline*} \sum_{h\in\mathbb{Z}^d}\psi_a(h)|v_h(t)|\sum_{k\in\mathbb{Z}^d}\sqrt{\psi_b(k)\psi_c(k-h)}|v_k(t)||v_{k-h}(t)|\leq\\ \leq2^{\frac d2(a+3)}X_a(t)X_b(t)X_c(t). 
\end{multline*} \end{lemma} \begin{proof} By the Cauchy--Schwarz inequality and formula~\eqref{e:xn_def} we have that for all $h\in\mathbb{Z}^d$, \[ \sum_{k\in\mathbb{Z}^d}\sqrt{\psi_b(k)\psi_c(k-h)}|v_k(t)||v_{k-h}(t)| \leq X_b(t)X_c(t). \] Then, let $S_a$ denote the intersection between $\mathbb{Z}^d$ and the support of $\psi_a$. By inscribing $S_a$ in a cube we can bound its cardinality with $|S_a|\leq(2^{a+2}+1)^d\leq2^{(a+3)d}$, so \[ \sum_{k\in\mathbb{Z}^d}\psi_a(k)|v_k(t)| \leq\left(|S_a|\sum_{k\in S_a}\psi_a^2(k)|v_k(t)|^2\right)^{1/2} \leq\left(2^{(a+3)d}\right)^{1/2}X_a(t), \] where we used the Cauchy--Schwarz inequality and the fact that $\psi_a(k)\leq1$. \end{proof} \begin{proof}[Proof of Proposition~\ref{p:bound_phi}] Consider the definition of $\phi_{(l,m,n)}$, equation~\eqref{e:phi_def}. By applying Lemma~\ref{l:sign_change_sum_k_of_phi}, for fixed $t$, we immediately conclude that \[ \phi_{(l,n,m)} =-\phi_{(l,m,n)} \qquad l,m,n\in\natu, \] and in particular that $\phi_{(l,m,m)}=0$. Moreover, for all choices of $h$ and $k$, the arguments of $\psi_l$, $\psi_m$ and $\psi_n$ are the sides of a triangle in $\mathbb{R}^d$, so by the triangle inequality the size of the largest (wlog $k$) is at most twice the size of the second largest (wlog $h$). On the other hand for all $j\in\natu$ the support of $\psi_j$ is $\{x\in\mathbb{R}^d:2^{j-1}<|x|<2^{j+1}\}$. Thus whenever $\psi_l(h)\psi_n(k)\neq0$, necessarily $n\leq l+2$ since \[ 2^{n-1}<|k|\leq2|h|<2^{l+2}. \] This proves that $\phi_{(l,m,n)}=0$ outside $I$ as defined in equation~\eqref{e:def_I}. Finally we prove inequality~\eqref{e:bound_phi} for $(l,m,n)\in I$ with $m<n$; the case $m>n$ then follows by antisymmetry, since $\min\{l,m,n\}$ is unchanged, and the case $m=n$ is trivial because $\phi_{(l,m,m)}=0$. We will consider separately the two cases $n-m>2$ and $n-m\in\{1,2\}$, starting from the former. \noindent\textbf{Case 1.} Since $m<n-2$ and $(l,m,n)\in I$, we have $m=\min\{l,m,n\}$ and $|l-n|\leq2$.
This means in particular that in all the non-zero terms of the sum in equation~\eqref{e:phi_def} we have $|k-h|<|k|$, so it is convenient to substitute $\scalar{v_h,k}=\scalar{v_h,k-h}$ (which is legitimate because $\scalar{v_h,h}=0$) in the equation to obtain the following bound \[ |\phi_{(l,m,n)}| \leq\frac2{X_lX_mX_n}\sum_{\substack{h,k\in\mathbb{Z}^d\\h\neq0}}\psi_l(h)\psi_m(k-h)\psi_n(k)\frac{|v_h||k-h||v_{k-h}||v_k|}{|h|^\beta}. \] By the definition of $\psi_l$, either $\psi_l(h)=0$ or $|h|\geq 2^{l-1}\geq2^m$. Applying this and the change of variable $k'=k-h$ one gets, \[ |\phi_{(l,m,n)}| \leq\frac{2^{1-\beta m}}{X_lX_mX_n}\sum_{k'\in\mathbb{Z}^d}\psi_m(k')|k'||v_{k'}|\sum_{h\in\mathbb{Z}^d}\psi_l(h)\psi_n(k'+h)|v_h||v_{k'+h}|. \] In the same way we can substitute $|k'|\leq2^{m+1}$ and apply Lemma~\ref{l:cs_plus_cardinality_inequality} (recall that $\psi\leq1$, so $\psi\leq\sqrt\psi$) to get \[ |\phi_{(l,m,n)}| \leq2^{1-\beta m+m+1+\frac d2(m+3)}. \] Since in the present case $\min\{l,m,n\}=m$, this proves inequality~\eqref{e:bound_phi} with $c_3=2^{2+3d/2}$. \noindent\textbf{Case 2.} Suppose now that $n-m\in\{1,2\}$ and $(l,m,n)\in I$, then $l\leq n+2$ and $\min\{l,m,n\}\geq l-4$. In this case it is $l$ that can be small with respect to $m$ and $n$, so we take the terms in $l$ and $h$ outside the internal sum, \[ |\phi_{(l,m,n)}| \leq \frac{2}{X_lX_mX_n}\sum_{h\in\mathbb{Z}^d\setminus\{0\}}\frac{\psi_l(h)}{|h|^\beta}\left|\sum_{k\in\mathbb{Z}^d}\psi_m(k-h)\psi_n(k)\im\{\scalar{v_h,k}\scalar{v_{k-h},v_{k}}\}\right|. \] The idea is to exploit the cancellations in the sum over $k$ that happen when $k-h$ and $k$ are switched.
By Lemma~\ref{l:sign_change_sum_k_of_phi} and the bound $|k|\leq2^{n+1}$ for $k$ in the support of $\psi_m$ or $\psi_n$, \begin{multline*} |\phi_{(l,m,n)}| \leq\frac2{X_lX_mX_n}\sum_{h\in\mathbb{Z}^d\setminus\{0\}}\frac{\psi_l(h)}{|h|^\beta}\\ \cdot\frac12\left|\sum_{k\in\mathbb{Z}^d}(\psi_m(k-h)\psi_n(k)-\psi_m(k)\psi_n(k-h))\im\{\scalar{v_h,k}\scalar{v_{k-h},v_{k}}\}\right|\\ \leq\frac{2^{n+1}}{X_lX_mX_n}\sum_{h\in\mathbb{Z}^d\setminus\{0\}}\frac{\psi_l(h)|v_h|}{|h|^\beta}\sum_{k\in\mathbb{Z}^d}\bigl|\psi_m(k-h)\psi_n(k)-\psi_m(k)\psi_n(k-h)\bigr| \left|v_{k-h}\right| \left|v_{k}\right|. \end{multline*} We turn our attention to the term $\psi_m(k-h)\psi_n(k)-\psi_m(k)\psi_n(k-h)$ and show that it is small. Let $L$ denote the Lipschitz constant of the function $\psi^{1/2}$. Then for all $h,k\in\mathbb{Z}^d$ and all $m,n\in\mathbb{N}$ such that $m\geq n-2$, \begin{multline*} \left|\sqrt{\psi_m(k-h)\psi_n(k)}-\sqrt{\psi_m(k)\psi_n(k-h)}\right|\\ =\left|\sqrt{\psi_m(k-h)\psi_n(k)}-\sqrt{\psi_m(k)\psi_n(k)}+\sqrt{\psi_m(k)\psi_n(k)}-\sqrt{\psi_m(k)\psi_n(k-h)}\right|\\ \leq L\frac{|h|}{2^m}\sqrt{\psi_n(k)}+L\frac{|h|}{2^n}\sqrt{\psi_m(k)} \leq L\frac{|h|}{2^{n-3}}. \end{multline*} Moreover by symmetry with respect to $m$ and $n$, \begin{multline*} \sum_{k\in\mathbb{Z}^d}\left(\sqrt{\psi_m(k-h)\psi_n(k)}+\sqrt{\psi_m(k)\psi_n(k-h)}\right)\left|v_{k-h}\right| \left|v_{k}\right|\\ =2\sum_{k\in\mathbb{Z}^d}\sqrt{\psi_m(k-h)\psi_n(k)}\left|v_{k-h}\right| \left|v_{k}\right|, \end{multline*} so that \[ |\phi_{(l,m,n)}| \leq\frac{2^5L}{X_lX_mX_n}\sum_{h\in\mathbb{Z}^d\setminus\{0\}}|h|^{1-\beta}\psi_l(h)|v_h|\sum_{k\in\mathbb{Z}^d}\sqrt{\psi_m(k-h)\psi_n(k)}\left|v_{k-h}\right| \left|v_{k}\right|.
\] By the usual bound $2^{l-1}\leq|h|\leq2^{l+1}$, since $\beta\geq0$, we see that $|h|^{1-\beta}\leq2^{l(1-\beta)+1+\beta}$, so by Lemma~\ref{l:cs_plus_cardinality_inequality}, \[ |\phi_{(l,m,n)}| \leq 2^5 2^{(1-\beta)l+1+\beta} 2^{(l+3)\frac d2}L \leq 2^{(\frac d2+1-\beta)(l-4)+9-3\beta+\frac{11}2d}L. \] Since in the present case $\min\{l,m,n\}\geq l-4$, this proves inequality~\eqref{e:bound_phi} with $c_3=2^{9+\frac{11}2d-3\beta}L$. \end{proof} Finally we have all the ingredients to prove the main theorem of this section. \begin{proof}[Proof of Theorem~\ref{t:shell_ode}] A direct computation using~\eqref{e:xn_def} and~\eqref{e:fourier_ns} shows that \begin{multline*} \frac12\frac d{dt}X_n^2 =\re\sum_{k\in\mathbb{Z}^d}\psi_n(k)\scalar{v'_k,v_k}\\ =-\sum_{k\in\zds}\psi_n(k)\frac{|k|^\alpha}{g(|k|)}|v_k|^2+\im\sum_{k\in\mathbb{Z}^d}\sum_{h\in\zds}\psi_n(k)\frac{\scalar{v_h,k}}{|h|^\beta}\scalar{P_k(v_{k-h}),v_k}\\ =-\sum_{k\in\zds}\psi_n(k)\frac{|k|^\alpha}{g(|k|)}|v_k|^2+\sum_{\substack{h,k\in\mathbb{Z}^d\\h\neq0}}\psi_n(k)\frac{\im\{\scalar{v_h,k}\scalar{v_{k-h},v_k}\}}{|h|^\beta}. \end{multline*} To deal with the first sum, define $\chi$ as in Proposition~\ref{p:bound_chi}. By applying~\eqref{e:chi_def} for $X_n(t)\neq0$ and~\eqref{e:xn_def} for $X_n(t)=0$ we see that in both cases, \[ 2\sum_{k\in\zds}\psi_n(k)\frac{|k|^\alpha}{g(|k|)}|v_k|^2 =\chi_n(t)X_n^2(t). \] Now consider the second sum. 
Since the terms with $h=k$ give no contribution, we can apply \[ \sum_{l\in\natu}\psi_l(h) =\sum_{m\in\natu}\psi_m(k-h) =1 ,\qquad h,k\in\mathbb{Z}^d,\quad 0\neq h\neq k, \] to get \begin{multline*} \sum_{\substack{h,k\in\mathbb{Z}^d\\h\neq0}}\psi_n(k)\frac{\im\{\scalar{v_h,k}\scalar{v_{k-h},v_k}\}}{|h|^\beta}\\ =\sum_{\substack{h,k\in\mathbb{Z}^d\\h\neq0}}\sum_{l,m\in\natu}\psi_l(h)\psi_m(k-h)\psi_n(k)\frac{\im\{\scalar{v_h,k}\scalar{v_{k-h},v_k}\}}{|h|^\beta}\\ =\sum_{l,m\in\natu}\sum_{\substack{h,k\in\mathbb{Z}^d\\h\neq0}}\psi_l(h)\psi_m(k-h)\psi_n(k)\frac{\im\{\scalar{v_h,k}\scalar{v_{k-h},v_k}\}}{|h|^\beta}, \end{multline*} where it was possible to exchange the order of summation because the middle expression is clearly absolutely convergent. Now define $\phi$ as in Proposition~\ref{p:bound_phi}. By applying~\eqref{e:phi_def} or~\eqref{e:xn_def} depending on $X_l(t)X_m(t)X_n(t)$ being positive or zero, we see that for all $l,m,n\in\natu$ and $t\geq0$, \begin{multline*} 2\sum_{\substack{h,k\in\mathbb{Z}^d\\h\neq0}}\psi_l(h)\psi_m(k-h)\psi_n(k)\frac{\im\{\scalar{v_h,k}\scalar{v_{k-h},v_k}\}}{|h|^\beta}=\\ =\phi_{(l,m,n)}(t)X_l(t)X_m(t)X_n(t). \end{multline*} Putting everything together we get \[ \frac d{dt}X_n^2(t) =-\chi_n(t)X_n^2(t)+\sum_{l,m\in\natu}\phi_{(l,m,n)}(t)X_l(t)X_m(t)X_n(t) \qquad n\in\natu,\ t\geq0. \] Finally, recalling from Proposition~\ref{p:bound_phi} that $\phi\equiv0$ outside $I$, we may restrict the scope of the sum and obtain equation~\eqref{e:shell_ns}. The required properties of the coefficients $\chi$ and $\phi$ follow again from Propositions~\ref{p:bound_chi}-\ref{p:bound_phi}. \end{proof} \section{From the dyadic equation to the recursive inequality} \label{s:recursion} In view of the results of the previous section, we can now concentrate on shell solutions and forget equation \eqref{e:glanse}. In this section we proceed as in \cite{BarMorRom2014} and deduce a recursive inequality between the tails of energy and dissipation.
Clearly here, due to the more complex non--linear interaction, the relation is less trivial than in \cite{BarMorRom2014}. \begin{definition}\label{d:energyineq} A shell solution $X$ satisfies the \emph{energy inequality} on $[0,T]$ if the sum $\sum_n X_n^2(0)$ is finite and \begin{equation}\label{e:energy_inequality} \sum_{n\in\natu}X_n^2(t)+\int_0^t\sum_{n\in\natu}\chi_n(s)X_n^2(s)ds \leq\sum_{n\in\natu}X_n^2(0), \qquad t\in[0,T]. \end{equation} \end{definition} \begin{definition}\label{d:def_df} Let $X$ be a shell solution and define the sequences of real valued maps $(F_n)_{n\in\natu}$ and $(d_n)_{n\in\natu}$ for $t\geq0$ by \begin{gather} F_n(t) :=\sum_{k\geq n}X_k^2(t),\notag\\ \label{e:def_dn} d_n(t) :=\left(F_n(t)+\sum_{h\geq n}\int_0^t\chi_h(s)X_h^2(s)ds\right)^{\frac12}. \end{gather} We will call $(F_n)_{n\in\natu}$ the \emph{tail} of $X$ and $(d_n)_{n\in\natu}$ the \emph{energy bound} of $X$. \end{definition} The recursive inequality between the tails and the energy bound is given in the next result. \begin{proposition}\label{p:d_recursion} Let $X$ be a shell solution that satisfies the energy inequality on a time interval $[0,t]$, let $(d_n)_{n\in\natu}$ be its sequence of energy bounds, and set $\lambda=2^\alpha$. Then there is a positive constant $c_4>0$ such that for all $n\in\natu$, \begin{equation}\label{e:d_recursion} d_n^2(t) \leq F_n(0)+c_4\sum_{l=0}^{n-1}\frac{\bar d_l}{\lambda^{n-l}}\sum_{m\geq n-2}\frac{g(2^{m+1})}{\lambda^{m-n}}\bigl(d_{m}^2(t)-d_{m+1}^2(t)\bigr), \end{equation} where $\bar d_l:=\max_{s\in[0,t]}d_l(s)$. \end{proposition} \begin{proof} Fix $n\in\natu$. Differentiate $\sum_{h=0}^{n-1}X_h^2$ using equation~\eqref{e:shell_ns}, \[ \frac d{dt}\sum_{h=0}^{n-1}X_h^2 =-\sum_{h=0}^{n-1}\chi_hX_h^2+\sum_{\substack{l,m,h\in\natu\\(l,m,h)\in I\\h\leq n-1}}\phi_{(l,m,h)}X_lX_mX_h.
\] Apply Lemma~\ref{l:cancellations_phi} below to the second sum and integrate on $[0,t]$ to obtain \[ \sum_{h=0}^{n-1}X_h^2(t)-\sum_{h=0}^{n-1}X_h^2(0) =-\int_0^t\sum_{h=0}^{n-1}\chi_hX_h^2\,ds-\int_0^t\sum_{\substack{(l,m,h)\in I\\m<n\leq h}}\phi_{(l,m,h)}X_lX_mX_h\,ds, \] so that by the energy inequality~\eqref{e:energy_inequality}, \[ F_n(t)+\int_0^t\sum_{h\geq n}\chi_h(s)X_h^2(s)\,ds \leq F_n(0)+\int_0^t\sum_{\substack{(l,m,h)\in I\\m<n\leq h}}\phi_{(l,m,h)}X_l(s)X_m(s)X_h(s)\,ds, \] where the $F_n$ are the tails of $X$ and $F_n(0)<\infty$ by hypothesis. Thus by \eqref{e:def_dn}, \[ d_n^2(t) \leq F_n(0)+\int_0^t\sum_{\substack{(l,m,h)\in I\\m<n\leq h}}\phi_{(l,m,h)}X_l(s)X_m(s)X_h(s)\,ds. \] Recall that $\alpha+\beta\geq\frac{d}2+1$, hence the bound \eqref{e:bound_chi_and_phi_def_shell_sol} for $\phi$ yields $|\phi_{(l,m,h)}| \leq c_2 \lambda^{\min\{l,m,h\}}$. Therefore \[ d_n^2(t) \leq F_n(0)+\int_0^t\sum_{\substack{(l,m,h)\in I\\m<n\leq h}}c_2\lambda^{\min\{l,m\}}|X_l(s)X_m(s)X_h(s)|\,ds. \] It is convenient to split the set over which the sum is done into $\{l<m\}$ and $\{m\leq l\}$, \begin{multline*} \sum_{\substack{(l,m,h)\in I\\m<n\leq h}}\lambda^{\min\{l,m\}}|X_lX_mX_h| \leq\sum_{\substack{(l,m,h)\in I\\l<m<n\leq h}}\lambda^l|X_lX_mX_h| +\sum_{\substack{(l,m,h)\in I\\m<n\leq h\\m\leq l}}\lambda^m|X_lX_mX_h|\\ \leq\sum_{\substack{(l,m,h)\in I\\l<m<n\leq h}}\lambda^l|X_lX_mX_h| +\sum_{\substack{(l,m,h)\in I\\l<n\leq h\\l\leq m}}\lambda^l|X_lX_mX_h|\\ \leq 2\sum_{\substack{(l,m,h)\in I\\l<n\leq h\\l\leq m}}\lambda^l|X_lX_mX_h| \leq 2\sum_{l=0}^{n-1}\lambda^l\bar d_l\sum_{h\geq n}\sum_{m=h-2}^{h+2}|X_mX_h|. \end{multline*} Apply the elementary inequality $2|X_hX_m|\leq X_h^2+X_m^2$ to get \[ 2\sum_{h\geq n}\sum_{m=h-2}^{h+2}|X_hX_m| \leq\sum_{h\geq n}\sum_{m=h-2}^{h+2}(X_h^2+X_m^2) \leq10\sum_{m\geq n-2}X_m^2.
\] Then by the bound on $\chi$ in~\eqref{e:bound_chi_and_phi_def_shell_sol}, on all $[0,t]$, \[ \sum_{m\geq n-2}X_m^2 \leq c_1^{-1}\sum_{m\geq n-2}\frac{g(2^{m+1})}{\lambda^m}\chi_mX_m^2. \] Finally the integral of $\chi_mX_m^2$ can be bounded using \eqref{e:def_dn}, \[ d_{m}^2(t)-d_{m+1}^2(t) =F_{m}(t)-F_{m+1}(t)+\int_0^t\chi_m(s)X_m^2(s)\,ds \geq\int_0^t\chi_m(s)X_m^2(s)\,ds. \] Putting all together we obtain \[ d_n^2(t) \leq F_n(0)+10\frac{c_2}{c_1}\sum_{l=0}^{n-1}\frac{\bar d_l}{\lambda^{-l}}\sum_{m\geq n-2}\frac{g(2^{m+1})}{\lambda^{m}}(d_{m}^2(t)-d_{m+1}^2(t)), \] thus proving equation~\eqref{e:d_recursion} with $c_4=10\frac{c_2}{c_1}$. \end{proof} \begin{lemma}\label{l:cancellations_phi} Let $X$ be a shell solution, then for all $n\in\natu\setminus\{0\}$ and $s\in[0,t]$, \begin{equation}\label{e:cancellations_phi} \sum_{\substack{(l,m,h)\in I\\h\leq n-1}}\phi_{(l,m,h)}X_lX_mX_h =-\sum_{\substack{(l,m,h)\in I\\m\leq n-1<h}}\phi_{(l,m,h)}X_lX_mX_h. \end{equation} \end{lemma} \begin{proof} By using \eqref{e:bound_chi_and_phi_def_shell_sol}, noticing that $\min(l,m,h)\leq n-1$, we see that by definition of shell solution (Definition~\ref{d:shell_solution}) the left--hand side of \eqref{e:cancellations_phi} is an absolutely convergent sum. Therefore we can exploit the cancellations due to the antisymmetry of $\phi$, as in Remark~\ref{r:cancellations_phi}. 
Indeed \begin{equation}\label{e:long1} \smashoperator[r]{\sum_{\substack{(l,m,h)\in I\\h\leq n-1}}}\phi_{(l,m,h)}X_lX_mX_h = \smashoperator[r]{\sum_{\substack{(l,m,h)\in I\\m<h\leq n-1}}}\phi_{(l,m,h)}X_lX_mX_h +\smashoperator[r]{\sum_{\substack{(l,m,h)\in I\\h\leq n-1\\m>h}}}\phi_{(l,m,h)}X_lX_mX_h, \end{equation} and \begin{multline}\label{e:long2} \sum_{\substack{(l,m,h)\in I\\h\leq n-1\\m>h}}\phi_{(l,m,h)}X_lX_mX_h = -\sum_{\substack{(l,m,h)\in I\\h\leq n-1\\m>h}}\phi_{(l,h,m)}X_lX_mX_h =\\ = -\sum_{\substack{(l,h',m')\in I\\m'\leq n-1\\h'>m'}}\phi_{(l,m',h')}X_lX_{m'}X_{h'} = -\sum_{\substack{(l,m',h')\in I\\m'\leq n-1\\m'<h'}}\phi_{(l,m',h')}X_lX_{m'}X_{h'}. \end{multline} Using \eqref{e:long2} in \eqref{e:long1}, the conclusion follows. \end{proof} \section{Solving the recursion} \label{s:cascade} In this section we complete the proof of our main result. In the previous section we established a recursive inequality involving the energy bounds of a shell solution. The following theorem shows that shell solutions are smooth. By Theorem~\ref{t:shell_ode} the shell approximation of a solution of \eqref{e:glanse} is a shell solution, hence Theorem~\ref{t:main2bis} holds, and in turn Theorem~\ref{t:main2} holds as well. \begin{theorem}\label{t:main2ter} Let $X$ be a shell solution satisfying the energy inequality on $[0,t)$. If $\sup_n 2^{mn}|X_n(0)|<\infty$ for every $m\geq1$, then \[ \sup_{s\in [0,t]}\sup_n 2^{mn}|X_n(s)| <\infty \] for every $m\geq1$. \end{theorem} Let $b_n = g(2^{n+1})^{-1}$ for $n\geq0$; then the assumptions of Theorem~\ref{t:main2} on $g$ read, in terms of the sequence $b$, as \begin{itemize} \item $(b_n)_{n\in\mathbb{N}}$ non--increasing, \item $(\lambda^n b_n)_{n\in\mathbb{N}}$ non--decreasing, \item $\sum_n b_n = \infty$.
\end{itemize} Let $X$ be a shell solution as in the statement of Theorem~\ref{t:main2ter}, denote by $(d_n)_{n\in\mathbb{N}}$ and $(F_n)_{n\in\mathbb{N}}$ the energy bound and the tail of $X$ (see Definition~\ref{d:def_df}), and set $\bar d_n = \sup_{s\in[0,t]}d_n(s)$ for every $n$. Set \[ Q_n = \sum_{j=0}^{n-1}\frac{\bar d_j}{\lambda^{n-j}} \] and \[ R_n(t) = \sum_{j\geq n} \frac{d_j(t)^2 - d_{j+1}(t)^2}{\lambda^{j-n}b_j}, \] where $\lambda=2^\alpha$ as in the previous section. We recall that, by Proposition~\ref{p:d_recursion}, the following inequality holds, \begin{equation}\label{e:drecursive} d_n(t)^2 \leq F_n(0) + c_4Q_n R_{n-2}(t). \end{equation} In the following lemma we collect some properties of the quantities $R_n$, $Q_n$, $\bar d_n$ that will be crucial in the proof of Theorem~\ref{t:main2ter} above. \begin{lemma} The following properties hold. \begin{enumerate} \item For every $1\leq m_1\leq m_2$ and $t>0$, \begin{equation}\label{e:Rformula} \min\{R_{m_1}(t),R_{m_1+1}(t),\dots,R_{m_2}(t)\} \leq \frac{\lambda}{\lambda-1} \frac{d_{m_1}(t)^2}{\sum_{n=m_1}^{m_2} b_n}. \end{equation} \item For every $t>0$, $\liminf_n R_n(t) = 0$. \item $\bar d_n\downarrow0$ as $n\to\infty$. \item $Q_n\to0$ as $n\to\infty$. \item $(Q_n)_{n\geq1}$ is eventually non--increasing. \end{enumerate} \end{lemma} \begin{proof} Since $\lambda^n b_n$ is non--decreasing, we know that $b_n - \lambda^{-1}b_{n-1}\geq0$. Hence by exchanging the sums, \begin{multline*} \sum_{n=m_1}^\infty\bigl(b_n - \lambda^{-1}b_{n-1}\bigr)R_n(t)=\\ = \sum_{k=m_1}^\infty \frac{d_k(t)^2 - d_{k+1}(t)^2}{\lambda^k b_k} \sum_{n=m_1}^k \bigl(\lambda^n b_n - \lambda^{n-1}b_{n-1}\bigr)\leq\\ \leq \sum_{k=m_1}^\infty (d_k(t)^2 - d_{k+1}(t)^2) \leq d_{m_1}(t)^2. 
\end{multline*} If $m_2\geq m_1$, since $(b_n)_{n\geq1}$ is non--increasing, \[ \begin{aligned} \sum_{n=m_1}^{m_2} \bigl(b_n - \lambda^{-1}b_{n-1}\bigr)R_n(t) &\geq \min\{R_{m_1}(t),\dots,R_{m_2}(t)\} \sum_{n=m_1}^{m_2}\bigl(b_n - \lambda^{-1}b_{n-1}\bigr)\\ &\geq\frac{\lambda-1}{\lambda}\Bigl(\sum_{n=m_1}^{m_2} b_n\Bigr) \min\{R_{m_1}(t),\dots,R_{m_2}(t)\}. \end{aligned} \] The claim $\liminf_n R_n(t)=0$ follows from \eqref{e:Rformula}, since $d_n(t)\leq d_1(t)$ for every $n$, and since, by the assumptions on $(b_n)_{n\geq1}$, we can find a sequence $(m_k)_{k\geq1}$ such that $\sum_{n=m_k}^{m_{k+1}-1}b_n\uparrow\infty$. To prove that $\bar d_n\downarrow0$, we notice that the sequence $(m_k)_{k\geq1}$ mentioned above does not depend on $t$, hence using the monotonicity of $(d_n(t))_{n\geq1}$ and formula \eqref{e:Rformula}, we can prove that $\liminf_n \bar d_n = 0$, and hence $\bar d_n\downarrow0$ by monotonicity. Once we know that $\bar d_n\downarrow0$, an easy and standard argument proves that $Q_n\to0$. To prove that $(Q_n)_{n\geq1}$ is eventually non--increasing, we notice that, since $(\bar d_n)_{n\geq1}$ is non--increasing, \[ (Q_{n+1} - Q_n) = \frac1\lambda(Q_n - Q_{n-1}) + \frac1\lambda(\bar d_n - \bar d_{n-1}) \leq \frac1\lambda(Q_n - Q_{n-1}). \] In view of the above inequality, it is sufficient to show that for some $m$ the increment $Q_m-Q_{m-1}\leq0$. This is true because otherwise the sequence $(Q_n)_{n\geq1}$ would be non--decreasing, in contradiction with $Q_n\to0$ and $Q_n\geq0$. \end{proof} Given $\theta>0$ and $n_0\geq1$, define by recursion the sequence \begin{equation}\label{e:sequence} n_{k+1} = 2 + \min\Bigl\{n\geq n_k-1: \sum_{j=n_k-1}^n b_j\geq\theta\lambda^{-\frac{k}4}\Bigr\}. 
\end{equation} The definition of $Q_n$ and the fact that the sequence $(\bar d_n)_{n\geq1}$ is non--increasing yield the following recursive formula for $Q_{n_k}$, \begin{equation}\label{e:Qrecursive} Q_{n_{k+1}} = \frac1{\lambda^{n_{k+1}-n_k}}Q_{n_k} + \sum_{j=n_k}^{n_{k+1}-1}\frac{\bar d_j}{\lambda^{n_{k+1}-j}} \leq \frac1\lambda Q_{n_k} + c\bar d_{n_k}, \end{equation} for a constant $c>0$ depending only on $\lambda$. Moreover, if we choose $n_0$ large enough that $(Q_n)_{n\geq n_0}$ is non--increasing, \[ d_{n_{k+1}}(t)^2 \leq d_n(t)^2 \leq F_n(0) + c_4 Q_n R_{n-2}(t) \leq F_{n_k}(0) + c_4 Q_{n_k} R_{n-2}(t) \] for each $n\in\{n_k+1,\dots,n_{k+1}\}$, hence by formula \eqref{e:Rformula} and the definition of the sequence $(n_k)_{k\geq1}$, \begin{multline*} d_{n_{k+1}}(t)^2 \leq F_{n_k}(0) + c_4 Q_{n_k}\min\{R_{n_k-1},\dots,R_{n_{k+1}-2}\}\leq\\ \leq F_{n_k}(0) + c Q_{n_k}\frac{d_{n_k-1}(t)^2}{\sum_{j=n_k-1}^{n_{k+1}-2}b_j} \leq F_{n_k}(0) + c\frac{\lambda^{\frac{k}4}}{\theta}Q_{n_k}d_{n_k-1}(t)^2, \end{multline*} and in conclusion, \begin{equation}\label{e:bdrecursive} \bar d_{n_{k+1}}^2 \leq F_{n_k}(0) + c\frac{\lambda^{\frac{k}4}}{\theta}Q_{n_k}\bar d_{n_k-1}^2. \end{equation} \begin{lemma}[initial step of the cascade]\label{l:initial} Given $M>0$, there are $n_0\geq1$ and $\theta>0$ such that \[ \begin{gathered} Q_{n_k} \leq \lambda^{-\frac{k}2},\\ \bar d_{n_k}^2 \leq \lambda^{-Mk}, \end{gathered} \] for all $k\geq0$. \end{lemma} \begin{proof} Without loss of generality we can choose $M$ large (depending only on the value of $\lambda$, see below at the end of the proof). Choose $n_0$ large enough that $(Q_n)_{n\geq n_0}$ is non--increasing and \[ Q_{n_0-i} \leq\epsilon, \qquad \bar d_{n_0-i} \leq \epsilon, \quad i=0,1, \quad\text{and}\quad \lambda^{Mn}F_n(0) \leq\epsilon, \quad n\geq n_0, \] for a number $\epsilon\in(0,1)$ suitably chosen below. 
We will prove by induction that \begin{equation}\label{e:initial_claim} Q_{n_k-i} \leq \lambda^{-\frac12(k-i)}, \qquad \bar d_{n_k-i}^2 \leq \lambda^{-M(k-i)}, \qquad i=0,1, \qquad k\geq1. \end{equation} For the initial step of the induction ($k=1$), we notice that by \eqref{e:Qrecursive} and \eqref{e:bdrecursive}, \[ \begin{gathered} Q_{n_1} \leq \frac1\lambda Q_{n_0} + c\bar d_{n_0} \leq \frac\epsilon\lambda + c\epsilon \leq \frac1{\lambda^{1/2}},\\ \bar d_{n_1}^2 \leq F_{n_0}(0) + \frac{c}\theta Q_{n_0}\bar d_{n_0-1}^2 \leq \epsilon + \frac{c}{\theta}\epsilon^3 \leq \lambda^{-M},\\ \end{gathered} \] if we choose $\epsilon$ small enough, depending on the values of $\lambda$, $M$, and $\theta$. Assume now that \eqref{e:initial_claim} holds for some $k\geq1$, and let us prove that the same holds for $k+1$. To this end it is sufficient to give the estimate for $Q_{n_{k+1}}$ and $\bar d_{n_{k+1}}^2$. Again by \eqref{e:Qrecursive}, \eqref{e:bdrecursive} and the induction hypothesis, and since $(n_k)_{k\geq0}$ is increasing by definition, \[ \begin{gathered} Q_{n_{k+1}} \leq \frac1\lambda Q_{n_k} + c \bar d_{n_k} \leq \lambda^{-\frac{k}2-1} + c\lambda^{-\frac{M}2k} \leq \lambda^{-\frac12(k+1)},\\ \bar d_{n_{k+1}}^2 \leq F_{n_k}(0) + c\frac{\lambda^{\frac{k}4}}{\theta}Q_{n_k}\bar d_{n_k-1}^2 \leq \epsilon\lambda^{-Mk} + \frac{c}\theta \lambda^{-\frac{k}4}\lambda^{-M(k-1)} \leq \lambda^{-M(k+1)}, \end{gathered} \] if $M$ is large (depending on $\lambda$), and $\epsilon$ is small and $\theta$ is large (depending only on $M$, $\lambda$). \end{proof} Before giving the last step of the proof of Theorem~\ref{t:main2ter}, we show a property of the sequence $(n_k)_{k\geq0}$. The proof is the same as that of \cite[Lemma 11]{BarMorRom2014}; we detail it for completeness. \begin{lemma}\label{l:indices} Given $n_0\geq1$ and $\theta>0$, consider the sequence defined in \eqref{e:sequence}. For infinitely many $k$, $n_{k+1}=n_k+1$. 
In particular $b_{n_k-1}\geq\theta\lambda^{-k/4}$ for all such $k$. \end{lemma} \begin{proof} Assume by contradiction that there is $r$ such that $n_{k+1}\geq n_k+2$ for $k\geq r$. On the one hand \[ \sum_{j=n_k-1}^{n_{k+1}-3}b_j \leq \theta \lambda^{-\frac{k}4}, \] and summing up in $k\geq r$ yields \[ \sum_{k\geq r}\sum_{j=n_k-1}^{n_{k+1}-3}b_j <\infty; \] since $\sum_j b_j=\infty$ and the only indices not covered by these sums are those of the form $j=n_{k+1}-2$, it follows that $\sum_k b_{n_k-2} = \infty$. On the other hand, $b_{n_k-2}\leq b_{n_k-3}\leq \theta\lambda^{-\frac14(k-1)}$ and the series $\sum_k b_{n_k-2}$ converges, a contradiction. \end{proof} \begin{lemma}[cascade recursion] For every $M>0$ there is $c_M>0$ such that \[ \bar d_n^2 \leq c_M \lambda^{-Mn}, \qquad\quad Q_n \leq c_M \lambda^{-n}, \] for every $n$. \end{lemma} \begin{proof} There is no loss of generality if we assume $M$ is large. Let $n_0$, $\theta$ be the values provided by Lemma~\ref{l:initial}. By Lemma~\ref{l:initial} and Lemma~\ref{l:indices} there are infinitely many $k\geq1$ such that \begin{equation}\label{e:cascade_init} b_{n_k-1} \geq\theta \lambda^{-\frac{k}4}, \qquad Q_{n_k} \leq \lambda^{-\frac{k}2}, \qquad \bar d_{n_k}^2 \leq \lambda^{-Mk}. \end{equation} Let $k_0$ be one of these indices, large enough (the size of $k_0$ will be chosen at the end of the proof). We will prove by induction that \begin{equation}\label{e:cascade_recurse} \bar d_{n_{k_0}+m}^2 \leq c \lambda^{-Mm}, \qquad Q_{n_{k_0}+m} \leq c' \lambda^{-m}, \qquad b_{n_{k_0}-1+m} \geq \theta \lambda^{-\frac{k_0}4-m}, \end{equation} for a suitable choice of the constants $c>0$, $c'>0$. We first notice that there is nothing to prove concerning $b_{n_{k_0}-1+m}$, since this is a straightforward consequence of the choice of $k_0$ and the monotonicity of $(\lambda^n b_n)_{n\geq1}$. The initial step $m=0$ holds, since inequalities \eqref{e:cascade_init} hold true for the index $k_0$. 
For $m=1$, \[ \begin{gathered} \bar d_{n_{k_0}+1}^2 \leq \bar d_{n_{k_0}}^2 \leq c \lambda^{-M},\\ Q_{n_{k_0}+1} = \frac1\lambda Q_{n_{k_0}} + \frac1\lambda\bar d_{n_{k_0}} \leq \frac1\lambda(\lambda^{-\frac{k_0}2} + \lambda^{-\frac{M}2k_0}) \leq \frac{c'}\lambda, \end{gathered} \] if $c=\lambda^{-M(k_0-1)}$ and $c'\geq\lambda^{-k_0/2} + \lambda^{-Mk_0/2}$. Assume that \eqref{e:cascade_recurse} holds for $1,\dots,m$, for some $m\geq1$. By the definition of $Q_n$, \[ \begin{aligned} Q_{n_{k_0}+m+1} &= Q_{n_{k_0}}\lambda^{-(m+1)} + \sum_{j=n_{k_0}}^{n_{k_0}+m}\frac{\bar d_j}{\lambda^{n_{k_0}+m+1-j}}\\ &\leq \lambda^{-\frac{k_0}2-(m+1)} + \sqrt{c}\lambda^{-(m+1)}\sum_{j=0}^m\lambda^{-(\frac{M}2-1)j}\\ &\leq \Bigl(\lambda^{-\frac{k_0}2} + \frac\lambda{\lambda-1}\sqrt{c}\Bigr) \lambda^{-(m+1)}\\ &\leq c' \lambda^{-(m+1)}, \end{aligned} \] if $c'= \lambda^{-\frac{k_0}2} + \lambda(\lambda-1)^{-1}\sqrt{c}$ (the previous constraint on $c'$ is met by this choice). By \eqref{e:drecursive} and \eqref{e:Rformula} we have that for every $n\geq2$, \[ d_{n+1}(t)^2 \leq F_{n+1}(0) + c_4 Q_{n+1} R_{n-1}(t) \leq F_{n+1}(0) + c_4 Q_{n+1}\frac{\bar d_{n-1}^2}{b_{n-1}}, \] hence, using the inequality for $Q_{n_{k_0}+m+1}$ already proved and the induction hypothesis, \[ \begin{aligned} \bar d_{n_{k_0}+m+1}^2 &\leq F_{n_{k_0}+m+1}(0) + c_4 Q_{n_{k_0}+m+1}\frac{\bar d_{n_{k_0}+m-1}^2}{b_{n_{k_0}+m-1}}\\ &\leq c\lambda^{-M(m+1)}\Bigl( \lambda^{M(n_{k_0}+m+1)}F_{n_{k_0}+m+1}(0) + \frac{c_4}\theta c'\lambda^{2M+\frac{k_0}4}\Bigr)\\ &\leq c \lambda^{-M(m+1)}, \end{aligned} \] where the last inequality follows if $k_0$ is large enough, since $\lambda^{Mn} F_n(0)\to0$ by assumption, and by our choice of $c$, $c'$, we have that $\lambda^{k_0/4}c'\to0$ as $k_0\to\infty$. \end{proof} \appendix \section{Local existence and uniqueness} \label{s:localexuniq} Consider the generalised system \eqref{e:glanse}, under the same assumptions as in Theorem~\ref{t:main2}. 
Assume\footnote{Existence and uniqueness can be proved also in the general case $m_1(k)\geq|k|^\alpha g(|k|)^{-1}$. A simple assumption that keeps our proof almost unchanged is a control from above, say $m_1(k)\leq |k|^\beta$, for some $\beta\geq\alpha$.}, for simplicity, that $m_1(k)=\frac{|k|^\alpha}{g(|k|)}$. Denote by $V_m$ the subspace of $H^m$ (see \eqref{e:sobolev}) of divergence free vector fields with mean zero. Our main theorem on local existence and uniqueness for \eqref{e:glanse} is as follows. \begin{theorem}\label{t:localexuniq} Let $m\geq 2+\frac{d}2$ and $v_0\in V_m$. Then there are $T>0$ and a unique solution $v$ of \eqref{e:glanse} on $[0,T]$ with initial condition $v_0$ such that \begin{equation}\label{e:leu_reg} \begin{gathered} v\in L^\infty([0,T];V_m)\cap\textup{Lip}([0,T];V_{m-\alpha}) \cap C([0,T];V_m^\textup{\tiny weak}),\\ \int_0^T \|D_1^{\frac12} v\|_m^2\,dt <\infty, \end{gathered} \end{equation} where $V_m^\textup{\tiny weak}$ is the space $V_m$ with the weak topology. Moreover, $v$ is right--continuous with values in $V_m$ for the strong topology. If $T_\star$ is the maximal time of existence of the solution started from $v_0$, then either $T_\star=\infty$ or \[ \limsup_{t\uparrow T_\star}\|v(t)\|_m = \infty. \] \end{theorem} The proof of the theorem is based on the proof of local existence and uniqueness for the Euler equation given in \cite[Section 3.2]{BerMaj2002}. The point is that we cannot use the operator $D_1$ as a replacement for the Laplacian, since in general $D_1$ may not have smoothing properties (indeed, it is easy to adapt the counterexample in \cite[Remark 15]{BarMorRom2014} to $D_1$ on $\mathbb{R}^d$ or on the $d$--dimensional torus). Likewise we do not use any smoothing properties of $D_2$, so that our proof includes the case $\beta=0$. The result is by no means optimal, but fits the needs of our paper. We work on the torus $[0,2\pi]^d$, although the proof, essentially unchanged, works in $\mathbb{R}^d$. 
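Throughout the appendix we work with divergence free, mean zero fields on the torus. As a purely illustrative aside (not part of the argument), the projection onto divergence free fields acts mode by mode in Fourier variables: each coefficient $\hat v(k)$ is sent to $\hat v(k)-k\,(k\cdot\hat v(k))/|k|^2$, and the $k=0$ mode is dropped by the mean zero condition. A minimal numerical sketch of this standard formula:

```python
def leray_project_mode(k, v_hat):
    """Project one (real-valued, for simplicity) Fourier coefficient
    v_hat onto the divergence-free subspace:
        v_hat - k * (k . v_hat) / |k|^2,
    dropping the k = 0 mode, which matches the mean-zero condition."""
    k2 = sum(ki * ki for ki in k)
    if k2 == 0:
        return [0.0] * len(v_hat)
    dot = sum(ki * vi for ki, vi in zip(k, v_hat))
    return [vi - ki * dot / k2 for ki, vi in zip(k, v_hat)]

# the projected mode is orthogonal to k (divergence-free),
# and projecting twice changes nothing (it is a projection)
k = [1.0, 2.0, -1.0]
w = leray_project_mode(k, [0.3, -1.1, 0.7])
assert abs(sum(ki * wi for ki, wi in zip(k, w))) < 1e-12
assert leray_project_mode(k, w) == w
```

The two assertions encode the defining properties of an orthogonal projection onto $\{v:\,k\cdot\hat v(k)=0\}$.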
Denote by $H$ the projection of $L^2([0,2\pi]^d)$ onto divergence free vector fields, and for every $s>0$, by $V_s$ the projection of the Sobolev space $H^s([0,2\pi]^d)$ onto divergence free vector fields. We will denote by $\|\cdot\|_H$ and by $\scalar{\cdot,\cdot}_H$ the norm and the scalar product in $H$, and by $\|\cdot\|_s$ and by $\scalar{\cdot,\cdot}_s$ the norm and the scalar product in $V_s$. We denote by $\hat B(v_1,v_2)$ the (Leray) projection of the non--linearity, namely \[ \hat B(v_1,v_2) = \Pi_\text{Leray}\bigl[\bigl(D_2^{-1}v_1\cdot\nabla\bigr) v_2\bigr]. \] Since $\beta\geq0$, $\|D_2^{-1}v\|_s\leq\|v\|_s$ for every $s\in\mathbb{R}$. Hence (see for instance \cite{Kat1972} or \cite{ConFoi1988}), for every $m\geq1+[\tfrac{d}2]$, there exists $c_m>0$ such that \[ \begin{gathered} \|\hat B(v_1,v_2)\|_m \leq c_m\|v_1\|_m\|v_2\|_{m+1},\\ \scalar{\hat B(v_1,v_2), v_2}_m \leq c_m \|v_1\|_m\|v_2\|_m^2. \end{gathered} \] In the rest of the section we briefly outline the proof of Theorem~\ref{t:localexuniq}, following \cite[Section 3.2]{BerMaj2002}. The proof of the following result is a slight modification of the arguments used to prove \cite[Theorem 3.4]{BerMaj2002}. \begin{proposition}\label{p:localex} Given an integer $m\geq 2 + \tfrac{d}2$, there exists a number $c_\star>0$ such that for every $v_0\in V_m$, if $T<c_\star/\|v_0\|_m$, there is a unique solution of \eqref{e:glanse} with initial condition $v_0$. Moreover $\vep\to v$ in $C([0,T];V_{m'})$, for $m'<m$, and in $C([0,T];V_m^\textup{\tiny weak})$, \eqref{e:leu_reg} holds for $v$, and for every $\epsilon>0$, \begin{equation}\label{e:bound} \sup_{[0,T]}\|\vep\|_m \leq \frac{\|v_0\|_m}{1 - c_\star T\|v_0\|_m}. \end{equation} \end{proposition} Unfortunately, at this stage, we cannot prove the analogue of Theorem 3.5 of \cite{BerMaj2002} for our $v$, namely that $v$ is continuous in time for the strong topology of $V_m$. 
The reason is that their proof uses either the reversibility of the Euler equation (which we do not have, due to the presence of $D_1$), or the smoothing of the Laplace operator, which we do not have here either (as already mentioned). On the other hand we can prove right--continuity. \begin{lemma} The solution $v$ from Proposition~\ref{p:localex} is right--continuous with values in $V_m$ for the strong topology, and $\frac{d}{dt}v$ is right--continuous with values in $V_{m-\alpha}$. \end{lemma} \begin{proof} Given $t\in [0,T]$, the same computations leading to \eqref{e:bound} yield \[ \sup_{s\in[0,t]}\|v(s)\|_m \leq \|v_0\|_m + \frac{c_\star t\|v_0\|_m^2}{1 - c_\star t\|v_0\|_m}, \] therefore $\limsup_{t\downarrow0}\|v(t)\|_m\leq \|v_0\|_m$. On the other hand, by weak continuity, $\|v_0\|_m\leq \liminf_{t\downarrow0}\|v(t)\|_m$ and $v$ is right--continuous at $0$. Uniqueness for \eqref{e:glanse} and the same argument applied to $t\in(0,T]$ yield right--continuity in $t$. \end{proof} Nevertheless, we can still define a maximal solution and a maximal time of existence. Given $v_0\in V_m$, let $T_\star$ be the maximal time of existence of the solution starting from $v_0$, that is the supremum over all $T>0$ such that there exists a solution $v$ of \eqref{e:glanse} on $[0,T]$ with $v(0)=v_0$, $v$ is right--continuous with values in $V_m$, continuous with values in $V_m^\textup{\tiny weak}$ and with $\frac{d}{dt}v$ right--continuous with values in $V_{m-\alpha}$. Due to uniqueness, any two such solutions coincide on the common interval of definition. \begin{proposition} Given $v_0\in V_m$, if $T_\star$ is the maximal time of existence of the solution started from $v_0$, then either $T_\star=\infty$ or \[ \limsup_{t\uparrow T_\star} \|v(t)\|_m = \infty. \] \end{proposition} \begin{proof} Assume by contradiction that $T_\star<\infty$ and that $M:=\sup_{t<T_\star}\|v(t)\|_m<\infty$. Let $T_0=T_\star-c_\star/(4M)$, and start a solution with initial condition $v(T_0)$ at time $T_0$. 
By Proposition~\ref{p:localex} there is a solution of \eqref{e:glanse} on a time span of length at least $c_\star/(2\|v(T_0)\|_m)\geq c_\star/(2M)$, hence at least up to time $T_0+c_\star/(2M)>T_\star$. By uniqueness, this solution coincides with $v$ up to time $T_\star$ and extends it beyond $T_\star$, contradicting the maximality of $T_\star$. \end{proof} \end{document}
\begin{document} \title[Menger algebras of $n$-place functions] {Menger algebras of $n$-place functions} \author{Wies{\l}aw A. Dudek} \address{Institute of Mathematics and Computer Science\\ Wroc{\l}aw University of Technology\\ Wybrze\.ze Wyspia\'nskiego 27 \\ 50-370 Wroc{\l}aw, Poland} \email{[email protected]} \author{Valentin S. Trokhimenko} \address{Department of Mathematics\\ Pedagogical University\\ 21100 Vinnitsa \\ Ukraine} \email{[email protected]} \begin{abstract} This is a survey of the main results on abstract characterizations of algebras of $n$-place functions obtained in the last $40$ years. Special attention is paid to those algebras of $n$-place functions which are strongly connected with groups and semigroups, and to algebras of functions closed with respect to natural relations defined on their domains. \end{abstract} \maketitle \centerline{\it{Dedicated to Professor K.P. Shum's 70th birthday}} \footnotetext{{\it 2010 Mathematics Subject Classification.} 20N15} \footnotetext{{\it Key words and phrases.} Menger algebra, algebra of multiplace functions, representation, group-like Menger algebra, diagonal semigroup} \section{Introduction} Every group is known to be isomorphic to some group of set substitutions, and every semigroup is isomorphic to some semigroup of set transformations. This accounts for the fact that group theory (and, consequently, semigroup theory) can be considered as an abstract study of groups of substitutions (respectively, of semigroups of transformations). Such an approach to these theories is explained, first of all, by their applications in geometry, functional analysis, quantum mechanics, etc. Although group theory and semigroup theory deal only with functions of one argument, a wider, but not less important class of functions remains outside their attention -- the functions of many arguments (in other words -- the class of multiplace functions). 
Multiplace functions are known to have various applications not only in mathematics itself (for example, in mathematical analysis), but are also widely used in the theory of many-valued logics, cybernetics and general systems theory. Various natural operations are considered on sets of multiplace functions. The main operation is the superposition, i.e., the operation which produces a new function by substituting some functions into another in place of its arguments. Although the algebraic theory of superposition of multiplace functions did not develop for a long period of time, K.~Menger drew attention to this abnormal state of affairs in the mid-1940s. He stated, in particular, that the superposition of $n$-place functions, where $n$ is a fixed natural number, has a property resembling associativity, which he called {\it super\-associativity } \cite{134,138,139}. As was found later, this property appeared to be fundamental in the sense that every set with a superassociative $(n+1)$-ary operation can be represented as a set of $n$-place functions with the operation of superposition. This fact was first proved by R.~M.~Dicker \cite{117} in 1963, and a particular case of this theorem was obtained by H.~Whitlock \cite{157} in 1964. The theory of algebras of multiplace functions, which now are called {\it Menger algebras} (if the number of variables of functions is fixed) or {\it Menger systems} (if the number of variables is arbitrary), has been studied by (in alphabetic order) M.~I.~Burtman \cite{6,8}, W.~A.~Dudek \cite{118} -- \cite{Dudtro4}, F.~A.~Gadzhiev \cite{18,18a,17}, L.~M.~Gluskin \cite{23}, Ja.~Henno \cite{91} -- \cite{119b}, F.~A.~Ismailov \cite{35b,35}, H.~L\"anger \cite{128,127,131a}, F.~Kh.~Muradov \cite{44,44f}, B.~M.~Schein \cite{108,schtr}, V.~S.~Trokhimenko (Trohimenko) \cite{61} -- \cite{76} and many others. The first survey on algebras of multiplace functions was prepared by B. M. 
Schein and V. S. Trokhimenko \cite{schtr} in 1979, and the first monograph (in Russian) by W. A. Dudek and V. S. Trokhimenko \cite{Dudtro}. An extended English version of this monograph was published in 2010 \cite{Dudtro4}. This survey is a continuation of the previous survey \cite{schtr} prepared in 1979 by B. M. Schein and V. S. Trokhimenko. \section{Basic definitions and notations} An {\it $n$-ary relation} (or an {\it $n$-relation}) between elements of the sets $A_1,A_2,\ldots,A_n$ is a subset $\rho $ of the Cartesian product $A_1\times A_2\times\ldots\times A_n$. If $A_1=A_2=\cdots=A_n$, then the $n$-relation $\rho$ is called {\it homogeneous}. Later on we shall deal with $(n+1)$-ary relations, i.e., the relations of the form $\rho\subset A_1\times A_2\times\ldots\times A_n\times B$. For convenience we shall consider such relations as binary relations of the form $\rho\subset (A_1\times A_2\times\ldots\times A_n)\times B$ \ or $\;\rho\subset\Big(\mathop{\LARGE\mbox{$\times$}}\nolimits\limits_{i=1}^{n}A_i\Big)\times B.$ In the case of a homogeneous $(n+1)$-ary relation we shall write $\rho\subset A^n\times A$ or $\rho\subset A^{n+1}.$ Let $\,\rho\subset\Big(\mathop{\LARGE\mbox{$\times$}}\nolimits\limits_{i=1}^{n}A_i\Big)\times B\,$ be an $(n+1)$-ary relation, $\bar a= (a_1,\ldots ,a_n)$ an element of $\,A_1\times A_2\times \ldots\times A_n$, $\,H_i\subset A_i$, $i\in\{1,\ldots ,n\}=\overline{1,n}$, then \ $\rho\langle\bar{a}\rangle=\{b\in B\;|\;(\bar{a},b)\in\rho\}$ and $$ \rho (H_1,\ldots ,H_n)=\bigcup\{\,\rho\langle\bar a\rangle\, |\;\bar a\in H_1 \times H_2\times\ldots\times H_n\}. 
$$ Moreover, let \begin{eqnarray*} && \mbox{pr}_{1}\rho =\{\bar a\in\mathop{\LARGE\mbox{$\times$}}\nolimits\limits_{i=1}^{n}A_i\;|\;(\exists b\in B)\, (\bar a,b)\in\rho\},\nonumber\\[3pt] && \mbox{pr}_{2}\rho =\{b\in B\;|\;(\exists \bar a \in\mathop{\LARGE\mbox{$\times$}}\nolimits\limits_{i=1}^{n}A_i)\, (\bar a,b)\in\rho\}.\nonumber \end{eqnarray*} To every sequence of $(n+1)$-relations $\,\sigma _1,\ldots ,\sigma _n, \rho\,$ such that $\,\sigma _i\subset A_1\times\ldots \times A_n\times B_i$, $\,i\in\overline{1,n}$,\, and $\,\rho\subset B_1\times\ldots\times B_n \times C$, we assign an $(n+1)$-ary relation $$ \rho [\sigma_1\ldots \sigma _n]\subset A_1\times\ldots\times A_n\times C, $$ which is defined as follows: $$ \rho [\sigma _1\ldots\sigma_n]=\{(\bar a,c)\;|\;(\exists \bar b\,)\, (\bar a,b_1)\in\sigma_1\;\&\,\ldots\,\&\, (\bar a,b_n)\in\sigma _n\;\&\,(\,\bar b,c)\in\rho\}, $$ where $\bar b =(b_1,\ldots ,b_n)\in B_1\times\ldots\times B_n$. Obviously: $$ \rho [\sigma _1\ldots\sigma _n](H_1,\ldots ,H_n)\subset \rho (\sigma _1(H_1,\ldots ,H_n),\ldots,\sigma _n(H_1,\ldots ,H_n)\,), $$ $$ \rho [\sigma _1\ldots\sigma _n][\chi _1\ldots \chi _n]\subset\rho [\sigma _1[\chi _1\ldots \chi _n]\ldots \sigma _n[\chi _1\ldots \chi _n]\,], $$ where\footnote{It is clear that the symbol $\,\rho [\sigma _1\ldots\sigma _n][\chi _1\ldots \chi _n]\,$ must be read as $\,\mu [\chi _1\ldots \chi _n]$, where $\,\mu=\rho [\sigma _1\ldots\sigma _n]$.} $\;\chi_i\subset A_1\times\ldots\times A_n\times B_i$, \ $\sigma _i\subset B_1\times\ldots\times B_n\times C_i$, \ $i=1,\ldots ,n$ and $\,\rho\subset C_1\times\ldots\times C_n\times D$. The $(n+1)$-operation $O:(\rho ,\sigma_1,\ldots ,\sigma _n)\mapsto \rho [\sigma _1\ldots\sigma _n]$ defined as above is called a {\it Menger superposition} or a {\it Menger composition} of relations. 
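For finite relations the Menger composition above can be computed directly from its definition. The following sketch (illustrative only, not part of the survey) stores an $(n+1)$-ary relation as a finite set of pairs $(\bar a, b)$:

```python
from itertools import product

def menger_composition(rho, sigmas):
    """Menger composition rho[sigma_1 ... sigma_n]: the pair (a, c)
    belongs to the result iff there exists b = (b_1, ..., b_n) with
    (a, b_i) in sigma_i for every i and (b, c) in rho.  Each relation
    is a finite set of pairs (input_tuple, output)."""
    result = set()
    inputs = {a for s in sigmas for (a, _) in s}
    for a in inputs:
        # possible values b_i that each sigma_i relates to the input a
        choices = [[b for (a2, b) in s if a2 == a] for s in sigmas]
        for b_bar in product(*choices):
            for (b2, c) in rho:
                if b2 == b_bar:
                    result.add((a, c))
    return result

# n = 2: sigma_1, sigma_2 feed their outputs into rho
sigma1 = {((0, 0), 1), ((0, 1), 1)}
sigma2 = {((0, 0), 0)}
rho = {((1, 0), 7)}
assert menger_composition(rho, [sigma1, sigma2]) == {((0, 0), 7)}
```

For $n=1$ this reduces to the usual composition of binary relations.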
Let $$ \stackrel{n}{\triangle }_{A}=\{(\underbrace{a,\ldots ,a}_{n})\,| \,a\in A\}, $$ then the homogeneous $(n+1)$-relation $\rho\subset A^{n+1}$ is called \begin{itemize} \item {\it reflexive } if \ $\stackrel{n+1}{\triangle }_{A}\subset\rho $, \item {\it transitive } if \ $\rho [\rho\ldots \rho]\subset \rho $, \item an $n$-{\it quasi-order} ({\it $n$-preorder}) if it is reflexive and transitive. For $n=2$ it is a {\it quasi-order}. \end{itemize} An $(n+1)$-ary relation $\rho\subset A^n\times B$ is an {\it $n$-place function} if it is one-valued, i.e., $$ (\forall \bar{a}\in A^n)(\forall b_1,b_2\in B)\,(\,(\bar{a},b_1)\in\rho\,\;\&\,\;(\bar{a},b_2)\in\rho\,\longrightarrow\, b_1=b_2). $$ Any mapping of a subset of $A^n$ into $B$ is a {\it partial $n$-place function}. The set of all such functions is denoted by ${\mathcal F}(A^n,B)$. The set of all {\it full $n$-place functions} on $A$, i.e., mappings defined for every $(a_1,\ldots,a_n)\in A^n$, is denoted by ${\mathcal T}(A^n,B)$. Elements of the set ${\mathcal T}(A^n,A)$ are also called {\it $n$-ary transformations of $A$}. Obviously ${\mathcal T}(A^n,B)\subset{\mathcal F}(A^n,B)$. Many authors use the term {\it $n$-ary operation} instead of full $n$-place function. The superposition of $n$-place functions is defined by \begin{equation}\label{1} f[g_1\ldots g_n](a_1,\ldots,a_n) = f(g_1(a_1,\ldots,a_n),\ldots,g_n(a_1,\ldots,a_n)), \end{equation} where $\,a_1,\ldots,a_n\in A$, $f,g_1,\ldots,g_n\in {\mathcal F}(A^n,A)$. This superposition is an $(n+1)$-ary operation $\,\mathcal O\,$ on the set ${\mathcal F}(A^n,A)\,$ determined by the formula $\,{\mathcal O}(f,g_1,\ldots,g_n) = f[g_1\ldots g_n]$. Sets of $n$-place functions closed with respect to such superposition are called {\it Menger algebras of $n$-place functions} or {\it $n$-ary Menger algebras of functions}. 
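As a concrete illustration (not part of the survey), the superposition (\ref{1}) satisfies Menger's superassociativity identity $f[g_1\ldots g_n][h_1\ldots h_n]=f[g_1[h_1\ldots h_n]\ldots g_n[h_1\ldots h_n]]$, which can be checked mechanically for full $n$-place functions on a small finite set:

```python
import random
from itertools import product

def superpose(f, gs):
    """Menger superposition f[g_1 ... g_n] of full n-place functions
    on a finite set, each encoded as a dict from n-tuples to elements:
    f[g_1 ... g_n](a) = f(g_1(a), ..., g_n(a))."""
    return {a: f[tuple(g[a] for g in gs)] for a in f}

# sanity check of superassociativity on random 2-place functions
# over the set A = {0, 1, 2}
random.seed(0)
A, n = [0, 1, 2], 2
keys = list(product(A, repeat=n))
rand_fn = lambda: {a: random.choice(A) for a in keys}
f = rand_fn()
gs = [rand_fn() for _ in range(n)]
hs = [rand_fn() for _ in range(n)]
lhs = superpose(superpose(f, gs), hs)
rhs = superpose(f, [superpose(g, hs) for g in gs])
assert lhs == rhs
```

The assertion holds for any choice of the random functions, since superassociativity is an identity of the superposition itself.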
According to the general convention used in the theory of $n$-ary systems, the sequence $\,a_i,a_{i+1},\ldots,a_j$, where $i\leqslant j$, \ can be written in the abbreviated form as $\,a_i^j$ \ (for \ $i>j$ \ it is the empty symbol). In this convention (\ref{1}) can be written as \[ f[g_1^n](a_1^n)=f(g_1(a_1^n),\ldots,g_n(a_1^n)). \] For $g_1=g_2=\ldots=g_n=g$ instead of $f[g_1^n]$ we will write $f[g^n]$. An $(n+1)$-ary groupoid $(G;o)$, i.e., a non-empty set $G$ with one $(n+1)$-ary operation $o:G^{n+1}\rightarrow G$, is called a {\it Menger algebra of rank $n$}, if it satisfies the following identity (called the {\it superassociativity}): \[ o(\,o(x,y_{1}^{n}),z_{1}^{n})=o(x,o(y_{1},z_{1}^{n}),o(y_{2},z_{1}^{n}),\ldots, o(y_{n},z_{1}^{n})). \] A Menger algebra of rank $1$ is a semigroup. Since a Menger algebra (as we see in the sequel) can be interpreted as an algebra of $n$-place functions with a Menger composition of such functions, we replace the symbol $o(x,y_{1}^{n})$ by $x[y_{1}^{n}]$ or by $x[\bar{y}]$, i.e., these two symbols will be interpreted as the result of the operation $o$ applied to the elements $x,y_{1},\ldots ,y_{n}\in G$. In this convention the above superassociativity has the form \[ x[y_{1}^{n}][z_{1}^{n}]=x[y_{1}[z_{1}^{n}]\ldots y_{n}[z_{1}^{n}]] \] or shortly \[ x[\bar{y}][\bar{z}]=x[y_{1}[\bar{z}]\ldots y_{n}[\bar{z}]], \] where the left side can be read as in the case of functions, i.e., $x[y_{1}^{n}][z_{1}^{n}]=\big(x[y_1^n]\big)\,[z_1^n]$. \begin{theorem}\label{T21.2} {\bf (R. M. Dicker, \cite{117})}\newline Every Menger algebra of rank $n$ is isomorphic to some Menger algebra of full $n$-place functions. 
\end{theorem} Indeed, to every element $g$ of a Menger algebra $(G;o)$ of rank $n$ we can assign the full $n$-place function $\lambda_{g}$ defined on the set $G\,^{\prime }=G\cup \{a,b\}$, where $a$, $b$ are two different elements not belonging to $G$, such that \[ \lambda _{g}(x_{1}^{n})= \left\{\begin{array}{cl} g[x_{1}^{n}]&{\rm if } \ \ x_{1},\ldots ,x_{n}\in G, \\ g&{\rm if } \ \ x_{1}=\cdots =x_{n}=a, \\ b& {\rm in\; all\; other\; cases.} \end{array}\right. \] Using the set $G^{\prime\prime}=G\cup\{a\}$, where $a\not\in G$, and the partial $n$-place functions \[ \lambda _{g}^{\prime }(x_{1}^{n})= \left\{\begin{array}{cl} g[x_{1}^{n}]&{\rm if } \ \ x_{1},\ldots ,x_{n}\in G, \\ g&{\rm if } \ \ x_{1}=\cdots =x_{n}=a, \end{array}\right. \] we can see that every Menger algebra of rank $n$ is isomorphic to a Menger algebra of partial $n$-place functions too \cite{108}. \begin{theorem} {\bf (J. Henno, \cite{97})}\newline Every finite or countable Menger algebra of rank $n>1$ can be isomorphically embedded into a Menger algebra of the same rank generated by a single element. \end{theorem} A Menger algebra $(G;o)$ containing {\it selectors}, i.e., elements $e_{1},\ldots ,e_{n}\in G$ such that \[ x[e_{1}^{n}]=x \ \ \ \ {\rm and } \ \ \ \ e_{i}[x_{1}^{n}]=x_{i} \] for all $x,x_{i}\in G,$ $i=1,\ldots ,n$, is called {\it unitary}. \begin{theorem} {\bf (V.S. Trokhimenko, \cite{61})}\newline Every Menger algebra $(G;o)$ of rank $n$ can be isomorphically embedded into a unitary Menger algebra $(G^{\ast};o^{\ast })$ of the same rank with selectors $e_{1},\ldots ,e_{n}$ and a generating set $G\cup\{e_{1},\ldots ,e_{n}\}$, where $e_{i}\not\in G$ for all $i\in\overline{1,n}$. \end{theorem} J. Hion \cite{100} and J. Henno \cite{98} have proven that Menger algebras with selectors can be identified with some set of multiplace endomorphisms of a universal algebra. Moreover E. 
Redi proved in \cite{45} that a Menger algebra is isomorphic to the set of all multiplace endomorphisms of some universal algebra if and only if it contains all selectors. W. N\"obauer and W. Philipp considered in \cite{140} the set of all one-place mappings of a fixed universal algebra into itself with the Menger composition and proved that for $n>1$ this algebra is simple in the sense that it possesses no congruences other than the equality and the universal relation \cite{142}. \section{Semigroups} The close connection between semigroups and Menger algebras was established already in 1966 by B.~M. Schein in his work \cite{108}. He found another type of semigroups, which determine Menger algebras in a simple way. Thus there is the possibility of studying semigroups of this type instead of Menger algebras. But the study of these semigroups is quite difficult, which is why in many questions it is more advisable to study Menger algebras directly rather than to replace them by the study of such semigroups. \begin{definition}\rm Let $(G;o)$ be a Menger algebra of rank $n$. The set $G^{n}$ together with the binary operation $\ast$ defined by the formula: \[ (x_{1},\ldots,x_{n})\ast(y_{1},\ldots,y_{n})=(x_{1}[y_{1}\ldots y_{n}],\ldots,x_{n}[y_{1}\ldots y_{n}]) \] is called the \textit{binary comitant} of a Menger algebra $(G;o)$. \end{definition} An $(n+1)$-ary operation $o$ is superassociative if and only if the operation $\ast$ is associative \cite{108}. \begin{theorem}{\bf (B. M. Schein, \cite{schtr})}\newline The binary comitant of a Menger algebra is a group if and only if the algebra is of rank $1$ and is a group or the algebra is a singleton. \end{theorem} It is evident that binary comitants of isomorphic Menger algebras are isomorphic. However, as it was mentioned in \cite{108}, from the isomorphism of binary comitants in the general case the isomorphism of the corresponding Menger algebras does not follow. 
This fact naturally leads to the consideration of binary comitants equipped with some additional structure such that an isomorphism of the enriched systems implies an isomorphism of the initial Menger algebras. L.~M.~Gluskin observed (see \cite{23} and \cite{24}) that the sets \[ M_{1}[G]=\{\bar{c}\in G^n \,|\,x[\bar{y}][\bar{c}]=x[\bar{y}\ast\bar{c}\,]\;\;{\rm for\ all}\ x\in G,\;\bar{y}\in G^n \} \] and \[ M_{2}[G]=\{\bar{c}\in G^n \,|\,x[\bar{c}][\bar{y}]=x[\bar{c}\ast\bar{y}\,]\;\;{\rm for\ all}\ x\in G,\;\bar{y}\in G^n \} \] are either empty or subsemigroups of the binary comitant $(G^{n};\ast)$. The set \[ M_{3}[G]=\{a\in G \,|\,a[\bar{x}][\bar{y}]=a[\bar{x}\ast\bar{y}\,]\;\;{\rm for\ all}\ \bar{x},\bar{y}\in G^n \} \] is either empty or a Menger subalgebra of $(G;o)$. Let us define on the binary comitant $(G^{n};\ast)$ the equivalence relations $\pi_{1},\ldots,\pi_{n}$ by putting \[ (x_{1},\ldots,x_{n})\equiv(y_{1},\ldots,y_{n})(\pi_{i})\longleftrightarrow x_{i}=y_{i} \] for all $x_{i},y_{i}\in G,$ \ $i\in\overline{1,n}$.
It is easy to check that these relations have the following properties: \begin{itemize} \item[$(a)$] for any elements $\bar{x}_{1},\ldots,\bar{x}_{n}\in G^{n}$ there is an element $\bar{y}\in G^{n}$ such that $\bar{x}_{i}\equiv \bar{y}(\pi_{i})$ for all $i\in\overline{1,n}$, \item[$(b)$] if $\bar{x}\equiv\bar{y}(\pi_{i})$ for all $i\in\overline{1,n}$, then $\bar{x}=\bar{y}$, where $\bar{x},\bar{y}\in G^{n}$, \item[$(c)$] relations $\pi_{i}$ are {\it right regular}, i.e., $$ \bar{x}_{1}\equiv\bar{x}_{2}(\pi_{i})\longrightarrow\bar{x}_{1}\ast\bar{y}\equiv \bar {x}_{2}\ast\bar{y}(\pi_{i}) $$ for all $\bar{x}_{1},\bar{x}_{2},\bar{y}\in G^{n}$, \ $i\in\overline{1,n}$, \item[$(d)$] the diagonal $\stackrel{n}{\triangle}_{G}=\{(x,\ldots,x)\,|\,x\in G\}$ is a {\it right ideal} of $(G^{n};\ast)$, i.e., $$ \bar{x}\in\,\stackrel{n}{\triangle}_{G}\,\wedge\;\bar{y}\in G^{n}\longrightarrow\bar{x}\ast\bar{y}\in\,\stackrel{n}{\triangle}_{G}, $$ \item[$(e)$] for any $i\in\overline{1,n}$ every $\pi_{i}$-class contains precisely one element from $\stackrel{n}{\triangle}_{G}$. \end{itemize} The system $(G^{n};\ast,\pi_{1},\ldots,\pi_{n},\stackrel{n }{\triangle}_{G})$ is called the \textit{rigged binary comitant of the Menger algebra $(G;o)$}. \begin{theorem} Two Menger algebras of the same rank are isomorphic if and only if their rigged binary comitants are isomorphic. \end{theorem} This fact is a consequence of the following more general theorem proved in \cite{108}. \begin{theorem}\label{T25.1}{\bf (B. M.
Schein, \cite{108})}\newline The system $(G;\cdot,\varepsilon_{1},\ldots,\varepsilon_{n},H)$, where $(G;\cdot)$ is a semigroup, $\varepsilon_{1},\ldots,\varepsilon_{n}$ are binary relations on $G$ and $H\subset G$, is isomorphic to the rigged binary comitant of some Menger algebra of rank $n$ if and only if \begin{itemize} \item[$1)$] for all $i=1,\ldots,n$ the relations $\varepsilon_{i}$ are right regular equivalence relations and for any $(g_{1},\ldots,g_{n})\in G^{n}$ there is exactly one $g\in G$ such that $g_{i}\equiv g(\varepsilon_{i})$ for all $i\in\overline{1,n}$, \item[$2)$] $H$ is a right ideal of the semigroup $(G;\cdot)$ and for any $i\in\overline{1,n}$ every $\varepsilon_{i}$-class contains exactly one element of $H$. \end{itemize} \end{theorem} In the literature a system of the type $(G;\cdot,\varepsilon_{1},\ldots,\varepsilon _{n},H)$ satisfying all the conditions of the above theorem is called a \textit{Menger semigroup of rank $n$}. Of course, a Menger semigroup of rank 1 coincides with the semigroup $(G;\cdot)$. Thus the theory of Menger algebras can, in principle, be completely reduced to the theory of Menger semigroups, but we will not use this fact. It is possible that in some cases it would be advisable to consider Menger algebras and in others -- Menger semigroups. Nevertheless, in our opinion, the study of Menger semigroups is more complicated than the study of Menger algebras with one $(n+1)$-ary operation, because a Menger semigroup, besides one binary operation, contains $n+1$ relations, which naturally leads to additional difficulties. Let $(G;o)$ be a Menger algebra of rank $n$. Let us define on its binary comitant $(G\,^{n};\ast)$ the unary operations $\rho_{1},\ldots,\rho_{n}$ such that \[ \rho_{i}(x_{1},\ldots,x_{n})=(x_{i},\ldots,x_{i}) \] for any $x_{1},\ldots,x_{n}\in G$, $i\in\overline{1,n}$. The system $(G^{n};\ast,\rho_{1},\ldots,\rho_{n})$ obtained in this way is called the \textit{selective binary comitant} of the Menger algebra $(G;o)$.
\begin{theorem}\label{T25.2}{\bf (B. M. Schein, \cite{108})}\newline For a system $(G;\cdot,p_{1},\ldots,p_{n})$, where $(G;\cdot )$ is a semigroup and $p_{1},\ldots,p_{n}$ are unary operations on it, a necessary and sufficient condition that this system be isomorphic to the selective binary comitant of some Menger algebra of rank $n$ is that the following conditions hold: \begin{itemize} \item[$1)$] $p_{i}(x)y=p_{i}(xy)$ for all $i\in\overline{1,n}$ and $x,y\in G$, \item[$2)$] $p_{i}\circ p_{j}=p_{j}$ for all $i,j\in\overline{1,n}$, \item[$3)$] for every vector $(x_{1},\ldots,x_{n})\in G^{n}$ there is exactly one $g\in G$ such that $p_{i}(x_{i})=p_{i}(g)$ for all $i\in\overline{1,n}$. \end{itemize} \end{theorem} A system $(G;\cdot,p_{1},\ldots,p_{n})$ satisfying the conditions of this theorem is called a \textit{selective semigroup of rank} $n$. \begin{theorem} {\bf (B. M. Schein, \cite{108})}\newline For every selective semigroup of rank $n$ there exists a Menger algebra of the same rank with which the selective semigroup is associated. This Menger algebra is unique up to isomorphism. \end{theorem} These two theorems make it possible to reduce the theory of Menger algebras to the theory of selective semigroups. In this way, we have three independent approaches to the study of the superposition of multiplace functions: Menger algebras, Menger semigroups and selective semigroups. A great number of papers dedicated to the study of Menger algebras have appeared lately, but unfortunately the same cannot be said about Menger semigroups and selective semigroups. Defining on a Menger algebra $(G;o)$ the new binary operation \[ x\cdot y=x[y\ldots y], \] we obtain the so-called \textit{diagonal semigroup} $(G;\cdot )$. An element $g\in G$ is called \textit{idempotent} if it is idempotent in the diagonal semigroup of $(G;o)$, i.e., if $g[g^n]=g$.
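To make the diagonal semigroup concrete, here is a small Python sketch (our own example, on an arbitrary two-element base set): it checks that $x\cdot y = x[y\,y]$ is associative in the model of all full $2$-place functions and counts the idempotents $g[g\,g]=g$.

```python
from itertools import product

# Diagonal semigroup of the Menger algebra of all full 2-place
# functions on A = {0, 1}: the product is x . y = x[y y].
A = (0, 1)
n = 2

def compose(x, ys):
    """Menger composition x[y_1 ... y_n]."""
    return {a: x[tuple(y[a] for y in ys)] for a in product(A, repeat=n)}

def dot(x, y):
    """Diagonal product x . y = x[y ... y]."""
    return compose(x, (y,) * n)

keys = list(product(A, repeat=n))
funcs = [dict(zip(keys, vals)) for vals in product(A, repeat=len(keys))]

# associativity of the diagonal product follows from superassociativity
for x, y, z in product(funcs[:6], repeat=3):
    assert dot(dot(x, y), z) == dot(x, dot(y, z))

# idempotents of the diagonal semigroup: g[g g] = g
idem = [g for g in funcs if dot(g, g) == g]
print(len(idem), "idempotents among", len(funcs), "functions")
```

In this model a function $g$ is idempotent exactly when $g(v,v)=v$ for every value $v$ in its image, which is easy to confirm by inspecting the list `idem`.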
An element $e\in G$ is called a \textit{left $($right$)$ diagonal unit} of a Menger algebra $(G;o)$ if it is a left (right) unit of the diagonal semigroup of $(G;o)$, i.e., if the identity $\,e[g^n]=g\,$ (respectively, $g[e^n]=g$) holds for all $g\in G$. If $e$ is both a left and a right diagonal unit, then it is called a {\it diagonal unit}. It is clear that a Menger algebra has at most one diagonal unit. Moreover, if a Menger algebra has an element which is a left diagonal unit and an element which is a right diagonal unit, then these elements coincide and there are no other left or right diagonal units. An $(n+1)$-ary groupoid $(G;o)$ with the operation $o(x_{0}^{n})=x_{0}$ is a simple example of a Menger algebra of rank $n$ in which all elements are idempotent and right diagonal units. Of course, this algebra has no left diagonal units. In the Menger algebra $(G;o_n)$, where $o_n(x_{0}^{n})=x_{n}$, all elements are left diagonal units, but this algebra has no right diagonal units. If a Menger algebra $(G;o)$ has a right diagonal unit $e$, then every element $c\in G$ satisfying the identity $e=e[c^n]$ is also a right diagonal unit. Non-isomorphic Menger algebras may have the same diagonal semigroup. Examples are given in \cite{118}. \begin{theorem} A semigroup $(G;\cdot)$ is a diagonal semigroup of some Menger algebra of rank $n$ if and only if on $G$ one can define an idempotent $n$-ary operation $f$ satisfying the identity $$f(g_{1},g_{2},\ldots,g_{n})\cdot g = f(g_{1}\cdot g,g_{2}\cdot g,\ldots,g_{n}\cdot g).$$ \end{theorem} The operation of the diagonal semigroup is in some sense distributive with respect to the Menger composition. Namely, for all $x,y,z_{1},\ldots,z_{n}\in G$ we have \[(x\cdot y)[\bar{z}]= x\cdot y[\bar{z}] \ \ \ \ {\rm and } \ \ \ \ x[\bar{z}]\cdot y = x[(z_{1}\cdot y)\ldots (z_{n}\cdot y)].\] In some cases Menger algebras can be completely described by their diagonal semigroups.
Such a situation occurs, for example, in the case of algebras of closure operations. \begin{theorem}{\bf (V. S. Trokhimenko, \cite{Trokhim1})}\newline A Menger algebra $(G;o)$ of rank $n$ is isomorphic to some algebra of $n$-place closure operations on some ordered set if and only if its diagonal semigroup $(G;\cdot)$ is a semilattice and \[ x[y_{1}\ldots y_{n}]=x\cdot y_{1}\cdot\ldots\cdot y_{n} \] for any $x,y_{1},\ldots,y_{n}\in G$. \end{theorem} A non-empty subset $H$ of a Menger algebra $(G;o)$ of rank $n$ is called \begin{itemize} \item an {\it $s$-ideal\,} if \ $h[x_1^n]\in H$, \item a {\it $v$-ideal\,} if \ $x[h_1^n]\in H$, \item an {\it $l$-ideal\,} if \ $x[x_1^{i-1},h_i,x_{i+1}^n]\in H$ \end{itemize} for all $x,x_1,\ldots,x_n\in G$, $h,h_1,\ldots,h_n\in H$ and $i\in\overline{1,n}$. An $s$-ideal which is a $v$-ideal is called an {\it $sv$-ideal}. A Menger algebra is {\it $s$-$v$-simple} if it possesses no proper $s$-ideals and $v$-ideals, and {\it completely simple} if it possesses minimal $s$-ideals and $v$-ideals but has no proper $sv$-ideals. \begin{theorem} {\bf (Ja. N. Yaroker, \cite{114})}\newline A Menger algebra is completely simple $(s$-$v$-simple$)$ if and only if its diagonal semigroup is completely simple $($a group$)$. \end{theorem} \begin{theorem} {\bf (Ja. N. Yaroker, \cite{114})}\newline A completely simple Menger algebra of rank $n$ can be decomposed into a disjoint union of $s$-$v$-simple Menger subalgebras with isomorphic diagonal groups. \end{theorem} Subalgebras obtained in this decomposition are classes modulo some equivalence relation. This decomposition makes it possible to study connections between isomorphisms of completely simple Menger algebras and isomorphisms of their diagonal semigroups (for details see \cite{114}). If for $\bar{g}\in G^n$ there exists $x\in G$ such that $g_{i}[x^n][\bar{g}]=g_{i}$ holds for each $i\in\overline{1,n}$, then we say that $\bar{g}$ is a {\it $v$-regular} vector.
The diagonal semigroup of a Menger algebra in which each vector is $v$-regular is a regular semigroup \cite{Trokhim2}. An element $x\in G$ is an \textit{inverse element} for $\bar{g}\in G^{n}$ if\ $x[\bar{g}][x^n]=x$\ and\ $g_{i}[x^n][\bar{g}]=g_{i}$ for all $i\in\overline{1,n}$. Every $v$-regular vector has an inverse element \cite{Trokhim2}. Moreover, if each vector of a Menger algebra $(G;o)$ is $v$-regular and any two elements of $(G;o)$ commute in the diagonal semigroup of $(G;o)$, then this semigroup is inverse. \begin{theorem}\label{T22.5} {\bf (V. S. Trokhimenko, \cite{Trokhim2})}\newline If in a Menger algebra $(G;o)$ each vector is $v$-regular, then the diagonal semigroup of $(G;o)$ is a group if and only if $(G;o)$ has only one idempotent. \end{theorem} \section{Group-like Menger algebras} \setcounter{equation}{0} A Menger algebra $(G;o)$ of rank $n$ in which the following two equations \begin{equation} x[a_{1}\ldots a_{n}]=b,\label{sol-1} \end{equation} \begin{equation} a_{0}[a_{1}\ldots a_{i-1}x_{i}a_{i+1}\ldots a_{n}]=b\label{sol-i} \end{equation} have unique solutions $x,x_{i}\in G$ for all $a_{0},a_{1}^n,b\in G$ and some fixed $i\in\overline{1,n}$ is called {\it $i$-solvable}. A Menger algebra which is $i$-solvable for every $i\in\overline{1,n}$ is called {\it group-like}. Such an algebra is an $(n+1)$-ary quasigroup. It is associative (in the sense of $n$-ary operations) only in some trivial cases (see \cite{Dud1, Dud2, Dud3}). A simple example of a group-like Menger algebra is the set of real functions of the form \[ f_{\alpha}(x_{1},\ldots,x_{n})=\frac{x_{1}+\cdots+x_{n}}{n}+\alpha. \] Every group-like Menger algebra of rank $n$ is isomorphic to some group-like Menger algebra of {\it reversive} $n$-place functions, i.e., a Menger algebra of functions $f\in\mathcal{F}(G^n,G)$ with the property $$ f(x_1^{i-1},y,x_{i+1}^n)= f(x_1^{i-1},z,x_{i+1}^n)\, \longrightarrow\, y=z $$ for all $x_1^n,y,z\in G$ and $i=1,\ldots,n$.
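The real-function example above is easy to probe numerically. In the sketch below (our own check; the parameter values are arbitrary), composing $f_{\alpha}$ with $f_{\beta_{1}},\ldots,f_{\beta_{n}}$ merely shifts the parameter to $\alpha+(\beta_{1}+\cdots+\beta_{n})/n$, which is why the defining equations of solvability always have unique solutions.

```python
import random

# f_alpha(x_1, ..., x_n) = (x_1 + ... + x_n)/n + alpha
n = 3

def f(alpha):
    return lambda *xs: sum(xs) / n + alpha

def composed_param(alpha, betas):
    # parameter of the composition f_alpha[f_{beta_1} ... f_{beta_n}]
    return alpha + sum(betas) / n

random.seed(0)
alpha, betas = 0.5, (1.0, -2.0, 4.0)
g = f(composed_param(alpha, betas))
for _ in range(100):
    xs = [random.uniform(-5, 5) for _ in range(n)]
    lhs = f(alpha)(*(f(b)(*xs) for b in betas))
    assert abs(lhs - g(*xs)) < 1e-9

# solving x[a_1 ... a_n] = b for x is a linear equation in the parameter:
b_param, a = 2.0, (0.3, -1.1, 0.8)
x_param = b_param - sum(a) / n
assert abs(composed_param(x_param, a) - b_param) < 1e-12
```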
The investigation of group-like Menger algebras was initiated by H. Skala \cite{Sk}, and the study of $i$-solvable Menger algebras by W. A. Dudek (see \cite{Dud1}, \cite{Dud2} and \cite{118}). A simple example of an $i$-solvable Menger algebra is the Menger algebra $(G;o_{i})$ with the $(n+1)$-ary operation $o_{i}(x_{0},x_{1},\ldots,x_{n})=x_{0}+x_{i}$ defined on a non-trivial commutative group $(G;+)$. It is obvious that the diagonal semigroup of this Menger algebra coincides with the group $(G;+)$. This algebra is $j$-solvable only for $j=i$, but the algebra $(G;o)$, where $o(x_{0},x_{1},\ldots,x_{n})=x_{0}+x_{1}+\ldots +x_{k+1}$ and $(G;+)$ is a commutative group of exponent $k\leqslant n-1$, is $i$-solvable for every $i=1,\ldots,k+1$. Its diagonal semigroup also coincides with $(G;+)$. Note that in the definition of $i$-solvable Menger algebras one can postulate the existence of solutions of \eqref{sol-1} and \eqref{sol-i} for all $a_{0},\ldots, a_{n}\in G$ and some fixed $b\in G$ (see \cite{118}). The uniqueness of solutions cannot be dropped in general, but it can be omitted in the case of finite algebras. \begin{theorem}{\bf (W. A. Dudek, \cite{118})}\label{D-T13}\newline A finite Menger algebra $(G;o)$ of rank $n$ is $i$-solvable if and only if it has a left diagonal unit $e$ and for all $a_{0},a_{1},\ldots,a_{n}\in G$ there exist $x,y\in G$ such that \[ x[a_{1}\ldots a_{n}]=a_{0}[a_{1}\ldots a_{i-1}y\,a_{i+1}\ldots a_{n}]=e. \] \end{theorem} The diagonal semigroup of an $i$-solvable Menger algebra is a group \cite{Dud1}. The question when a given group is a diagonal group of some Menger algebra is solved by the following two theorems proved in \cite{118}.
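The algebra $(G;o_{i})$ above can be examined by brute force. The following sketch (ours; the modulus $5$, rank $3$ and $i=2$ are arbitrary choices) verifies superassociativity of $o_{i}$ on $\mathbb{Z}_{5}$ and the unique solvability in the $i$-th place.

```python
from itertools import product

# o_i(x0, x1, ..., xn) = x0 + x_i on the group Z_m
m, n, i = 5, 3, 2

def o(x0, xs):
    return (x0 + xs[i - 1]) % m

G = range(m)

# superassociativity: (x[ys])[zs] == x[y1[zs] ... yn[zs]]
for x, ys, zs in product(G, product(G, repeat=n), product(G, repeat=n)):
    assert o(o(x, ys), zs) == o(x, tuple(o(y, zs) for y in ys))

# i-solvability: a0[a1 ... x ... an] = b has a unique solution in place i
a0, b = 4, 1
sols = [x for x in G if o(a0, (0, x, 0)) == b]
assert sols == [2]
```

Repeating the last check with the solution placed in a position $j\neq i$ shows that this algebra is $j$-solvable only for $j=i$, as stated above.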
\begin{theorem}\label{CD-P1} A group $(G;\cdot)$ is a diagonal group of some $i$-solvable Menger algebra if and only if on $G$ one can define an $n$-ary idempotent operation $f$ such that the equation $ f(a_{1}^{i-1},x,a_{i+1}^{n})=b $ has a unique solution for all $a_{1}^{n},b\in G$ and the identity $$ f(g_{1},g_{2},\ldots,g_{n})\cdot g = f(g_{1}\cdot g,g_{2}\cdot g,\ldots,g_{n}\cdot g) $$ is satisfied. \end{theorem} \begin{theorem}\label{P23.3a} A group $(G;\cdot)$ is the diagonal group of some $n$-solvable Menger algebra of rank $n$ if and only if on $G$ one can define an $(n-1)$-ary operation $f$ such that $f(e,\ldots,e)=e$, where $e$ is the unit of the group $(G;\cdot)$, and the equation \begin{equation} f(a_{1}\cdot x,\ldots,a_{n-1}\cdot x)=a_{n}\cdot x\label{23.111} \end{equation} has a unique solution $x\in G$ for all $a_{1},\ldots,a_{n}\in G$. \end{theorem} On the diagonal semigroup $(G;\cdot)$ of a Menger algebra $(G;o)$ of rank $n$ with the diagonal unit $e$ we can define a new $(n-1)$-ary operation $f$ by putting \[f(a_{1},\ldots,a_{n-1})=e[a_{1}\ldots a_{n-1}e] \] for all $a_{1},\ldots,a_{n-1}\in G$. The diagonal semigroup with the operation $f$ so defined is called a \textit{rigged diagonal semigroup} of $(G;o)$. In the case when $(G;\cdot)$ is a group, the operation $f$ satisfies the condition \begin{equation} a[a_{1}\ldots a_{n}]=a\cdot f(a_{1}\cdot a_{n}^{-1},\ldots,a_{n-1}\cdot a_{n}^{-1})\cdot a_{n}, \label{23.11} \end{equation} where $a_{n}^{-1}$ is the inverse of $a_{n}$ in the group $(G;\cdot)$. \begin{theorem}{\bf (W. A. Dudek, \cite{118})}\newline A Menger algebra $(G;o)$ of rank $n$ is $i$-solvable for some $1\leqslant i<n$ if and only if in its rigged diagonal group $(G;\cdot,f)$ the equation \[ f(a_{1},\ldots,a_{i-1},x,a_{i+1},\ldots,a_{n-1})=a_{n} \] has a unique solution $x\in G$ for all $a_{1},\ldots,a_{n}\in G$. \end{theorem} \begin{theorem}{\bf (W. A.
Dudek, \cite{118})}\newline A Menger algebra $(G;o)$ of rank $n$ is $n$-solvable if and only if in its rigged diagonal group $(G;\cdot,f)$ for all $a_{1},\ldots,a_{n}\in G$ there exists exactly one element $x\in G$ satisfying \eqref{23.111}. \end{theorem} \begin{theorem}{\bf (B. M. Schein, \cite{schtr})}\label{T23.3} \newline A Menger algebra $(G;o)$ of rank $n$ is group-like if and only if its diagonal semigroup $(G;\cdot)$ is a group with the unit $e$, the operation $f(a_{1}^{n-1})=e[a_{1}^{n-1}e]$ is a quasigroup operation and for all $a_{1},\ldots,a_{n}\in G$ there exists exactly one $x\in G$ satisfying the equation \eqref{23.111}. \end{theorem} Non-isomorphic group-like Menger algebras may have the same diagonal group \cite{Sk}, but group-like Menger algebras of the same rank are isomorphic if and only if their rigged diagonal groups are isomorphic. In \cite{Sk} conditions are considered under which a given group is a diagonal group of a group-like Menger algebra of rank $n$. For an odd $n$ and a finite group this is always the case. However, if both $n$ and the order of a finite group are even, then a group-like Menger algebra of rank $n$ whose diagonal group is isomorphic to the given group need not exist. If $n$ is even, then such an algebra exists only for finite orders not of the form $2p$, where $p$ is an odd prime. There are no group-like Menger algebras of rank $2$ and finite order $2p$. The existence of group-like Menger algebras of order $2p$ and even rank $n$ greater than $2$ is still undecided.
In the case when $P$ is an isomorphism we say that this representation is {\it faithful}. If $P$ and $P_{i},\,i\in I,$ are representations of $(G;o)$ by functions from ${\mathcal{F}}(A^{n},A)$ (relations from $\mathfrak{R}(A^{n+1})$) and $P(g)=\bigcup_{i\in I}P_{i}(g)$ for every $g\in G$, then we say that $P$ is the \textit{union} of the family $(P_{i})_{i\in I}$. If $A=\bigcup_{i\in I}A_{i}$, where $A_{i}$ are pairwise disjoint, then the union $\bigcup_{i\in I}P_{i}(g)$ is called the \textit{sum} of $(P_{i})_{i\in I}$. \begin{definition}\rm A {\it determining pair} of a Menger algebra $(G;o)$ of rank $n$ is any pair $(\varepsilon,W)$, where $\varepsilon$ is a partial equivalence on $(G^{\ast};o^{\ast})$, $W$ is a subset of $G^{\ast}$ and the following conditions hold: \begin{itemize} \item[$1)$] $G\cup\{e_{1},\ldots,e_{n}\}\subset{\rm pr}_{1}\varepsilon$, where $e_{1},\ldots,e_{n}$ are the selectors of $(G^{\ast};o^{\ast})$, \item[$2)$] $e_{i}\not\in W$ for all $i=1,\ldots,n$, \item[$3)$] $g[\varepsilon\langle e_{1}\rangle\ldots\varepsilon\langle e_{n}\rangle]\subset\varepsilon\langle g\rangle$ for all $g\in G$, \item[$4)$] $g[\varepsilon\langle g_{1}\rangle\ldots\varepsilon\langle g_{n}\rangle]\subset\varepsilon\langle g[g_1\ldots g_n]\rangle$ for all $g,g_1,\ldots,g_n\in G$, \item[$5)$] if $W\not =\varnothing$, then $W$ is an $\varepsilon$-class and $W\cap G$ is an $l$-ideal of $(G;o)$. \end{itemize} \end{definition} Let $(H_{a})_{a\in A}$ be the family of all $\varepsilon$-classes indexed by elements of $A$ and distinct from $W$. Let $e_{i}\in H_{b_{i}}$ for every $i=1,\ldots,n$,\ $A_{0}=\{a\in A\,|\,H_{a}\cap G\not =\varnothing\}$,\ $\mathfrak{A}=A_{0}^{n}\cup\{(b_{1},\ldots,b_{n})\}$,\ $B=G^{n}\cup \{(e_{1},\ldots,e_{n})\}$.
Every $g\in G$ is associated with an $n$-place function $P_{(\varepsilon,W)}(g)$ on $A$, which is defined by \[ (\bar{a},b)\in P_{(\varepsilon,W)}(g)\longleftrightarrow\bar{a}\in \mathfrak{A}\,\wedge\, g[H_{a_{1}}\ldots H_{a_{n}}]\subset H_{b}, \] where $\bar{a}\in A^{n},$\ $b\in A$. The representation $P_{(\varepsilon,W)}:g\to P_{(\varepsilon,W)}(g)$ is called the {\it simplest representation} of $(G;o)$. \begin{theorem}\label{T1.4.2} Every representation of a Menger algebra of rank $n$ by $n$-place functions is the union of some family of its simplest representations. \end{theorem} The proof of this theorem can be found in \cite{Dudtro4} and \cite{Dudtro4a}. With every representation $P$ of $(G;o)$ by $n$-place functions ($(n+1)$-ary relations) we associate the following binary relations on $G$: \[\begin{array}{lll} & \zeta_{P}=\{(g_{1},g_{2})\,|\,P(g_{1})\subset P(g_{2})\}, \\[4pt] & \chi_{P}=\{(g_{1},g_{2})\,|\,{\rm pr}_{1}P(g_{1})\subset{\rm pr} _{1}P(g_{2})\}, \\[4pt] & \pi_{P}=\{(g_{1},g_{2})\,|\,{\rm pr}_{1}P(g_{1})={\rm pr}_{1}P(g_{2})\}, \\[4pt] & \gamma_{P}=\{(g_{1},g_{2})\,|\,{\rm pr}_{1}P(g_{1})\cap{\rm pr} _{1}P(g_{2})\not =\varnothing\}, \\[4pt] & \kappa_{P}=\{(g_{1},g_{2})\,|\,P(g_{1})\cap P(g_{2})\not =\varnothing \}, \\[4pt] & \xi_{P}=\{(g_{1},g_{2})\,|\,P(g_{1})\circ\triangle_{{\rm {\small pr}} _{1}P(g_{2})}=P(g_{2})\circ\triangle_{{\rm {\small pr}}_{1}P(g_{1})}\}. \end{array} \] A binary relation $\rho$ defined on $(G;o)$ is {\it projection representable} if there exists a representation $P$ of $(G;o)$ by $n$-place functions for which $\rho=\rho_P$. It is easy to see that if $P$ is a sum of the family of representations $(P_{i})_{i\in I}$, then $\sigma_{P}=\bigcap_{i\in I}\sigma_{P_{i}}$ for\label{SP} $\sigma\in\{\zeta,\chi ,\pi,\xi\}$ and $\sigma_{P}=\bigcup_{i\in I}\sigma_{P_{i}}$ for $\sigma \in\{\kappa,\gamma\}$.
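To illustrate the relations $\zeta_{P}$, $\chi_{P}$ and $\pi_{P}$, one can compute them directly for a toy representation by partial $2$-place functions; the three functions below and their names are our own invented example, each function encoded as a set of (argument pair, value) pairs.

```python
# A toy representation P of a three-element set by partial 2-place
# functions, each encoded as a set of ((a1, a2), b) pairs.
P = {
    "g1": {((0, 0), 1)},
    "g2": {((0, 0), 1), ((0, 1), 0)},
    "g3": {((0, 1), 1)},
}

def pr1(f):
    """First projection: the domain of the partial function."""
    return {ab for (ab, _) in f}

zeta = {(x, y) for x in P for y in P if P[x] <= P[y]}
chi = {(x, y) for x in P for y in P if pr1(P[x]) <= pr1(P[y])}
pi = {(x, y) for x in P for y in P if pr1(P[x]) == pr1(P[y])}

assert ("g1", "g2") in zeta           # g1 is a restriction of g2
assert ("g3", "g2") in chi            # dom(g3) is contained in dom(g2)
assert ("g1", "g3") not in pi         # different domains
assert zeta <= chi                    # zeta_P is always contained in chi_P
```

The final assertion reflects the general fact that $P(g_{1})\subset P(g_{2})$ forces ${\rm pr}_{1}P(g_{1})\subset{\rm pr}_{1}P(g_{2})$, i.e., $\zeta_{P}\subset\chi_{P}$.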
Algebraic systems of the form $(\Phi;O,\zeta_{\Phi},\chi_{\Phi})$, $(\Phi;O,\zeta_{\Phi})$, $(\Phi;O,\chi_{\Phi})$ are called: \textit{fundamentally ordered projection $($f.o.p.$)$ Menger algebras, fundamentally ordered $($f.o.$)$ Menger algebras and projection quasi-ordered $($p.q-o.$)$ Menger algebras}. A binary relation $\rho$ defined on a Menger algebra $(G;o)$ of rank $n\geqslant 2$ is \begin{itemize} \item {\it stable}, if \ $(x,y),(x_1,y_1),\ldots,(x_n,y_n)\in\rho\longrightarrow (x[x_1^n ],\,y[y_1^n]\,)\in\rho$, \item {\it $l$-regular}, if \ $(x,y)\in\rho\longrightarrow (x[x_1^n ],y[x_1^n]\,)\in\rho$, \item {\it $v$-negative}, if \ $(x,\,u[x_1^{i-1},y,x_{i+1}^n]\,)\in\rho\longrightarrow(x,y)\in\rho$ \end{itemize} \noindent for all $u,x,y,x_1,\ldots,x_n,y_1,\ldots,y_n\in G$ and $i\in\overline{1,n}$. \begin{theorem}\label{T2.1.1} {\bf (V. S. Trokhimenko, \cite{77})}\newline An algebraic system $(G;o,\zeta,\chi)$, where $(G;o)$ is a Menger algebra of rank $n$ and $\zeta$, $\chi$ are relations defined on $G$, is isomorphic to a f.o.p. Menger algebra of $(n+1)$-relations if and only if $\zeta$ is a stable order and $\chi$ is an $l$-regular and $v$-negative quasi-order containing $\zeta$. \end{theorem} From this theorem it follows that each stable ordered Menger algebra of rank $n$ is isomorphic to some f.o. Menger algebra of $(n+1)$-relations, and each p.q-o. Menger algebra of $(n+1)$-relations is a Menger algebra of rank $n$ with an $l$-regular and $v$-negative quasi-order. Let $T_{n}(G)$ be the set of all polynomials defined on a Menger algebra $(G;o)$ of rank $n$.
For any binary relation $\rho$ on $(G;o)$ we define the relation $\zeta (\rho)\subset G\times G$ by putting $(g_{1},g_{2})\in\zeta (\rho)$ if and only if there exist polynomials $t_{i}\in T_{n}(G)$, vectors $\bar{z}_{i}\in B=G^{n}\cup\{\bar{e}\}$ and pairs $(x_{i},y_{i})\in\rho\cup\{(e_{1},e_{1}),\ldots ,(e_{n},e_{n})\}$, where $e_{1},\ldots ,e_{n}$ are the selectors from $(G^{\ast };o^{\ast })$, such that \[ g_{1}=t_{1}(x_{1}[\bar{z}_{1}])\,\wedge\, \bigwedge\limits_{i=1}^{m}\left( t_{i}(y_{i}[\bar{z}_{i}])=t_{i+1}(x_{i+1}[\bar{z}_{i+1}])\right) \,\wedge \,t_{m+1}(y_{m+1}[\bar{z}_{m+1}])=g_{2} \] for some natural number $m$. The relation $\zeta (\rho )$ so defined is the least stable quasi-order on $(G;o)$ containing $\rho$. \begin{theorem} \label{T2.1.3}{\bf (V. S. Trokhimenko, \cite{77})}\newline An algebraic system $(G;o,\chi )$, where $(G;o)$ is a Menger algebra of rank $n\geqslant 2$, $\chi \subset G\times G$, $\sigma =\{(x,g[x^n])\,|\,x,g\in G\}$, is isomorphic to some p.q-o. Menger algebra of reflexive $(n+1)$-relations if and only if $\zeta (\sigma )$ is an antisymmetric relation, $\chi $ is an $l$-regular $v$-negative quasi-order and $(t(x[\bar{y}]), t(g[x^n ][\bar{y}]))\in\chi$ for all \ $t\in T_{n}(G),$ \ $x,g\in G,$ \ $\bar{y}\in G^{n}\cup\{\bar{e}\}.$ \end{theorem} The system $(G;o,\zeta ,\chi )$, where $(G;o)$ is a Menger algebra of rank $n$ and $\zeta ,\chi \subset G\times G$ satisfy all the conditions of Theorem \ref{T2.1.1}, can be isomorphically represented by transitive $(n+1)$-relations if and only if for each $g\in G$ we have $(g[g^n],g)\in\zeta$. Analogously one can prove that each stable ordered Menger algebra of rank $n$ satisfying the condition $(g[g^n], g)\in\zeta$ is isomorphic to some f.o. Menger algebra of transitive $(n+1)$-relations \cite{77}. Every idempotent stable ordered Menger algebra of rank $n\geqslant 2$ in which $(x,y[x^n])\in\zeta$ is isomorphic to some f.o. Menger algebra of $n$-quasi-orders. \begin{theorem}{\bf (V. S.
Trokhimenko, \cite{61})}\newline An algebraic system $(G;o,\zeta ,\chi)$, where $o$ is an $(n+1)$-ary operation on $G$ and $\zeta ,\chi \subset G\times G$, is isomorphic to a fundamentally ordered projection Menger algebra of $n$-place functions if and only if $o$ is a superassociative operation, $\zeta$ is a stable order, $\chi $ -- an $l$-regular $v$-negative quasi-order containing $\zeta $, and for all $i\in\overline{1,n}$ \ $u,g,g_{1},g_{2}\in G,$ \ $\bar{w}\in G^{n}$ the following two implications hold \begin{eqnarray*} &&(g_{1},g), (g_{2},g)\in\zeta\ \& \ (g_{1},g_{2})\in\chi\longrightarrow (g_{1},g_{2})\in\zeta,\\[4pt] &&(g_{1},g_{2})\in\zeta\ \& \ (g,g_{1}),(g, u[\bar{w}|_{i}g_{2}])\in\chi\longrightarrow (g,u[\bar{w}|_{i}g_{1}])\in\chi, \end{eqnarray*} where $u[\bar{w}|_ig]=u[w_1^{i-1},g,w_{i+1}^n]$. \end{theorem} \begin{theorem}{\bf (V. S. Trokhimenko, \cite{61})}\newline The necessary and sufficient condition for an algebraic system $(G;o,\zeta )$ to be isomorphic to some f.o. Menger algebra of $n$-place functions is that $o$ is a superassociative $(n+1)$-ary operation and $\zeta $ is a stable order on $G$ such that \[ (x,y), (z,t_1(x)),(z,t_2(y))\in\zeta\longrightarrow (z,t_2(x))\in\zeta \] for all $x,y,z\in G$ and $t_1,t_2\in T_n(G)$. \end{theorem} Replacing the last implication by the implication \[ (z,t_1(x)), (z,t_1(y)),(z,t_2(y))\in\zeta\longrightarrow (z,t_2(x))\in\zeta \] we obtain a similar characterization of algebraic systems $(G;o,\zeta)$ isomorphic to f.o. Menger algebras of reversive $n$-place functions (for details see \cite{schtr} or \cite{Dudtro4,Dudtro4a}). \section{Menger algebras with additional operations}\setcounter{theorem}{0} \setcounter{equation}{0} Many authors investigate sets of multiplace functions closed with respect to the Menger composition of functions and other naturally defined operations such as, for example, the set-theoretic intersection of functions considered as subsets of the corresponding Cartesian product.
The algebras obtained in this way are denoted by $(\Phi;O,\cap )$ and are called \textit{$\cap $-Menger algebras}. Their abstract analog is called a \textit{$ \curlywedge$-Menger algebra of rank} $n$ or a \textit{Menger $\mathcal{P}$-algebra}. An abstract characterization of $\cap $-Menger algebras is given in \cite{73} (see also \cite{Dudtro6}). \begin{theorem} \label{T3.1.1}{\bf (V. S. Trokhimenko, \cite{73})}\newline For an algebra $(G;o,\curlywedge )$ of type $(n+1,2)$ the following statements are true: $(a)$ \ $(G;o,\curlywedge )$ is a $\curlywedge$-Menger algebra of rank $n$ if and only if $(i)$ \ $o$ is a superassociative operation, $(ii)$ \ $(G;\curlywedge )$ is a semilattice, and the following two conditions \begin{eqnarray} &&(x\curlywedge y)[\bar{z}]=x[\bar{z}]\curlywedge y[\bar{z}], \label{3.4.1} \\[4pt] &&t_{1}(x\curlywedge y\curlywedge z)\curlywedge t_{2}(y)=t_{1}(x\curlywedge y)\curlywedge t_{2}(y\curlywedge z)\label{3.4.2} \end{eqnarray} hold for all $\,x,y,z\in G,\ \bar{z}\in G^{n}$ and $\,t_{1},t_{2}\in T_{n}(G)$; $(b)$ \ if $\,n>1$, then $\,(G;o,\curlywedge )\,$ is isomorphic to a $\,\cap$-Menger algebra of reversive $n$-place functions if the operation $o$ is superassociative, $\,(G;\curlywedge )\,$ is a semilattice, and $(\ref{3.4.1})$ and \begin{eqnarray} &&u[\bar{z}|_{i}(x\curlywedge y)]=u[\bar{z}|_{i}x]\curlywedge u[\bar{z}|_{i}y], \label{3.4.3} \\[4pt] &&t_{1}(x\curlywedge y)\curlywedge t_{2}(y)=t_{1}(x\curlywedge y)\curlywedge t_{2}(x)\label{3.4.4} \end{eqnarray} are valid for all $\,i\in\overline{1,n},\ u,x,y\in G,\ \bar{z}\in G^{n},\ t_{1},t_{2}\in T_{n}(G)$. \end{theorem} \begin{theorem} \label{T3.1.4}{\bf (B. M. Schein, V. S.
Trokhimenko, \cite{76})}\newline An algebra $(G;o,\curlyvee )$ of type $(n+1,2)$ is isomorphic to some Menger algebra $(\Phi;O)$ of $n$-place functions closed under the set-theoretic union of functions if and only if the operation $o$ is superassociative, $(G;\curlyvee )$ is a semilattice with the semilattice order $\leqslant$ such that $x[\bar{z}]\leqslant z_{i}$ and \begin{eqnarray} &&(x\curlyvee y)[\bar{z}]=x[\bar{z}]\curlyvee y[\bar{z}],\label{3.4.20}\\ &&u[\bar{z}|_{i}(x\curlyvee y)]=u[\bar{z}|_{i}x]\curlyvee u[\bar{z}|_{i}y], \label{3.4.28} \\[4pt] &&x\leqslant y\curlyvee u[\bar{z}|_{i}z]\longrightarrow x\leqslant y\curlyvee u[\bar{z}|_{i}x], \label{3.4.30} \end{eqnarray} for all $x,y,z\in G$, $\bar{z}\in G^{n},$ $u\in G\cup \{e_1,\ldots,e_n\}$, $i\in\overline{1,n}$. \end{theorem} \begin{theorem} \label{T3.1.5}{\bf (B. M. Schein, V. S. Trokhimenko, \cite{76})}\newline An abstract algebra $(G;o,\curlywedge ,\curlyvee )$ of type $(n+1,2,2)$ is isomorphic to some Menger algebra of $n$-place functions closed with respect to the set-theoretic intersection and union of functions if and only if the operation $o$ is superassociative, $(G;\curlywedge ,\curlyvee )$ is a distributive lattice, the identities $(\ref{3.4.1})$, $(\ref{3.4.20})$, $(\ref{3.4.28})$ and \begin{equation} x[(y_{1}\curlywedge z_{1})\ldots (y_{n}\curlywedge z_{n})]=x[\bar{y}]\curlywedge z_{1}\curlywedge \ldots \curlywedge z_{n}\label{3.4.32} \end{equation} are satisfied for all $x,y\in G$, $\bar{y},\bar{z}\in G^{n},$ $u\in G\cup \{e_1,\ldots,e_n\}$, $i\in\overline{1,n}$. \end{theorem} The restriction of an $n$-place function $f\in{\mathcal F}(A^n,B)$ to the subset $H\subset A^n$ can be defined as a composition of $f$ and $\bigtriangleup_H=\{(\bar{a},\bar{a})|\bar{a}\in H\}$, i.e., $f|_H=f\circ\bigtriangleup_H$. The {\it restrictive product} $\rhd$ of two functions $f,g\in{\mathcal F}(A^n,B)$ is defined as $$ f\rhd g=g\circ\bigtriangleup_{\mbox{pr}_1\,}f. $$ \begin{theorem} \label{T3.2.1}{\bf (V. S.
Trokhimenko, \cite{65})}\newline An algebra $(G;o,\blacktriangleright )$ of type $(n+1,2)$ is isomorphic to a Menger algebra of $n$-place functions closed with respect to the restrictive product of functions if and only if $(G;o)$ is a Menger algebra of rank $n$, $(G;\blacktriangleright )$ is an idempotent semigroup and the following three identities hold: \begin{eqnarray} &&x[(y_{1}\blacktriangleright z_{1})\ldots (y_{n}\blacktriangleright z_{n})]= y_{1}\blacktriangleright\ldots \blacktriangleright y_{n}\blacktriangleright x[z_1\ldots z_n], \rule{20mm}{0mm} \label{3.5.1} \\[4pt] &&(x\blacktriangleright y)[\bar{z}]=x[\bar{z}]\blacktriangleright y[\bar{z}],\label{3.5.2} \\[4pt] &&\label{3.5.3}x\blacktriangleright y\blacktriangleright z=y\blacktriangleright x\blacktriangleright z. \end{eqnarray} \end{theorem} An algebra $(G;o,\curlywedge ,\blacktriangleright )$ of type $(n+1,2,2)$ is isomorphic to a Menger algebra of $n$-place functions closed with respect to the set-theoretic intersection and the restrictive product of functions, i.e., to $(\Phi;O,\cap,\rhd)$, if and only if $(G;o,\blacktriangleright )$ satisfies the conditions of Theorem \ref{T3.2.1}, $(G;\curlywedge )$ is a semilattice and the identities $(x\curlywedge y)[\bar{z}]=x[\bar{z}]\curlywedge y[\bar{z}]$, $x\curlywedge (y\blacktriangleright z)=y\blacktriangleright (x\curlywedge z)$, $(x\curlywedge y)\blacktriangleright y=x\curlywedge y$ are satisfied \cite{73}. Moreover, if $(G;o,\curlywedge ,\blacktriangleright )$ also satisfies $(\ref{3.4.3})$, then it is isomorphic to an algebra $(\Phi;O,\cap,\rhd)$ of reversive $n$-place functions. More results on such algebras can be found in \cite{Dudtro4} and \cite{Dudtro4a}. \section{$(2,n)$-semigroups}\setcounter{theorem}{0} \setcounter{equation}{0} On $\mathcal{F}(A^{n},A)$ we can define $n$ binary compositions $\op{1\,},\ldots ,\op{n}$ of two functions by putting \[ (f\op{i\,}g)(a_{1}^n)=f(a_{1}^{i-1},g(a_{1}^{n}),a_{i+1}^{n}). 
\] for all $f,g\in\mathcal{F}(A^{n},A)$ and $a_1,\ldots,a_n\in A$. Since all compositions defined in this way are associative, the algebra $(\Phi; \op{1\,},\ldots ,\op{n})$, where $\Phi\subset\mathcal{F}(A^{n},A)$, is called a \textit{$(2,n)$-semigroup of $n$-place functions}. The study of such compositions of functions for binary operations was initiated by Mann \cite{Man}. Nowadays such compositions are called {\it Mann's compositions} or {\it Mann's superpositions}. Mann's compositions of $n$-ary operations, i.e., full $n$-place functions, were described by T. Yakubov in \cite{Yak}. Abstract algebras isomorphic to some sets of operations closed with respect to these compositions are described in \cite{Sok}. Menger algebras of $n$-place functions closed with respect to Mann's superpositions are called {\it Menger $(2,n)$-semigroups}. Their abstract characterizations are given in \cite{Dudtro1}. Any non-empty set $G$ with $n$ binary operations defined on $G$ is also called an abstract $(2,n)$-semigroup. For simplicity, the same symbols will be used for these operations as for Mann's compositions of functions. An abstract $(2,n)$-semigroup having a representation by $n$-place functions is called {\it representable}. Further, for simplicity, all expressions of the form $(\cdots ((x\op{i_{1}}y_{1})\op{i_{2}}y_{2})\cdots )\op{i_{k}}y_{k}$ are denoted by $x\op{i_{1}}y_{1}\op{i_{2}}\cdots \op{i_{k}}y_{k}$\label{sym96} or, in the abbreviated form, by $x\op{i_{1}}^{i_{k}}y_{1}^{k}$. The symbol $\mu_{i}(\op{i_{1}}^{i_{s}}x_{1}^{s})$ will be reserved for the expression $x_{i_{k}}\!\op{i_{k+1}}^{i_{s}}\!x_{k+1}^{s}$, if $i\neq i_{1}$, $\ldots$, $i\neq i_{k-1}$, $\,i=i_{k}$ for some $k\in\{1,\ldots, s\}$. In any other case this symbol is empty. For example, $\mu _{1}(\op{2}x\op{1\,}y\op{3}z)=y\op{3}z$, $\mu _{2}(\op{2}x\op{1\,}y\op{3}z)=x\op{1\,}y\op{3}z$, $\mu _{3}(\op{2}x\op{1\,}y\op{3}z)=z$. The symbol $\mu _{4}(\op{2}x\op{1\,}y\op{3}z)$ is empty. \begin{theorem}{\bf (W. A.
Dudek, V. S. Trokhimenko, \cite{Dudtro2})}\newline A $(2,n)$-semigroup $(G;\op{1\;},\ldots,\op{n})$ has a faithful representation by partial $n$-place functions if and only if for all $\,g,x_1,\ldots,x_s,y_1,\ldots,y_k\in G$ and $\,i_1,\ldots,i_s,j_1,\ldots,j_k\in\overline{1,n}$ the following implication \begin{equation}\label{imp-14} \bigwedge\limits_{i=1}^{n}\Big(\mu_i(\opp{i_1}{i_s}x_1^s) =\mu_i(\opp{j_1}{j_k}y_1^k)\Big)\longrightarrow g\opp{i_1}{i_s}x_1^s=g\opp{j_1}{j_k}y_1^k \end{equation} is satisfied. \end{theorem} A $(2,n)$-semigroup $(G;\op{1\;},\ldots,\op{n})$ satisfying the above implication also has a faithful representation by full $n$-place functions. Moreover, any representation of a $(2,n)$-semigroup by $n$-place functions is a union of some family of its simplest representations \cite{Dudtro2}. \begin{theorem}\label{T5}{\bf (W. A. Dudek, V. S. Trokhimenko, \cite{Dudtro2})}\newline An algebraic system $(G;\op{1\,},\ldots,\op{n},\chi)$, where $(G;\op{1\,},\ldots,\op{n})$ is a $(2,n)$-semigroup and $\chi$ is a binary relation on $\,G$, is isomorphic to a p.q-o. $(2,n)$-semigroup of partial $\,n$-place functions if and only if the implication $\eqref{imp-14}$ is satisfied and $\chi$ is a quasi-order such that $(x\opp{i_{1}}{i_{s}}z_{1}^{s},\,\mu_{j}(\opp{i_{1}}{i_{s}}z_{1}^{s}))\in \chi$ and $(x\op{i\,}z,\,y\op{i\,}z)\in\chi$ for all $(x,y)\in\chi$. \end{theorem} In the case of Menger $(2,n)$-semigroups the situation is more complicated, since the conditions under which a Menger $(2,n)$-semigroup is isomorphic to a Menger $(2,n)$-semi\-group of $n$-place functions are not simple. \begin{theorem}\label{T2}{\bf (W. A. Dudek, V. S.
Trokhimenko, \cite{Dudtro1})}\newline A Menger $(2,n)$-semigroup $(G;o,\op{1},\ldots,\op{n})$ is isomorphic to a Menger $(2,n)$-semi\-group of partial $n$-place functions if and only if it satisfies the implication $\eqref{imp-14}$ and the following three identities \begin{eqnarray*}\label{8} &&x\op{i}y[z_1^n]=x[z_1^{i-1},y[z_1^n],z_{i+1}^n]\, ,\\[4pt] \label{9} &&x[y_1^n]\op{i}z=x[y_1\op{i}z\ldots y_n\op{i}z]\, ,\\[4pt] \label{10} &&x\opp{i_1}{i_s}y_1^s=x[\mu_1(\opp{i_1}{i_s}y_1^s)\ldots \mu_n(\opp{i_1}{i_s}y_1^s)]\, , \end{eqnarray*} where $\{i_1,\ldots,i_s\}=\{1,\ldots,n\}$. \end{theorem} Using this theorem one can prove that any Menger $(2,n)$-semigroup of $n$-place functions is isomorphic to some Menger $(2,n)$-semigroup of full $n$-place functions (for details see \cite{Dudtro1}). Moreover, for any Menger $(2,n)$-semigroup satisfying all the assumptions of Theorem \ref{T2} one can find necessary and sufficient conditions under which a triplet $(\chi,\gamma,\pi)$ of binary relations defined on this Menger $(2,n)$-semigroup is projection representable by the triplet of relations $(\chi_P,\gamma_P,\pi_P)$ defined on the corresponding Menger $(2,n)$-semigroup of $n$-place functions \cite{Dudtro3}. Similar conditions were found for the pairs $(\chi,\gamma)$, $(\chi,\pi)$ and $(\gamma,\pi)$. But conditions under which the triplet $(\chi,\gamma,\pi)$ is faithfully projection representable have not yet been found. \setcounter{equation}{0} \section{Functional Menger systems}\label{s36} On the set ${\mathcal{F}}(A^{n},A)$ we can also consider $n$ unary operations ${\mathcal{R}}_{1},\ldots ,{\mathcal{R}}_{n}$ such that for every function $f\in {\mathcal{F}}(A^{n},A)$ \ ${\mathcal{R}}_{i}f$ is the restriction of the $i$th $n$-place projector on $A$ to the domain of $f$, i.e., $ {\mathcal{R}}_{i}f=f \vartriangleright I_{i}^{n}\,$ for every $i=1,\ldots ,n,$ where $I_{i}^{n}(a_1^n)=a_i\,$ is the $i$th $n$-place projector on $A$.
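As a small computational aside (ours, not from the survey; the dict encoding and all names below are our own choices), the operations discussed so far can be tried out on partial $2$-place functions over a finite base set, and identities such as $x[R_{1}x\ldots R_{n}x]=x$ checked directly:

```python
# Illustrative sketch only -- not from the survey.  Partial 2-place
# functions on a finite set are encoded as dicts {(a1, a2): value};
# indices are 0-based, so R(0, f) plays the role of R_1 f in the text.

def menger(f, gs):
    """Menger superposition f[g1 g2]: defined at a-bar iff every g is,
    and the tuple of their values lies in the domain of f."""
    out = {}
    for abar in gs[0]:
        if all(abar in g for g in gs):
            vals = tuple(g[abar] for g in gs)
            if vals in f:
                out[abar] = f[vals]
    return out

def mann(f, i, g):
    """Mann composition: (f o_i g)(a-bar) = f(a-bar with g(a-bar) in slot i)."""
    out = {}
    for abar, v in g.items():
        b = list(abar)
        b[i] = v
        if tuple(b) in f:
            out[abar] = f[tuple(b)]
    return out

def R(i, f):
    """R_i f = f |> I_i^n: the i-th projector restricted to dom f."""
    return {abar: abar[i] for abar in f}

# sample partial functions on the base set {0, 1, 2}
f = {(0, 1): 2, (1, 1): 0, (2, 0): 2}
g = {(0, 1): 1, (2, 0): 0}
h = {(2, 1): 0}

# the identity x[R_1 x ... R_n x] = x holds for n-place functions
assert menger(f, [R(0, f), R(1, f)]) == f
# each Mann composition o_i is associative, also on partial functions
assert mann(mann(f, 0, g), 0, h) == mann(f, 0, mann(g, 0, h))
```

Here `mann(f, 0, g)` plays the role of $f\op{1\,}g$, because Python indices are $0$-based.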
In other words, ${\mathcal{R}}_{i}f$ is the $n$-place function from ${\mathcal{F}}(A^{n},A)$ that satisfies the conditions \[{\rm pr}_{1}{\mathcal{R}}_{i}f ={\rm pr}_{1}f , \ \ \ \ \ \ \bar{a}\in {\rm pr}_{1}f \longrightarrow {\mathcal{R}}_{i}f (\bar{a})=a_{i} \] for any $\bar{a}\in A^{n}$. Algebras of the form $(\Phi;O,{\mathcal{R}}_{1},\ldots ,{\mathcal{R}}_{n})$, where $\Phi\subset{\mathcal{F}}(A^{n},A)$, are called \textit{functional Menger systems of rank $n$}. Such algebras (with some additional operations) were first studied in \cite{120} and \cite{151}. For example, V. Kafka considered in \cite{120} the algebraic system of the form $(\Phi;O,{\mathcal{R}}_{1},\ldots ,{\mathcal{R}}_{n},{\mathcal{L}},\subset )$, where $\Phi\subset\mathcal{F}(A^{n},A)$ and ${\mathcal{L}}f =\stackrel{n+1}\triangle_{\!{\rm pr}_{2}f}$ for every $f\in\Phi$. Such an algebraic system satisfies the following conditions: \begin{enumerate} \item[$(A_1)$] \ $(\Phi;O)$ is a Menger algebra of functions, \item[$(A_2)$] \ $\subset$ \ is an order on $\Phi$, \item[$(A_3)$] \ the following identities are satisfied \[\left\{\begin{array}{l} f[\mathcal{R}_{1}f\ldots\mathcal{R}_{n}f]=f,\\[4pt] \mathcal{R}_{i}(f[g_1\ldots g_n])= \mathcal{R}_{i}((\mathcal{R}_{j}f)[g_1\ldots g_n]),\\[4pt] (\mathcal{L}f)[f\ldots f]=f , \end{array}\right.
\] \item[$(A_4)$] \ $\mathcal{L}(f[g_1\ldots g_n])\subset\mathcal{L}f\,$ for all $\,f,g_1,\ldots,g_n\in\Phi$, \item[$(A_5)$] \ $\Phi$ has elements $I_1,\ldots,I_n$ such that $\,f[I_1\ldots I_n]=f\,$ and \[\left\{\begin{array}{l} g_i\subset I_i\longrightarrow\mathcal{R}_{i}g_i\subset g_i ,\\[4pt] f\subset \bigcap\limits_{j=1}^{n}I_j\longrightarrow \mathcal{L}f\subset f,\\[4pt] g_k\subset I_k\longrightarrow g_k[f_1\ldots f_n]\subset f_k,\\[4pt] \bigwedge\limits_{k=1}^{n}(g_k\subset I_k)\longrightarrow f[g_1\ldots g_n]\subset f,\\[4pt] \mathcal{R}_{i} g_j\subset\bigcap\limits_{k=1}^{n}\mathcal{R}_{i} g_k\longrightarrow I_j[g_1\ldots g_n]=g_j ,\\[4pt] f\subset h\longrightarrow f=h[p_1\ldots p_n] \ \ {\rm for\;some}\;\;p_1\subset I_1,\ldots,p_n\subset I_n \end{array}\right. \] for all $f,f_1,\ldots,f_n,g_1,\ldots,g_n\in\Phi$ and $i,j,k\in\{1,\ldots,n\}$. \end{enumerate} It is proved in \cite{120} that for any algebraic system $(G;o,R_{1},\ldots ,R_{n},L,\leqslant)$ satisfying the above conditions there exist $\Phi\subset\mathcal{F}(A^{n},A)$ and an isomorphism of $(G;o)$ onto $(\Phi;O)$ which transforms the order $\leqslant$ into the set-theoretic inclusion of functions. However, $(A_1)-(A_5)$ do not give a complete characterization of systems of the form $(\Phi;O,{\mathcal{R}}_{1},\ldots ,{\mathcal{R}}_{n},{\mathcal{L}},\subset )$. Such a characterization is known only for systems of the form $(\Phi;O,{\mathcal{R}}_{1},\ldots ,{\mathcal{R}}_{n})$. \begin{theorem} \label{T3.3.2}{\bf (V. S.
Trokhimenko, \cite{71})}\newline An algebra $(G;o,R_{1},\ldots ,R_{n})$ of type $(n+1,1,\ldots ,1)$, where $(G;o)$ is a Menger algebra, is isomorphic to a functional Menger system $(\Phi;O,{\mathcal{R}}_{1},\ldots ,{\mathcal{R}}_{n})$ of rank $n$ if and only if for all $i,k\in\overline{1,n}$ it satisfies the identities \begin{eqnarray} &&\hspace*{-15mm}x[R_{1}x\ldots R_{n}x]=x,\label{3.6.5} \\[4pt] &&\hspace*{-15mm}x[\bar{u}|_{i}z][R_{1}y\ldots R_{n}y]= x[\bar{u}|_{i}\,z[R_{1}y\ldots R_{n}y]],\label{3.6.6} \\[4pt] &&\hspace*{-15mm}R_{i}(x[R_{1}y\ldots R_{n}y])= (R_{i}x)[R_{1}y\ldots R_{n}y],\label{3.6.7} \\[4pt] &&\hspace*{-15mm}x[R_{1}y\ldots R_{n}y][R_{1}z\ldots R_{n}z]= x[R_{1}z\ldots R_{n}z][R_{1}y\ldots R_{n}y],\label{3.6.8} \\[4pt] &&\hspace*{-15mm}R_{i}(x[\bar{y}])= R_{i}((R_{k}x)[\bar{y}]), \label{3.6.9}\\[4pt] &&\hspace*{-15mm}(R_{i}x)[\bar{y}]=y_{i}[R_{1}(x[\bar{y}])\ldots R_{n}(x[\bar{y}])].\label{3.6.10} \end{eqnarray} \end{theorem} It is interesting to note that defining on $(G;o,R_{1},\ldots ,R_{n})$ a new operation $\blacktriangleright$ by putting $x\blacktriangleright y=y[R_{1}x\ldots R_{n}x]$ we obtain an algebra $(G;o,\blacktriangleright )$ isomorphic to a Menger algebra of $n$-place functions closed with respect to the restrictive product of functions \cite{Dudtro4a}. Another interesting fact is the following. \begin{theorem} \label{T3.3.3}{\bf (W. A. Dudek, V. S.
Trokhimenko, \cite{Dudtro})}\newline An algebra $(G;o,\curlywedge,R_{1},\ldots,R_{n})$ of type $(n+1,2,1,\ldots ,1)$ is isomorphic to some functional Menger $\cap$-algebra of $n$-place functions if and only if $(G;o,R_{1},\ldots,R_{n})$ is a functional Menger system of rank $n$, $(G;\curlywedge )$ is a semilattice, and the identities \begin{eqnarray*} &&(x\curlywedge y)[z_1\ldots z_n]= x[z_1\ldots z_n]\curlywedge y[z_1\ldots z_n],\\[4pt] &&(x\curlywedge y)[R_1z\ldots R_nz]=x\curlywedge y[R_1z\ldots R_nz],\\[4pt] &&x[R_1(x\curlywedge y)\ldots R_n(x\curlywedge y)]=x\curlywedge y \end{eqnarray*} are satisfied. \end{theorem} Now we present abstract characterizations of two important sets used in the theory of functions. We start with the set containing functions with the same fixed point. \begin{definition} A non-empty subset $H$ of $G$ is called a \textit{stabilizer} of a functional Menger system $(G;o,R_1,\ldots,R_n)$ of rank $n$ if there exists a representation $P$ of $(G;o,R_1,\ldots,R_n)$ by $n$-place functions $f\in{\mathcal F}(A^n,A)$ such that $$ H=H^a_P=\{g\in G\,|\,P(g)(a,\ldots,a)=a\} $$ for some point $a\in A$ common for all $g\in H$. \end{definition} \begin{theorem}{\bf (W. A. Dudek, V. S. Trokhimenko, \cite{Dudtro5})}\newline A non-empty $H\subset G$ is a stabilizer of a functional Menger system $(G;o,R_1,\ldots,R_n)$ of rank $n$ if and only if there exists a subset $U$ of $\,G$ such that \begin{eqnarray} &&\hspace*{-15mm} \label{f-0}H\subset U, \ \ R_iU\subset H, \ \ R_i(G\!\setminus\! 
U)\subset G\!\setminus\!U,\\ &&\hspace*{-15mm}\label{f-1} x,y\in H\;\,\&\;\,t(x)\in U\longrightarrow t(y)\in U, \\ &&\hspace*{-15mm} \label{f-2} x=y[R_1x\ldots R_nx]\in U\;\,\&\;\, u[\bar{w}\,|_iy]\in H\longrightarrow u[\bar{w}\,|_ix]\in H, \\ &&\hspace*{-15mm} \label{f-3} x=y[R_1x\ldots R_nx]\in U\;\,\&\;\,u[\bar{w}\,|_iy]\in U \longrightarrow u[\bar{w}\,|_ix]\in U,\\ &&\hspace*{-15mm}y\in H\;\,\&\;\,x[y\ldots y]\in H\longrightarrow x\in H,\\ &&\hspace*{-15mm}x,y\in H\;\,\&\;\,t(x)\in H\longrightarrow t(y)\in H,\\ &&\hspace*{-15mm}x\in H\longrightarrow x[x\ldots x]\in H\label{f-13} \end{eqnarray} for all $x,y\in G,\,$ $\bar{w}\in G^{\,n},\,$ $t\in T_n(G)$ and $i\in\overline{1,n}$, where the symbol $u[\bar{w}\,|_i\ ]$ may be empty.\footnote{ \ If $u[\bar{w}\,|_i\ ]$ is the empty symbol, then $u[\bar{w}\,|_ix]$ is equal to $x$.} \end{theorem} In the case of functional Menger algebras isomorphic to functional Menger $\cap$-algebras of $n$-place functions, stabilizers have the simplest characterization. Namely, as proved in \cite{Dudtro5}, the following theorem is valid. \begin{theorem} A non-empty subset $H$ of $\,G$ is a stabilizer of a functional Menger algebra $(G;o,\curlywedge,R_1,\ldots,R_n)$ of rank $n$ if and only if $H$ is a subalgebra of $(G;o,\curlywedge)$, $\,R_iH\subset H$ for every $i\in\overline{1,n}$, and $$ x[y_1\ldots y_n]\in H\longrightarrow x\in H $$ for all $x\in G,$ $y_1,\ldots,y_n\in H$. \end{theorem} Another important question is that of abstract characterizations of stationary subsets of Menger algebras of $n$-place functions. Such characterizations are known only for some types of such algebras. As is well known, by the \textit{stationary subset} of $\Phi\subset\mathcal{F}(A^n,A)$ we mean the set \[ \mathbf{St}(\Phi)= \{f\in\Phi\,|\,(\exists a\in A)\,f(a,\ldots,a)=a\}.
\] Note that in a functional Menger $\cap$-algebra $(\Phi;\mathcal{O},\cap,\mathcal{R}_1,\ldots,\mathcal{R}_n)$ of $n$-place functions without a zero $\mathbf{0}$ the subset $\mathbf{St}(\Phi)$ coincides with $\Phi$. Indeed, in this algebra $f\neq\emptyset$ for any $f\in\Phi$; moreover, $\Phi$ is closed under intersection and superposition, so $f\cap g[f\ldots f]\in\Phi$ and consequently $f\cap g[f\ldots f]\neq\emptyset$ for all $f,g\in\Phi$. Hence, $g[f\ldots f](\bar{a})=f(\bar{a})$, i.e., $g(f(\bar{a}),\ldots,f(\bar{a}))=f(\bar{a})$ for some $\bar{a}\in A^n$. This means that $g\in\mathbf{St}(\Phi)$. Thus, $\Phi\subseteq\mathbf{St}(\Phi)\subseteq\Phi$, i.e., $\mathbf{St}(\Phi)=\Phi$. Therefore, below we will consider only functional Menger $\cap$-algebras with a zero. \begin{definition} \label{D-2}\rm A non-empty subset $H$ of $G$ is called a \textit{stationary subset} of a functional Menger algebra $(G;o,\curlywedge,R_{1},\ldots,R_{n})$ of rank $n$ if there exists its faithful representation $P$ by $n$-place functions such that \begin{equation*} g\in H\longleftrightarrow P(g)\in\mathbf{St}(P(G)) \end{equation*} for every $g\in G$, where $P(G)=\{P(g)\,|\,g\in G\}$. \end{definition} \begin{theorem}{\bf (W. A. Dudek, V. S. Trokhimenko, \cite{Dudtro6})}\newline A non-empty subset $H$ of $G$ is a stationary subset of a functional Menger algebra $(G;o,\curlywedge,R_{1},\ldots,R_{n})$ with a zero $\mathbf{0}$ if and only if \begin{eqnarray*} &&\hspace*{-60mm}x[x\ldots x]\curlywedge x\in H,\\[4pt] &&\hspace*{-60mm} z[y\ldots y]=y\in H\longrightarrow z\in H, \\[4pt] &&\hspace*{-60mm} z[y\ldots y]\curlywedge y\neq\mathbf{0}\longrightarrow z\in H,\\[4pt] &&\hspace*{-60mm} \mathbf{0}\not\in H\longrightarrow R_i\mathbf{0}=\mathbf{0}, \end{eqnarray*} for all $x\in H,$ $\,y,z\in G$ and $i\in\overline{1,n}$. \end{theorem} Conditions formulated in the above theorem are not identical to the conditions used for the characterization of stationary subsets of restrictive Menger $\mathcal P$-algebras (see Theorem 8 in \cite{Trokh3}).
For example, the implication $$ y[R_1x\ldots R_nx]=x\in H\longrightarrow y\in H $$ is omitted. Nevertheless, stationary subsets of functional Menger $\curlywedge$-algebras with a zero have the same properties as stationary subsets of restrictive Menger $\mathcal P$-algebras. More results on stationary subsets and stabilizers in various types of Menger algebras can be found in \cite{Dudtro4} and \cite{Dudtro4a}. In \cite{Dudtro4a} one can find abstract characterizations of algebras of vector-valued functions, positional algebras of functions, Mal'cev--Post iterative algebras and algebras of multiplace functions partially ordered by various types of natural relations connected with domains of functions. \end{document}
Anniversaries (Ukrainian)
Yurii Dmitrievich Sokolov (on his 100th birthday)
Gorbachuk M. L., Luchka A. Y., Mitropolskiy Yu. A., Samoilenko A. M.
Ukr. Mat. Zh. - 1996. - 48, № 11. - pp. 1443-1445

Chronicles (Ukrainian)
Just people of the world
Zukhovitskii S. I.

Article (Russian)
On the optimal rate of convergence of the projection-iterative method and some generalizations of it on a class of equations with smoothing operators
Azizov M.
For some classes of operator equations of the second kind with smoothing operators, we find the exact order of the optimal rate of convergence of generalized projection-iterative methods.

On boundary-value problems for a second-order differential equation with complex coefficients in a plane domain
Burskii V. P.
We study boundary-value problems for a homogeneous partial differential equation of the second order with arbitrary constant complex coefficients and a homogeneous symbol in a bounded domain with smooth boundary. Necessary and sufficient conditions for the solvability of the Cauchy problem are obtained. These conditions are written in the form of a moment problem on the boundary of the domain and applied to the investigation of boundary-value problems. This moment problem is solved in the case of a disk.

Article (Ukrainian)
Multipoint problem for hyperbolic equations with variable coefficients
Klyus I. S., Ptashnik B. I., Vasylyshyn P. B.
By using the metric approach, we study the problem of classical well-posedness of a problem with multipoint conditions with respect to time in a tube domain for linear hyperbolic equations of order 2n (n ≥ 1) with coefficients depending on x. We prove metric theorems on lower bounds for small denominators appearing in the course of the solution of the problem.

Estimate of error of an approximated solution by the method of moments of an operator equation
Gorbachuk M. L., Yakymiv R. Ya.
For an equation Au = f, where A is a closed densely defined operator in a Hilbert space H and f ∈ H, we estimate the deviation of its approximated solution obtained by the moment method from the exact solution. All presented theorems are of direct and inverse character. The paper refers to direct methods of mathematical physics, the development of which was promoted by Yu. D. Sokolov, the well-known Ukrainian mathematician and mechanic, a great humanitarian and righteous man. We dedicate this paper to his blessed memory.

On characteristic properties of singular operators
Koshmanenko V. D., Ota S.
For a linear operator S in a Hilbert space ℋ, the relationship between the following properties is investigated: (i) S is singular (= nowhere closable), (ii) the set ker S is dense in ℋ, and (iii) D(S) ∩ ℛ(S) = {0}.

On one variational criterion of stability of pseudoequilibrium forms
Lukovsky I. O., Mykhailyuk O. V., Timokha A. N.
We establish a variational criterion of stability for the problem of the vibrocapillary equilibrium state which appears in the theory of interaction of limited volumes of liquid with vibrational fields.

Methods for the solution of equations with restrictions and the Sokolov projection-iterative method
Luchka A. Y.
We establish consistency conditions for equations with additional restrictions in a Hilbert space, suggest and justify iterative methods for the construction of approximate solutions, and describe the relationship between these methods and the Sokolov projection-iterative method.

Variational schemes for vector eigenvalue problems
Makarov I. L.
We construct and study exact and truncated self-adjoint three-point variational schemes of any degree of accuracy for self-adjoint eigenvalue problems for systems of second-order ordinary differential equations.

Potential fields with axial symmetry and algebras of monogenic functions of a vector variable. I
Mel'nichenko I. P., Plaksa S. A.
We obtain a new representation of potential and flow functions for space potential solenoidal fields with axial symmetry. We study principal algebraic-analytical properties of monogenic functions of a vector variable with values in an infinite-dimensional Banach algebra of even Fourier series and describe the relationship between these functions and the axially symmetric potential and Stokes flow function. The suggested method for the description of the above-mentioned fields is an analog of the method of analytic functions in the complex plane for the description of plane potential fields.

On the optimization of projection-iterative methods for the approximate solution of ill-posed problems
Pereverzev S. V., Solodkii S. G.
We consider a new version of the projection-iterative method for the solution of operator equations of the first kind. We show that it is more economical in the sense of the amount of used discrete information.

Moduli of continuity defined by zero continuation of functions and K-functionals with restrictions
Radzievskii G. V.
We consider the following K-functional: $$K(\delta ,f)_p : = \mathop {\sup }\limits_{g \in W_{p U}^r } \left\{ {\left\| {f - g} \right\|_{L_p } + \delta \sum\limits_{j = 0}^r {\left\| {g^{(j)} } \right\|_{L_p } } } \right\}, \delta \geqslant 0,$$ where \(f \in L_p := L_p[0, 1]\) and \(W_{p,U}^r\) is a subspace of the Sobolev space \(W_p^r[0, 1]\), \(1 \leqslant p \leqslant \infty\), which consists of functions \(g\) such that \(\int_0^1 g^{(l_j)}(\tau)\, d\sigma_j(\tau) = 0\), \(j = 1, \ldots, n\). Assume that \(0 \leqslant l_1 \leqslant \ldots \leqslant l_n \leqslant r - 1\), that there is at least one jump point \(\tau_j\) for each function \(\sigma_j\), and that if \(\tau_j = \tau_s\) for \(j \neq s\), then \(l_j \neq l_s\).
Let \(\hat f(t) = f(t)\), \(0 \leqslant t \leqslant 1\), let \(\hat f(t) = 0\), \(t < 0\), and let the modulus of continuity of the function f be given by the equality $$\hat \omega _0^{[l]} (\delta ,f)_p : = \mathop {\sup }\limits_{0 \leqslant h \leqslant \delta } \left\| {\sum\limits_{j = 0}^l {( - 1)^j \left( \begin{gathered} l \hfill \\ j \hfill \\ \end{gathered} \right)\hat f( - hj)} } \right\|_{L_p } , \delta \geqslant 0.$$ We obtain the estimates \(K(\delta ^r ,f)_p \leqslant c\hat \omega _0^{[l_1 ]} (\delta ,f)_p \) and \(K(\delta ^r ,f)_p \leqslant c\hat \omega _0^{[l_1 + 1]} (\delta ^\beta ,f)_p \), where \(\beta = (pl_1 + 1)/(p(l_1 + 1))\), and the constant \(c > 0\) does not depend on \(\delta > 0\) and \(f \in L_p\). We also establish some other estimates for the considered K-functional.

Sobolev problem in the complete scale of Banach spaces
Roitberg Ya. A., Sklyarets A. V.
In a bounded domain G ⊂ ℝⁿ, whose boundary is the union of manifolds of different dimensions, we study the Sobolev problem for a properly elliptic expression of order 2m. The boundary conditions are given by linear differential expressions on manifolds of different dimensions. We study the Sobolev problem in the complete scale of Banach spaces. For this problem, we prove the theorem on a complete set of isomorphisms and indicate its applications.

Brief Communications (Russian)
Coercive solvability of a generalized Cauchy-Riemann system in the space $L_p (E)$
Ospanov K. N.
For an inhomogeneous generalized Cauchy-Riemann system with nonsmooth coefficients separated from zero, we establish conditions for the solvability and estimation of a weighted solution and its first-order derivatives.

Brief Communications (Ukrainian)
Periodic solutions of quasilinear hyperbolic integro-differential equations of second order
Petrovskii Ya. B.
We study a periodic boundary-value problem for a quasilinear integro-differential equation with the d'Alembert operator on the left-hand side and a nonlinear integral operator on the right-hand side.
We establish conditions under which the uniqueness theorems are true.

On averaging of differential inclusions in the case where the average of the right-hand side does not exist
Plotnikov V. A., Savchenko V. M.
We consider the problem of application of the averaging method to the asymptotic approximation of solutions of differential inclusions of standard form in the case where the average of the right-hand side does not exist.

Boundary-value problems for systems of integro-differential equations with degenerate kernel
Boichuk О. A., Krivosheya S. A., Samoilenko A. M.
By using methods of the theory of generalized inverse matrices, we establish a criterion of solvability and study the structure of the set of solutions of a general linear Noether boundary-value problem for systems of integro-differential equations of Fredholm type with degenerate kernel.

On the instability of Lagrange solutions in the three-body problem
Sosnitskii S. P.
We consider the relation between the Lyapunov instability of Lagrange equilateral triangle solutions and their orbital instability. We present a theorem on the orbital instability of Lagrange solutions. This theorem is extended to the planar n-body problem.
\begin{document} \title{Stability of Vortex Solutions to an Extended Navier-Stokes System \thanks{ The authors thank the AMS Math Research Communities program (NSF grant DMS 1007980) where this research was initiated, and Center for Nonlinear Analysis (NSF Grants No. DMS-0405343 and DMS-0635983) where part of this research was carried out. GG acknowledges partial support from NSF grant DMS 1212141. GI acknowledges partial support from NSF grant DMS 1252912, and an Alfred P. Sloan research fellowship. JPW thanks the LANL/LDRD program for its support. } } \author{ Gung-Min Gie\thanks{ Department of Mathematics, University of Louisville, Louisville, KY 40292. \email{[email protected]}} \and Christopher Henderson\thanks{ Department of Mathematics, Stanford University, Stanford, CA 94305. \email{[email protected]}} \and Gautam Iyer\thanks{ Mathematical Sciences, Carnegie Mellon University, Pittsburgh, PA 15213. \email{[email protected]}} \and Landon Kavlie\thanks{ Department of Mathematics, Statistics, and Computer Science, University of Illinois at Chicago, Chicago, IL 60607. \email{[email protected]}} \and Jared P. Whitehead\thanks{ Mathematics Department, Brigham Young University, Provo, UT 84602, \email{[email protected]}} } \pagestyle{myheadings} \markboth {Stability of Vortex Solutions to an Extended Navier-Stokes System} {Gie, Henderson, Iyer, Kavlie and Whitehead} \maketitle \begin{abstract} We study the long-time behavior of an extended Navier-Stokes system in $\mathbb{R}^2$ where the incompressibility constraint is relaxed. This is one of several ``reduced models'' of Grubb and Solonnikov '89 and was revisited recently (Liu, Liu, Pego '07) in bounded domains in order to explain the fast convergence of certain numerical schemes (Johnston, Liu '04). Our first result shows that if the initial divergence of the fluid velocity is mean zero, then the Oseen vortex is globally asymptotically stable.
This is the same as the Gallay Wayne '05 result for the standard Navier-Stokes equations. When the initial divergence is not mean zero, we show that the analogue of the Oseen vortex exists and is stable under small perturbations. For completeness, we also prove global well-posedness of the system we study. \end{abstract} \begin{keywords} Navier-Stokes equation, infinite energy solutions, extended system, long-time behavior, Lyapunov function, asymptotic stability \end{keywords} \begin{AMS} 76D05, 35Q30, 76M25, 65M06 \end{AMS} \section{Introduction}\label{sxnIntro} The dynamics of vortices of the incompressible Navier-Stokes equations play a central role in the study of many problems. Mathematically, control of the vorticity production~\cites{bblConstantinFefferman93,bblBealeKatoEtAl84} will settle a longstanding open problem regarding global existence of smooth solutions~\cites{bblFefferman06,bblConstantin01b}. Physically, regions of intense vorticity manifest themselves as cyclones in the atmosphere~\cites{bblMontgomerySmith11,bblDengSmithEtAl12}, and at a slightly decreased intensity as eddies in the oceans~\cites{bblColasMcWilliamsEtAl12,bblPetersenWilliamsEtAl13}. In all cases, regions of intense vorticity are of vital geophysical (and astrophysical) interest. After many years of intense study (see for instance \cites{bblBen-Artzi94, bblBrezis94, bblCarpio94, bblFujigakiMiyakawa01, bblGallayWayne02, bblKajikiyaMiyakawa86, bblOliverTiti00,bblMasuda84, bblSchonbek85,bblSchonbek91,bblSchonbek99, bblWiegner87,bblGigaKambe88,bblGigaMiyakawaEtAl88}), the seminal work of Gallay and Wayne~\cite{bblGallayWayne05} proved the existence of a globally stable (infinite energy) vortex in ${\mathbb{R}^2}$, known as the Oseen vortex. Physically, this means that any ${L^1}$ configuration of vortex patches will eventually combine into a ``giant'' vortex and then dissipate like the linear heat equation. 
The main result of this paper is the analogue of this result for an extended Navier-Stokes system where the incompressibility constraint is relaxed. The equations we study are one of several ``reduced models'' of Grubb and Solonnikov~\cites{bblGrubbSolonnikov91,bblGrubbSolonnikov89}. This model resurfaced recently in~\cite{bblLiuLiuEtAl07} to analyze a stable and efficient numerical scheme proposed in~\cite{bblJohnstonLiu04}. The numerical scheme is a time discrete, pressure Poisson scheme which improves both stability and efficiency of computation by replacing the incompressibility constraint with an auxiliary equation to determine the pressure. The formal time continuous limit of this scheme is the system \begin{equation}\label{eqnExtendedNS1} \begin{beqn}[;C] \partial_t u + (u\cdot \nabla)u + \nabla p = \Delta u,\\ \partial_t d = \Delta d,\\ d = \nabla \cdot u, \end{beqn} \end{equation} where $u$ represents the fluid velocity and $p$ the pressure. We draw attention to the fact that the usual incompressibility constraint, $d = 0$, in the Navier-Stokes equations has been replaced with an evolution equation for $d$. Of course, if $d = 0$ at time $0$, then it will remain $0$ for all time and the system~\eqref{eqnExtendedNS1} reduces to the standard incompressible Navier-Stokes equations. In domains with boundary the system~\eqref{eqnExtendedNS1} has been studied by numerous authors~\cites{bblIyerPegoEtAl12, bblJohnstonWangEtAl14, bblLiuLiuEtAl07, bblLiuLiuEtAl09, bblLiuLiuEtAl10, bblIgnatovaIyerEtAl15} both from an analytical and a numerical perspective. Boundaries, however, cause production of vorticity in a nontrivial manner and make the long time behavior of the vorticity intractable by current methods. Thus, we study the system~\eqref{eqnExtendedNS1} in ${\mathbb{R}^2}$ where at least the long time behavior of vorticity when $d = 0$ is now reasonably understood~\cite{bblGallayWayne05}. 
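The following back-of-the-envelope observation (our sketch; it is the standard motivation for pressure Poisson formulations of this model) explains where the relaxed constraint comes from. If the pressure is regarded as determined by the Poisson equation $\Delta p = -\nabla \cdot \left( (u \cdot \nabla) u \right)$, then taking the divergence of the momentum equation in~\eqref{eqnExtendedNS1} gives
\begin{equation*}
\partial_t d + \nabla \cdot \left( (u \cdot \nabla) u \right) + \Delta p = \Delta d,
\end{equation*}
and so with this choice of $p$ the divergence $d = \nabla \cdot u$ evolves by exactly the heat equation $\partial_t d = \Delta d$ appearing in~\eqref{eqnExtendedNS1}.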
Since $d$ approaches $0$ asymptotically as $t \to \infty$, we expect that the long time behavior of solutions to~\eqref{eqnExtendedNS1} should be the same as that of the standard incompressible Navier-Stokes equations. Indeed, our first result (theorem~\ref{thmBetaZero}) shows that this is the case, \emph{provided} the initial divergence~$d_0$ has mean $0$. In this case, the entropy constructed in~\cite{bblGallayWayne05} can still be used to show global stability of the Oseen vortex. Surprisingly, if $d_0$ does not have mean $0$, the nonlinearity contributes to the entropy non-trivially and we are unable to show global stability of a steady solution using this method. Instead, when $d_0$ has non-zero mean, we use methods similar to~\cites{bblRodrigues09} and show existence (but not uniqueness) of a solution that is stable under small perturbations globally in time, provided $d_0$ has a small enough mean. We are unable to show that this solution is stable under large perturbations. Further, if $d_0$ has large mean, we are unable to show that this solution is stable even under small perturbations. \subsection*{Plan of this paper} In section~\ref{sxnResults} we introduce our notation and state our main results. Next, in section~\ref{sxnMeanZero} we show that if $\beta = 0$ the Oseen vortex is the globally asymptotically stable steady state. Then, in section~\ref{sxnNonMeanZero}, we study the analogue of this result when $\beta \neq 0$. We find the analogue of the Oseen vortex in this context, but are unable to show a global stability result as in the case $\beta = 0$. We instead show that the solution is globally stable under perturbations that are small in Gaussian weighted spaces. The proofs in section~\ref{sxnMeanZero} rely on certain heat-kernel-type bounds for the vorticity and on relative compactness of complete trajectories. We prove these in sections~\ref{sxnInequalities} and~\ref{sxnCompactness} respectively.
Finally, to ensure our long time results are not vacuously true, we conclude this paper with section~\ref{sxnWellPosed}, where we briefly discuss global well-posedness for the extended Navier-Stokes system in this context. \section{Statement of results.}\label{sxnResults} For our purposes it is more convenient to formulate~\eqref{eqnExtendedNS1} in terms of the vorticity \begin{equation*} \omega \defeq \nabla \times u = \partial_1 u_2 - \partial_2 u_1. \end{equation*} Taking the curl of~\eqref{eqnExtendedNS1} gives the system \begin{gather} \label{eqnENSOmega} \partial_t \omega + \nabla\cdot( u\omega) = \Delta \omega, \\ \label{eqnENSd} \partial_t d = \Delta d, \\ \label{eqnENSu} u = K_{BS}* \omega + \nabla^{-1} d, \end{gather} where $K_{BS}$ and $\nabla^{-1}$ are defined by \begin{equation*} K_{BS}(x) \defeq \frac{1}{2\pi} \frac{x^\perp}{|x|^2}, \qquad\text{and}\qquad \nabla^{-1} f \defeq \frac{1}{2\pi} \frac{x}{|x|^2} \ast f. \end{equation*} Equation~\eqref{eqnENSu} simply recovers $u$ as the unique (decaying) vector field with divergence $d$ and curl $\omega$. When $d = 0$, this is simply the Biot-Savart law, hence our notation~$K_{BS}$. Formally integrating equations~\eqref{eqnENSOmega} and~\eqref{eqnENSd}, one immediately sees that the quantities \begin{equation}\label{eqnMean} \alpha \defeq \int_{\mathbb{R}^2} \omega(x, t) \, dx, \qquad\text{and}\qquad \beta \defeq \int_{\mathbb{R}^2} d(x, t) \, dx \end{equation} are constant in time. The role of $\alpha$ in the long term vortex dynamics is mainly that of a scaling factor, and its value is not too important. The value of $\beta$, however, affects the dynamics (or at least our proofs) dramatically. We begin by studying the long term vortex dynamics when $\beta = 0$.
In this case we show that the Oseen vortex defined by \begin{equation*} \tilde \omega(x, t) = \frac{1}{t} G\paren[\Big]{ \frac{x}{\sqrt{t}} } \end{equation*} is the globally stable solution, where \begin{equation*} G(x) \defeq \frac{1}{4\pi} \exp\paren[\Big]{ \frac{-|x|^2}{4} } \end{equation*} is the Gaussian. We state this as our first result. \begin{theorem}\label{thmBetaZero} Suppose $\omega_0$, $d_0 \in L^1(\mathbb{R}^2)$ are such that $\abs{x} d_0 \in L^1({\mathbb{R}^2})$ and $\beta = 0$. If the pair $(\omega, d)$ solves the system~\eqref{eqnENSOmega}--\eqref{eqnENSu}\xspace with initial data $(\omega_0, d_0)$, then for any $p \in [1, \infty]$ we have \begin{equation}\label{eqnOmegaDDecay} \lim_{t\to\infty} t^{1-1/p} \left\| \omega(t, \cdot) - \alpha \tilde \omega(t, \cdot) \right\|_{L^p} = 0 \quad\text{and}\quad \sup_{t \geqslant 0} \; t^{\frac{3}{2}- \frac{1}{p}} \|d(t, \cdot)\|_{L^p} < \infty. \end{equation} \end{theorem} When $\beta \neq 0$, we are unable to prove a result as strong as theorem~\ref{thmBetaZero}, because a key entropy estimate is destroyed by the nonlinearity. To formulate our result in this situation, we first identify the analogue of the Oseen vortex. We show (in section~\ref{sxnWs}) that the radial self-similar solutions to the system~\eqref{eqnENSOmega}--\eqref{eqnENSu}\xspace are obtained by rescaling $W_s = W_s(\beta)$, where $W_s$ is the unique radially symmetric solution of the ODE \begin{equation}\label{eqnWs} \frac{\partial_r W_s}{W_s} = \frac{-r}{2} + \frac{\beta}{2\pi r} \paren[\big]{1 - e^{-r^2/4}}, \quad \text{with normalization } \int_{\mathbb{R}^2} W_s \, dx = 1.
\end{equation} A direct calculation shows that the pair $(\alpha \tilde \omega_\beta, \tilde d_\beta)$ defined by \begin{equation}\label{eqnWsInXandT} \tilde \omega_\beta( x, t ) = \frac{1}{t} W_s\paren[\Big]{\frac{x}{\sqrt{t}} }, \qquad \tilde d_\beta( x, t ) = \frac{\beta}{t} G\paren[\Big]{\frac{x}{\sqrt{t}} }, \end{equation} is a radially symmetric self-similar solution to the system~\eqref{eqnENSOmega}--\eqref{eqnENSu}\xspace, making $\tilde \omega_\beta$ the analogue of the Oseen vortex. When $\beta = 0$, we see $W_s$ is exactly the Gaussian $G$, but this is no longer true when $\beta \neq 0$. When $\beta < 4\pi$ the shape of $W_s$ is similar to that of the Gaussian, in that $W_s$ attains its maximum at $0$ and is strictly decreasing for $r > 0$. When $\beta > 4 \pi$, however, $W_s$ attains its maximum at some $r_0 > 0$ and the profile looks like that of a ``vortex ring'' (see figure~\ref{fgrWs}). For any $\beta \neq 0$, the interaction between $W_s$ and the nonlinearity is largely responsible for the failure of the method we use to prove theorem~\ref{thmBetaZero}. \begin{figure} \caption{Plots of $W_s$ vs $r$ for $\beta \in \set{-2\pi, 0, 2\pi, 4\pi, 8\pi, 16\pi}$.} \label{fgrWs} \end{figure} Our main result when $\beta \neq 0$ uses the Gaussian weighted spaces appearing in~\cites{bblGallayWayne02, bblGallayWayne05, bblRodrigues09} and shows that the solution $(\alpha \tilde \omega_\beta, \tilde d_\beta )$ is stable under small perturbations. Explicitly, define the weighted space $L^2_w$ by \begin{equation}\label{eqnWeightedSpaces} L^2_w \defeq \{f \in L^2(\mathbb{R}^2): \|f\|_w < \infty\}, \quad\text{where}\quad \|f\|_w^2 \defeq \int G(x)^{-1} |f(x)|^2 dx. \end{equation} Now our stability result when $\beta \neq 0$ is as follows: \begin{theorem}\label{thmBetaNonZero} Let $t_0 > 0$ and $(\omega, d)$ solve the system~\eqref{eqnENSOmega}--\eqref{eqnENSu}\xspace on the time interval $[t_0, \infty)$.
For any $\gamma \in (0, 1/2)$, there exists $\eps_0 = \eps_0( \gamma) > 0$ such that if \[ |\beta|(1+|\alpha|) + \norm[\big]{ \omega(t_0) - \alpha \tilde \omega_\beta(t_0) }_w + \norm[\big]{ d(t_0) - \tilde d_\beta(t_0) }_w \leqslant \eps_0, \] then \begin{equation*} \lim_{t \to \infty} t^\gamma \norm{ \tilde G(t)^{-1/2} \paren{\omega(t) - \alpha \tilde \omega_\beta(t) } }_w = 0 \end{equation*} and \begin{equation*} \sup_{t\geqslant 0}\: t^{1/2} \norm{ \tilde G(t)^{-1/2} \paren{d(t) - \tilde d_\beta(t) } }_w <\infty. \end{equation*} Here $\tilde G(x, t) = G( x / \sqrt{t} )$ is the rescaled Gaussian. \end{theorem} When $\beta = 0$, the function $\tilde \omega_\beta = \tilde \omega$, and theorem~\ref{thmBetaZero} proves stability of $\tilde \omega$ (albeit under a different norm) without any smallness assumption on the perturbation. Finally, to ensure that theorems~\ref{thmBetaZero} and~\ref{thmBetaNonZero} are not vacuously true, we establish global existence of solutions to the system~\eqref{eqnENSOmega}--\eqref{eqnENSu}\xspace. While little work has been done on this system in $\mathbb{R}^2$, the existence and uniqueness theory does not differ substantially from the classical theory, and we address this next. \begin{proposition}\label{ppnExistence} Define the space $X$ to be either $L^1$ or $L^2_w$. If $\omega_0,d_0 \in X$, then there exists a unique global-in-time solution $(\omega, d)$ to the system~\eqref{eqnENSOmega}--\eqref{eqnENSu}\xspace in $X$ with initial data $(\omega_0, d_0)$. \end{proposition} The proof of this proposition is similar in structure to results in~\cites{bblBen-Artzi94,bblBrezis94,bblGallayWayne02,bblGallayWayne05,bblKato94,bblRodrigues09}, and we do not provide complete details. However, for the convenience of the reader, we sketch a brief outline in section~\ref{sxnWellPosed}. \section{Global stability for mean zero initial divergence.}\label{sxnMeanZero} We devote this section to proving theorem~\ref{thmBetaZero}.
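Before beginning the proof, we record how Lebesgue norms transform under the self-similar change of variables used below: if $W(\xi, \tau) = t\, \omega(x, t)$ with $\xi = x/\sqrt{t}$ and $\tau = \log(t)$, then a change of variables in the integral gives
\begin{equation*}
	\norm{W(\tau)}_{L^p} = t^{1 - \frac{1}{p}} \norm{\omega(t)}_{L^p},
\end{equation*}
and similarly for $D$ and $d$. In particular, the decay rates in~\eqref{eqnOmegaDDecay} are equivalent to convergence and boundedness statements for the rescaled variables, and this is the form in which they will be proved.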
The main idea in the case where $\beta = 0$ is the same as that used by Gallay and Wayne in~\cite{bblGallayWayne05}. However, to use this method, certain compactness criteria and vorticity bounds need to be established. In order to present a self-contained treatment, we begin with the heart of the matter (following~\cite{bblGallayWayne05}), and only state the compactness criteria where required. We postpone the proofs of the vorticity bounds and these criteria to sections~\ref{sxnInequalities} and~\ref{sxnCompactness} respectively. \subsection{Reformulation using self-similar coordinates.}\label{sxnSelfSimilar} We begin by reformulating theorem~\ref{thmBetaZero} in the natural self-similar coordinates associated to~\eqref{eqnExtendedNS1}. \begin{proof}[Proof of theorem~\ref{thmBetaZero}] Define the coordinates $\xi$ and $\tau$ by \begin{equation} \xi \defeq \frac{x}{\sqrt{t}}, \quad \tau \defeq \log(t), \end{equation} and the rescaled velocity, vorticity, and divergence by \begin{equation}\label{eqnVarChange} U( \xi, \tau ) \defeq \sqrt{t} u( x, t ), \quad W(\xi, \tau) \defeq t \omega(x,t), \quad\text{ and }\quad D(\xi, \tau) \defeq t d(x,t). \end{equation} With this transformation the system~\eqref{eqnENSOmega}--\eqref{eqnENSu}\xspace becomes \begin{gather} \label{eqnW} \partial_\tau W + \nabla\cdot (UW) = \mathcal{L} W, \\ \label{eqnD} \partial_\tau D = \mathcal{L} D, \\ \label{eqnU} U = K_{BS}* W + \nabla^{-1} D, \end{gather} where $\mathcal{L}$ is the operator defined by \begin{equation}\label{eqnCLDef} \mathcal{L} f \defeq \Delta f + \frac{1}{2} \xi \cdot \nabla f + f. \end{equation} In the rescaled variables we will prove the following result: \begin{proposition}\label{ppnMeanZero} Let $(W, D)$ solve the system~\eqref{eqnW}--\eqref{eqnU}\xspace with initial data $(W_0, D_0)$ such that $W_0, (1 + \abs{\xi})D_0 \in L^1(\mathbb{R}^2)$.
If $\alpha = \int W_0 \, d\xi$ and $\beta = \int D_0 \, d\xi = 0$, then \begin{equation}\label{eqnMeanZeroConv} \lim_{\tau\to\infty} \left\| W - \alpha G\right\|_{L^p} = 0 \quad\text{and}\quad \sup_{\tau\geqslant 0} \; e^{\frac{\tau}{2}} \left\| D \right\|_{L^p} < \infty \end{equation} for any $p \in [1, \infty]$. \end{proposition} Undoing the change of variables immediately yields theorem~\ref{thmBetaZero}. \end{proof} Before proving proposition~\ref{ppnMeanZero} we pause momentarily to explain why the proof in this case is similar to the proof in~\cite{bblGallayWayne05} for the standard Navier-Stokes equations. The only mean zero function $D$ that decays sufficiently at infinity and is an equilibrium solution to~\eqref{eqnD} is the $0$ function, in which case the system~\eqref{eqnW}--\eqref{eqnU}\xspace reduces to the standard Navier-Stokes equations in self-similar coordinates. Thus, when $\beta = 0$, the long time dynamics of the system~\eqref{eqnW}--\eqref{eqnU}\xspace should be similar to that of the standard Navier-Stokes equations (in self-similar coordinates). Indeed, as we show below, the key step of the proof in~\cite{bblGallayWayne05} goes through almost unchanged. Of course, the required bounds and compactness estimates leading up to this still require work to prove and, for clarity of presentation, we postpone their proofs to sections~\ref{sxnInequalities} and~\ref{sxnCompactness}. The proof of proposition~\ref{ppnMeanZero} consists of two main steps. The first step is to establish relative compactness of trajectories of the system~\eqref{eqnW}--\eqref{eqnU}\xspace in the space $L^1$; this is our next lemma. \begin{lemma}\label{lmaCompactness} Suppose that $W$ and $D$ solve the system~\eqref{eqnW}--\eqref{eqnU}\xspace in $C^0([0,\infty), L^1(\mathbb{R}^2)\times L^1(\mathbb{R}^2))$. Then the trajectory $\{(W(\tau),D(\tau))\}_{\tau\in[0,\infty)}$ is relatively compact in $L^1(\mathbb{R}^2)$.
Further, \begin{equation}\label{eqnWbound} \abs{W(\xi, \tau)} \leqslant C \int_{\mathbb{R}^2} \exp \paren[\Big]{ \frac{-\abs{\xi - \eta e^{-\tau/2}}^2 }{C} } \abs{W_0(\eta)} \, d\eta \end{equation} for some constant $C$ which depends only on $\norm{W_0}_{L^1}$ and $\norm{(1 + \abs{\xi})D_0(\xi)}_{L^1}$. \end{lemma} The second step in the proof of proposition~\ref{ppnMeanZero} is to characterize complete trajectories of the system~\eqref{eqnW}--\eqref{eqnU}\xspace. To do this, we need to introduce a weighted $L^p$ space. For any $m \geqslant 0$, $p \in [1, \infty)$ we define the space $L^p(m)$ by \begin{equation*} L^p(m) = \set[\Big]{ f \in L^p: \|f\|_{L^p(m)} < \infty, \text{ where } \|f\|_{L^p(m)}^p = \int (1 + |\xi|^2)^\frac{pm}{2} |f(\xi)|^p d\xi }. \end{equation*} It turns out that the only complete trajectories of the system~\eqref{eqnW}--\eqref{eqnU}\xspace that are bounded in $L^2(m)$ are those for which $W$ is a scalar multiple of the Gaussian and $D$ vanishes. This is our next lemma. \begin{lemma}\label{lmaMeanZeroCTraj} Let $m > 3$ and suppose that $\{(W(\tau),D(\tau))\}$ is a complete trajectory of the system~\eqref{eqnW}--\eqref{eqnU}\xspace which is bounded in $L^2(m)$. Then, if $\int W_0 = \alpha$ and $\int D_0 = 0$, we must have $W(\tau) = \alpha G$ and $D = 0$ for all $\tau$. \end{lemma} Momentarily postponing the proofs of lemmas~\ref{lmaCompactness} and~\ref{lmaMeanZeroCTraj}, we prove proposition~\ref{ppnMeanZero}. \begin{proof}[Proof of proposition~\ref{ppnMeanZero}] Let $\Omega$ be the $\omega$-limit set of the trajectory $(W, D)$. Since lemma~\ref{lmaCompactness} guarantees $\{W(\tau)\}$ and $\{D(\tau)\}$ are relatively compact in $L^1$, $\Omega$ must be non-empty, compact, and fully invariant under the evolution of the system~\eqref{eqnW}--\eqref{eqnU}\xspace. Consequently, the trajectory of any $(\overline{W}, \overline{D}) \in \Omega$ must be complete. Further, the upper bound~\eqref{eqnWbound} implies $\overline{W}$ is bounded above by a Gaussian.
To see this, choose a sequence of times $\tau_n \to \infty$ such that $(W(\tau_n)) \to \overline{W}$ in $L^1$ and almost everywhere. Now dominated convergence and~\eqref{eqnWbound} imply \begin{equation*} \begin{split} |\overline{W}(\xi)| &= \lim_{n\to\infty} |W(\xi, \tau_n)| \leqslant \lim_{n\to\infty} C \int \exp\paren[\Big]{ \frac{-|\xi-\eta e^{\frac{-\tau_n}{2}}|^2}{C} } |W_0(\eta)| \, d\eta \\ &\leqslant C \|W_0\|_{L^1} \exp\paren[\Big]{ \frac{-|\xi|^2}{C} }. \end{split} \end{equation*} Consequently $\Omega \subset L^2(m)^2$ for every $m$. This implies that for any $(\overline{W}, \overline{D}) \in \Omega$, the associated complete trajectory is bounded in $L^2(m)^2$ for every $m$. Thus lemma~\ref{lmaMeanZeroCTraj} shows $\Omega \subset \{(\theta G,0): \theta \in \mathbb{R}\}$. Since total mass is invariant under the flow (and $\Omega \neq \emptyset$), it follows that $\Omega = \{(\alpha G, 0)\}$, where $\alpha$ is defined in~\eqref{eqnMean}. Since $\Omega$ contains exactly one element and $(W(\tau), D(\tau))$ is relatively compact in $L^1$, this immediately implies the first equality in~\eqref{eqnMeanZeroConv} for $p=1$. Combined with the Gaussian upper bound implied by~\eqref{eqnWbound}, we obtain the first equality in~\eqref{eqnMeanZeroConv} for any $p < \infty$. The proof for $p = \infty$ uses bounds on the semigroup generated by the operator $\mathcal{L}$ and an integral representation for $W$. Since we develop these bounds in section~\ref{sxnCompactness}, we prove $L^\infty$ convergence as lemma~\ref{lmaLinf} at the end of section~\ref{sxnCompactness}. The second inequality in~\eqref{eqnMeanZeroConv} follows directly from the explicit solution formula for the heat equation. Since this will also be used later, we extract it as a lemma. \begin{lemma}\label{d_estimates} Let $D$ be a solution to~\eqref{eqnD} with initial data $D_0$.
Suppose \begin{equation*} \int_{\mathbb{R}^2} (1 + \abs{\xi} ) \abs{D_0(\xi)} \, d\xi < \infty \quad\text{and}\quad \int_{\mathbb{R}^2} D_0 \, d\xi = 0. \end{equation*} Then there exists a universal constant $C>0$ such that \begin{equation}\label{eqnDfastDecay} \norm{D(\tau)}_{L^p} \leqslant C e^{-\tau/2} \int_{\mathbb{R}^2} (1 + \abs{\xi}) \abs{D_0(\xi)} \, d\xi \end{equation} for all $p \in [1,\infty]$. \end{lemma} We remark that $D$ decays to $0$ faster than the rescaled heat kernel because the initial data has mean zero. This concludes the proof of proposition~\ref{ppnMeanZero}. \end{proof} It remains to prove lemmas~\ref{lmaCompactness}--\ref{d_estimates}. The proof of lemma~\ref{d_estimates} is short, and we present it here. \begin{proof}[Proof of lemma~\ref{d_estimates}] Since the heat kernel is explicit in the $x$-$t$ coordinates, we return to those coordinates and prove that $d$ satisfies the second inequality in~\eqref{eqnOmegaDDecay}. Let $\bar G(x, t) = G(x / \sqrt{t}) / t$ be the heat kernel. Observe \begin{align*} \norm{d(t)}_{L^p} &= \norm{d_0 * \bar G(t)}_{L^p} = \norm[\Big]{ \int_{\mathbb{R}^2} d_0(y) \bar G(x-y, t) \, dy }_{L^p(x)} \\ &= \norm[\Big]{ \int_{\mathbb{R}^2} d_0(y) \paren[\big]{ \bar G(x-y, t) - \bar G(x, t) } \, dy }_{L^p(x)} \\ &\leqslant \frac{1}{t^{1 - 1/p}} \int_{\mathbb{R}^2} \abs{d_0(y)} \norm[\big]{ G( x - t^{-1/2} y ) - G(x) }_{L^p(x)} \, dy \\ & \leqslant \frac{C}{t^{\frac{3}{2} - \frac{1}{p}}} \int_{\mathbb{R}^2} \abs{y d_0(y)} \, dy, \end{align*} where the second equality uses $\int_{\mathbb{R}^2} d_0 \, dy = 0$. This implies the second inequality in~\eqref{eqnOmegaDDecay} and concludes the proof. \end{proof} The proof of lemma~\ref{lmaCompactness} is technical; we postpone the proof of~\eqref{eqnWbound} to section~\ref{sxnInequalities} and the proof of compactness to section~\ref{sxnCompactness}. We prove lemma~\ref{lmaMeanZeroCTraj} in section~\ref{sxnLyapunov}. \subsection{Characterization of complete trajectories.}\label{sxnLyapunov} The characterization of complete trajectories of the system~\eqref{eqnW}--\eqref{eqnU}\xspace when $\beta = 0$ is identical to the characterization of complete trajectories of the 2D Navier-Stokes equations presented in~\cite{bblGallayWayne05}. Since the proof is short and elegant, we reproduce it here for the reader's convenience. There are two steps to this proof. First, we show that in a complete trajectory both $W$ and $D$ must have constant sign. Of course, since $D$ is mean-zero, this forces $D = 0$ identically, and reduces to the situation already considered by Gallay and Wayne~\cite{bblGallayWayne05}. The second, and most interesting, step is to use the Boltzmann entropy functional to show that $W$ must be a scalar multiple of a Gaussian. This is exactly what fails in the case where $D$ is not mean zero. We state each of these steps as lemmas below. \begin{lemma}\label{lmaSign} Suppose $m>3$ and $(W,D)\in C^0(\mathbb{R}, L^2(m)^2)$ is a solution of the system~\eqref{eqnW}--\eqref{eqnU}\xspace which is bounded in $L^2(m)$. Then both $W$ and $D$ must have constant sign. \end{lemma} \begin{lemma}\label{lmaEntropy} Let $(W, D)$ be a solution to the system~\eqref{eqnW}--\eqref{eqnU}\xspace with $W_0 \in L^2(m)$, $D_0 = 0$, $W_0 \geqslant 0$.
For the relative entropy $H$ given by \begin{equation}\label{eqnHdef} H(W) = \int_{\mathbb{R}^2} W \ln \paren[\Big]{\frac{W}{G}} \, d\xi, \end{equation} we have \begin{equation}\label{eqnHdecay} \partial_\tau H = -\int_{\mathbb{R}^2} W \abs[\Big]{ \nabla \ln \paren[\Big]{ \frac{W}{G} } }^2 \, d\xi. \end{equation} \end{lemma} Lemma~\ref{lmaMeanZeroCTraj} immediately follows from lemmas~\ref{lmaSign}--\ref{lmaEntropy}, and we spell out the deduction here for completeness. \begin{proof}[Proof of lemma~\ref{lmaMeanZeroCTraj}] By lemma~\ref{lmaSign}, we know that both $W$ and $D$ have constant sign. Since $\int D = 0$, this forces $D = 0$ identically. Further, by symmetry we can assume $W \geqslant 0$. Note that by the comparison principle the set $L^2(m) \cap \set{ \tilde{W} \geqslant 0}$ is invariant under the dynamics of the system~\eqref{eqnW}--\eqref{eqnU}\xspace. Restricting our attention to this set, we observe that the entropy $H$ is strictly decreasing except on the set of equilibria $\tilde{W} = \theta G$. By LaSalle's invariance principle this implies that $W = \theta G$ for some $\theta$. Since $\int W = \alpha$, this forces $\theta = \alpha$, concluding the proof. \end{proof} It remains to prove lemmas~\ref{lmaSign} and~\ref{lmaEntropy}, which we do in sections~\ref{sxnSign} and~\ref{sxnEntropy} respectively. \subsubsection{The sign of complete trajectories.}\label{sxnSign} The main idea behind the proof of lemma~\ref{lmaSign} is that the $L^1$ norm can be used as a Lyapunov functional. However, we first need a relative compactness lemma to guarantee that the $\alpha$- and $\omega$-limit sets are non-empty, and we state this next. \begin{lemma}\label{lmaWCompact} Let $m>3$ and suppose $(W,D)\in C^0(\mathbb{R}, L^2(m)^2)$ is a solution to the system~\eqref{eqnW}--\eqref{eqnU}\xspace which is bounded in $L^2(m)$. The trajectory $\{( W(\tau), D(\tau) )\}_{\tau\in \mathbb{R}}$ is relatively compact in $L^2(m)$.
\end{lemma} Lemma~\ref{lmaWCompact} is also used in the proof of lemma~\ref{lmaCompactness}, and we defer its proof to section~\ref{sxnCompactness}. We prove lemma~\ref{lmaSign} next. \begin{proof}[Proof of lemma~\ref{lmaSign}] Define the Lyapunov function $\Phi$ by $\Phi(W, D) = \norm{W}_{L^1} + \norm{D}_{L^1}$. We claim that $\Phi$ is always decreasing, and is strictly decreasing in time if and only if one of $W$ and $D$ does not have a constant sign. To see this, define $W^+$ and $W^-$ to be the solutions to \begin{equation*} \partial_\tau W^{+} + \nabla \cdot (U W^+) = \mathcal{L} W^+ \quad\text{and}\quad \partial_\tau W^{-} + \nabla \cdot (U W^-) = \mathcal{L} W^-, \end{equation*} with initial data $W_0^+ = \max\set{W_0, 0}$ and $W_0^- = \max\set{-W_0,0}$ respectively. We clarify that $U = K_{BS}* W + \nabla^{-1} D$ is the velocity field of the original solution, and does not depend on $W^+$ or $W^-$. Clearly $W^\pm \geqslant 0$ and $W = W^+ - W^-$ for all time. Further, if both $W^+$ and $W^-$ are non-zero initially, the strong maximum principle implies that for any $\tau > 0$ the supports of $W^\pm(\tau)$ will necessarily intersect. Consequently, for any $\tau > 0$, \begin{multline} \int_{\mathbb{R}^2} \abs{W(\xi, \tau)} \, d\xi < \int_{\mathbb{R}^2} \paren{W^+(\xi, \tau) + W^-(\xi, \tau)} \, d\xi \\ = \int_{\mathbb{R}^2} \paren{W_0^+(\xi) + W_0^-(\xi)} \, d\xi = \int_{\mathbb{R}^2} \abs{W_0} \, d\xi.\label{L1_decrease} \end{multline} A similar argument applies to $D$; replacing $\tau = 0$ with an arbitrary time $\tau_0$ then shows that $\Phi$ is strictly decreasing in time if and only if either $W$ or $D$ does not have a constant sign. To see that complete trajectories have constant sign, we appeal to lemma~\ref{lmaWCompact} to guarantee that the trajectory $\set{(W(\tau), D(\tau))}_{\tau \in \mathbb{R}}$ has non-empty $\alpha$- and $\omega$-limit sets.
Now choose two sequences of times $(\overline{\tau}_n) \to \infty$ and $(\underline{\tau}_n) \to -\infty$ such that \begin{equation*} \underline{W} = \lim W(\underline{\tau}_n) \quad\text{and}\quad \overline{W} = \lim W( \overline{\tau}_n ) \quad \text{in $L^2(m)$}. \end{equation*} Since $\int W$ is conserved we must have $\int \overline{W} = \int \underline{W}$. Further, by LaSalle's invariance principle both $\overline{W}$ and $\underline{W}$ have constant sign. Consequently, for any $\tau \in \mathbb{R}$, \begin{equation*} \abs[\Big]{ \int_{\mathbb{R}^2} \underline{W} \, d\xi } = \int_{\mathbb{R}^2} \abs{ \underline{W} } \, d\xi \geqslant \int_{\mathbb{R}^2} \abs{ W(\tau) } \, d\xi \geqslant \int_{\mathbb{R}^2} |\overline{ W }| \, d\xi = \abs[\Big]{ \int_{\mathbb{R}^2} \overline{ W } \, d\xi } = \abs[\Big]{ \int_{\mathbb{R}^2} \underline{ W } \, d\xi }. \end{equation*} Hence, $\int |W(\tau)|d\xi$ is constant in $\tau$. This, along with~\eqref{L1_decrease}, shows that $W$ has a constant sign. A similar argument can be applied to $D$. This shows that $\Phi$ is constant in time and hence both $W$ and $D$ must have constant sign. \end{proof} \subsubsection{Decay of the Boltzmann entropy.}\label{sxnEntropy} The use of the relative entropy $H$ in this context was suggested by C. Villani, and the decay (when $D = 0$) is a direct calculation that was carried out in~\cite{bblGallayWayne05}*{lemma 3.2}. We briefly sketch a few details here for the reader's convenience. \begin{proof}[Proof of lemma~\ref{lmaEntropy}] Differentiating~\eqref{eqnHdef} with respect to $\tau$ gives \begin{equation*} \partial_\tau H = \int_{\mathbb{R}^2} \paren[\Big]{1 + \ln \paren[\Big]{\frac{W}{G}} } \partial_\tau W = \int_{\mathbb{R}^2} \paren[\Big]{1 + \ln \paren[\Big]{\frac{W}{G}} } \paren[\big]{\mathcal{L} W - \nabla \cdot (U W)}.
\end{equation*} Using the identity $\nabla G / G = -\xi / 2$, the term involving $\mathcal{L}$ simplifies to \begin{multline*} \int_{\mathbb{R}^2} \paren[\Big]{1 + \ln\paren[\Big]{\frac{W}{G}} } \mathcal{L} W \, d\xi = -\int_{\mathbb{R}^2} \paren[\Big]{\nabla W + \frac{\xi}{2} W} \cdot \paren[\Big]{\frac{\nabla W}{W} - \frac{\nabla G}{G} } \, d\xi \\ = -\int_{\mathbb{R}^2} W \abs[\Big]{ \frac{\nabla W}{W} - \frac{\nabla G}{G} }^2 \, d\xi = -\int_{\mathbb{R}^2} W \abs[\Big]{ \nabla \ln \paren[\Big]{ \frac{W}{G} } }^2 \, d\xi. \end{multline*} We claim the convection terms integrate to $0$. Indeed, \begin{equation*} - \int_{\mathbb{R}^2} \paren[\Big]{1 + \ln\paren[\Big]{\frac{W}{G}} } \nabla \cdot (U W) \, d\xi = \int_{\mathbb{R}^2} U \cdot \nabla W \, d\xi + \frac{1}{2} \int_{\mathbb{R}^2} W U \cdot \xi \, d\xi. \end{equation*} The first term on the right clearly integrates to $0$ (since $\nabla \cdot U = D = 0$ here). If $U$ decayed sufficiently fast at infinity, we could write $W = \nabla \times U$, integrate the second term by parts, and obtain \begin{equation}\label{eqnNonMagic} \frac{1}{2} \int_{\mathbb{R}^2} W U \cdot \xi \, d\xi = \frac{1}{4} \int_{\mathbb{R}^2} \xi \cdot \nabla^\perp \abs{U}^2 \, d\xi = 0. \end{equation} Without the decay assumption one can use the Biot-Savart law and Fubini's theorem (see for instance~\cite{bblGallayWayne05}*{lemma 3.2}) and still show this term integrates to $0$. This immediately yields~\eqref{eqnHdecay} as desired. \end{proof} \section{Stability when the initial divergence has non-zero mean}\label{sxnNonMeanZero} In this section, we study the long time behavior of the system~\eqref{eqnENSOmega}--\eqref{eqnENSu}\xspace when $\beta \neq 0$ (i.e.\ when the mean of the initial divergence is non-zero) and prove theorem~\ref{thmBetaNonZero}. Unlike in section~\ref{sxnMeanZero}, the divergence $D$ of the equilibrium solution to the system~\eqref{eqnW}--\eqref{eqnU}\xspace is non-zero.
Consequently, the steady state of the system~\eqref{eqnW}--\eqref{eqnU}\xspace is no longer a Gaussian (like the Oseen vortex), but the radial function $W_s$ defined by~\eqref{eqnWs}. We remark, however, that different, non-radial, steady solutions to the system~\eqref{eqnW}--\eqref{eqnU}\xspace may exist and we can neither prove nor disprove their existence. Further, it turns out that the radial state $W_s$ does not ``play nice'' with the non-linearity. We are unable to show decay of the analogue of the Boltzmann entropy~\eqref{eqnHdef}, which is a key step in both~\cite{bblGallayWayne05} and the proof of theorem~\ref{thmBetaZero}. We can, however, show that $W_s$ is stable under small perturbations globally in time (theorem~\ref{thmBetaNonZero}) using techniques that are similar to those in~\cites{bblRodrigues09,bblGallayMaekawa13}. This is the main goal of this section. In section~\ref{sxnWs}, we derive an explicit equation for the radial steady state~$W_s$. In section~\ref{sxnEntropyFail}, we compute the evolution of the Boltzmann entropy functional, mainly to point out the breaking point of the argument of Gallay and Wayne~\cite{bblGallayWayne05}. In section~\ref{sxnRodrigues}, we use a different method (similar to that in~\cite{bblRodrigues09}) to prove stability under small perturbations (theorem~\ref{thmBetaNonZero}), modulo the proofs of a few estimates which are presented in section~\ref{sxnDivBounds}. \subsection{The radial steady state}\label{sxnWs} Since the equation for $D$ is linear, we find that $D \to \beta G$ as $\tau \to \infty$. This can be seen, for instance, by noticing that $D - \beta G$ satisfies the heat equation in Euclidean coordinates with initial mean zero. An argument analogous to the proof of lemma~\ref{d_estimates} gives the precise decay. Turning to $W$, we denote the steady state by $W_s$. For convenience, we normalize $W_s$ so that $\int W_s \, d\xi = 1$.
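As an aside, the profile $W_s$ can easily be computed numerically from the ODE~\eqref{eqnWs}, reproducing the qualitative behavior shown in figure~\ref{fgrWs}. The following sketch (illustrative only, and not code used elsewhere in this paper; the truncation radius and step count are arbitrary) integrates $\ln W_s$ with the trapezoid rule and then normalizes the total mass.

```python
import math

def ws_profile(beta, r_max=12.0, n=4000):
    """Integrate d(ln W_s)/dr = -r/2 + (beta/(2 pi r))(1 - exp(-r^2/4))
    on [0, r_max], then normalize so that
    int_{R^2} W_s dx = 2 pi int_0^infty W_s(r) r dr = 1."""
    dr = r_max / n
    rs = [i * dr for i in range(n + 1)]

    def slope(r):
        # the right hand side vanishes as r -> 0, since 1 - e^{-r^2/4} ~ r^2/4
        if r == 0.0:
            return 0.0
        return -r / 2 + beta / (2 * math.pi * r) * (1 - math.exp(-r * r / 4))

    log_w = 0.0
    vals = [1.0]
    for i in range(1, n + 1):
        log_w += 0.5 * (slope(rs[i - 1]) + slope(rs[i])) * dr
        vals.append(math.exp(log_w))

    # normalize the total mass (trapezoid rule in the radial variable)
    mass = 2 * math.pi * sum(
        0.5 * (vals[i - 1] * rs[i - 1] + vals[i] * rs[i]) * dr
        for i in range(1, n + 1))
    return rs, [v / mass for v in vals]

# beta = 0 recovers the Gaussian G, with maximum 1/(4 pi) at r = 0;
# beta = 8 pi has its maximum at some r_0 > 0 (the "vortex ring" shape).
rs, w0 = ws_profile(0.0)
rs, w8 = ws_profile(8 * math.pi)
```

For $\beta = 0$ this recovers the Gaussian $G$, with maximum $1/(4\pi)$ at the origin, while for $\beta = 8\pi$ the maximum occurs at a strictly positive radius, consistent with the ``vortex ring'' profile described above.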
We claim that a unique radial steady state exists, and is exactly given by~\eqref{eqnWs}. (We cannot, however, rule out the possibility that other non-radial steady states exist.) To see that the unique radial steady state satisfies~\eqref{eqnWs}, we use equation~\eqref{eqnW} to obtain \[ 0 = -\left(K_{BS}* W_s\right) \cdot \nabla W_s - \beta\nabla\cdot\left((\nabla^{-1} G) W_s\right)+ \mathcal{L} W_s, \] in $L^2_w$. Under the assumption that $W_s$ is radial, $K_{BS}* W_s \cdot \nabla W_s = 0$ and hence \[ \beta\nabla\cdot\left((\nabla^{-1} G) W_s\right) = \mathcal{L} W_s = \nabla \cdot \paren[\Big]{G \nabla\frac{W_s}{G} }. \] Consequently, \[ \nabla^\perp \varphi = -\beta \left(\nabla^{-1} G\right) W_s + G \nabla \frac{W_s}{G}, \] for some function $\varphi$. Since the right hand side is radially pointing and smooth, we must have $\nabla^\perp \varphi = 0$ identically. Switching to polar coordinates immediately shows that $W_s$ satisfies~\eqref{eqnWs}, and reverting to the $x$ and $t$ coordinates shows that $(\tilde \omega_\beta, \tilde d_\beta)$, defined in~\eqref{eqnWsInXandT}, is the unique radially symmetric self-similar solution to the system~\eqref{eqnENSOmega}--\eqref{eqnENSu}\xspace. \subsection{The Boltzmann entropy.}\label{sxnEntropyFail} Before embarking on the proof of theorem~\ref{thmBetaNonZero}, we briefly study the analogue of the Boltzmann entropy in this situation. Naturally, the Gaussian in this context needs to be replaced with $W_s$, the solution to~\eqref{eqnWs}, and so~\eqref{eqnHdef} now becomes \begin{equation*} H(W) = \int_{\mathbb{R}^2} W \ln \paren[\Big]{\frac{W}{W_s}} \, d\xi.
\end{equation*} Computing $\partial_\tau H$ and performing a calculation similar to that in section~\ref{sxnEntropy}, we obtain \begin{equation*} \begin{split} \partial_\tau H &= \int_{\mathbb{R}^2} W (K_{BS}* W) \cdot \paren[\Big]{ \frac{\nabla W}{W} - \frac{\nabla W_s}{W_s} } \, d\xi -\int_{\mathbb{R}^2} W \abs[\Big]{ \frac{ \nabla W}{W} - \frac{\nabla W_s}{W_s} }^2 \, d\xi \\ &= - \int_{\mathbb{R}^2} W (K_{BS}* W) \cdot \frac{\nabla W_s}{W_s} \, d\xi -\int_{\mathbb{R}^2} W \abs[\Big]{ \frac{ \nabla W}{W} - \frac{\nabla W_s}{W_s} }^2 \, d\xi. \end{split} \end{equation*} The second term is, of course, always nonpositive. The first term can be simplified using~\eqref{eqnWs} to \begin{multline*} - \int_{\mathbb{R}^2} W (K_{BS}* W) \cdot \frac{\nabla W_s}{W_s} \, d\xi \\ = \int_{\mathbb{R}^2} W (K_{BS}* W) \cdot \frac{\xi}{2} \, d\xi + \beta \int_{\mathbb{R}^2} W (K_{BS}* W) \cdot \frac{\xi}{2 \pi \abs{\xi}^2 } \paren[\Big]{1 - 4 \pi G} \, d\xi. \end{multline*} The first term on the right integrates to $0$ (by equation~\eqref{eqnNonMagic}). Further, for any radial function $W$ (hence certainly for $W = W_s$) the second term vanishes. Consequently, \begin{multline*} \partial_\tau H = -\int_{\mathbb{R}^2} W \abs[\Big]{ \frac{ \nabla W}{W} - \frac{\nabla W_s}{W_s} }^2 \, d\xi \\ + \beta \int_{\mathbb{R}^2} (W - W_s) K_{BS}* (W-W_s) \cdot \frac{\xi}{2 \pi \abs{\xi}^2 } \paren[\Big]{1 - 4 \pi G} \, d\xi. \end{multline*} While the second term on the right should, in principle, be small (at least for small values of $\beta$ and when $W$ is close to $W_s$), we are (presently) unable to dominate it by the first term and show that $\partial_\tau H \leqslant 0$. Thus we do not know whether the steady state $W_s$ is stable under large perturbations. \subsection{Stability under small perturbations}\label{sxnRodrigues} We now turn to proving stability of $(\tilde \omega_\beta, \tilde d_\beta )$ as stated in theorem~\ref{thmBetaNonZero}.
\begin{proof}[Proof of theorem~\ref{thmBetaNonZero}] Using the $\xi$-$\tau$ coordinates, let $(W,D)$ be the solution to the system~\eqref{eqnW}--\eqref{eqnU}\xspace with initial data $W_0, D_0 \in L^2_w$. Define the perturbations $W_p$, $D_p$ and $U_p$ from the steady state by \begin{equation}\label{eqnWpDpDef} W_p \defeq W - \alpha W_s, \quad D_p \defeq D - \beta G, \quad\text{and}\quad U_p \defeq K_{BS}* W_p + \nabla^{-1} D_p. \end{equation} In this setting, theorem~\ref{thmBetaNonZero} will follow if we establish \begin{align} \label{eqnWpBound} \|W_p(\tau)\|_w &\leqslant C \paren[\big]{ \|W_p(\tau_0)\|_w e^{-\gamma\tau} + \norm{D_p(\tau_0)}_{L^1(1)}e^{-\tau/2} } \end{align} for some constant $C$, where $\tau_0 = \log(t_0)$. As before, the estimate for $D$ in theorem~\ref{thmBetaNonZero} is analogous to lemma~\ref{d_estimates}. To begin, we state one basic result without proof: a straightforward adaptation of the work in~\cite{bblRodrigues09}*{theorem~1} yields the following existence result. \begin{lemma}\label{lmaExistence} For every $\eps_0>0$, there exists $\delta_0>0$, depending only on $\eps_0$ and $\alpha$, such that if $W(0), D(0) \in L_w^2$ and \[ |\beta| + \|W_p(\tau_0)\|_w + \|D_p(\tau_0)\|_w \leqslant \delta_0, \] then there is a unique solution to the system~\eqref{eqnW}--\eqref{eqnU}\xspace such that, for all $\tau$, \begin{equation}\label{eqnWpDpSmall} \|D_p(\tau)\|_w + \|W_p(\tau)\|_w \leqslant \eps_0. \end{equation} \end{lemma} In order to show convergence to the steady state, we work with the equation for the perturbation, \begin{equation}\label{eqnPerturbative} \partial_\tau W_p + \nabla \cdot \paren[\big]{ U W_p + \alpha K_{BS}* W_p W_s + \alpha \nabla^{-1} D_p W_s} = \mathcal L W_p.
\end{equation} We multiply~\eqref{eqnPerturbative} by $G^{-1}W_p$ and integrate to obtain \begin{multline}\label{eqnWpInt} \frac{1}{2} \partial_\tau \norm{W_p}_w^2 + \int_{\mathbb{R}^2} G^{-1} W_p \nabla \cdot \paren[\big]{ U W_p + \alpha K_{BS}* W_p W_s + \alpha \nabla^{-1} D_p W_s} \\ = \int_{\mathbb{R}^2} G^{-1} W_p \mathcal{L} W_p. \end{multline} We estimate each term individually. First, for the right hand side, we use a coercivity estimate proven in~\cite{bblRodrigues09}. Namely, since $\int W_p \, d\xi = 0$, for any $\gamma \in (0, 1/2)$ and $\eps>0$ such that $\gamma + 1000\eps < 1/2$, we have \begin{equation}\label{eqnLSpectral} -\int G^{-1} W_p \mathcal{L} W_p \geqslant (\gamma+\eps) \|W_p\|_w^2 + \frac{1-2(\gamma+\eps)}{2}\left[\frac{1}{3}\|\nabla W_p\|_w^2 + \frac{1}{32}\|\xi W_p\|_w^2\right]. \end{equation} This is proved by first observing operator $L \defeq -G^{-1/2} \mathcal L G^{1/2}$ is a harmonic oscillator with spectrum $\set{ 0, 1/2, 1, 3/2, \dots }$ where $0$ is a simple eigenvalue. Combining this with a standard energy estimate shows~\eqref{eqnLSpectral}, and we refer the reader to~\cite{bblGallayWayne02}*{Appendix A} or~\cite{bblRodrigues09}*{\S3.1} for the details. We assume, without loss of generality, that $\gamma > 1/4$. For the first term in the integral on the left of~\eqref{eqnWpInt}, observe \begin{align} \MoveEqLeft \nonumber \int_{\mathbb{R}^2} G^{-1} W_p \nabla \cdot (U W_p) \, d\xi = \int_{\mathbb{R}^2} \paren[\big]{ G^{-1} W_p^2 D + \frac{1}{2} G^{-1} U \cdot \nabla \paren[\big]{W_p^2} } \, d\xi \\ \nonumber &= \frac{1}{2} \int_{\mathbb{R}^2} G^{-1} W_p^2 \paren[\big]{D - \frac{1}{2} \xi \cdot U } \, d\xi \\ \label{eqnWpInt1} &= \frac{1}{2} \int_{\mathbb{R}^2} G^{-1} W_p^2 \paren[\big]{D - \frac{1}{2} \xi \cdot \nabla^{-1} D } \, d\xi + \int_{\mathbb{R}^2} G^{-1} W_p (K_{BS}* W_p) \cdot \nabla W_p \, d\xi, \end{align} since $K_{BS}* W_s \cdot \xi = 0$. 
To estimate this we claim \begin{gather} \label{eqnDbound} \|D\|_w + \|D\|_{L^\infty} + \|\nabla^{-1} D\|_{L^\infty} \leqslant C \left[ \abs{\beta} + \|D_p(0)\|_w\right], \\ \label{eqnBSbound} \llap{\text{and}\qquad} \norm{K_{BS}* W_p}_{L^\infty} \leqslant C \paren[\big]{ \norm{W_p}_w + \norm{\nabla W_p}_w }, \end{gather} for some constant $C$ that is independent of $D_0, W_p$ and $\beta$. To avoid breaking continuity, we defer the proof of these estimates to section~\ref{sxnDivBounds} and continue with our proof of theorem~\ref{thmBetaNonZero} here. Let $\eps_0$ be a small constant to be determined later. Using lemma~\ref{lmaExistence}, choose $\delta_0$ to guarantee~\eqref{eqnWpDpSmall} holds. Then, returning to~\eqref{eqnWpInt1}, we see \begin{equation*} \abs[\Big]{\int_{\mathbb{R}^2} G^{-1} W_p \nabla \cdot (U W_p) \, d\xi} \leqslant C \paren[\big]{ \abs{\beta} + \norm{D_p(\tau_0)}_w + \eps_0 } \paren[\big]{ \norm{W_p}_w^2 + \norm{\nabla W_p}_w^2 }. \end{equation*} For the second term in the integral on the left of~\eqref{eqnWpInt} we obtain smallness by using the fact that this term vanishes when $W_s = G$. Indeed, \begin{equation*} \alpha \int_{\mathbb{R}^2} G^{-1} W_p \, K_{BS}* W_p \cdot \nabla W_s \, d\xi = -\alpha \int_{\mathbb{R}^2} G^{-1} W_s K_{BS}* W_p \cdot \paren[\big]{ \nabla W_p + \frac{\xi}{2} W_p } \, d\xi, \end{equation*} which vanishes when $W_s = G$ due to the identity~\eqref{eqnNonMagic}. Consequently, \begin{multline}\label{eqnWpInt2} \alpha \int_{\mathbb{R}^2} G^{-1} W_p \, K_{BS}* W_p \cdot \nabla W_s \, d\xi \\ = -\alpha \int_{\mathbb{R}^2} G^{-1} (W_s - G) K_{BS}* W_p \cdot \paren[\big]{ \nabla W_p + \frac{\xi}{2} W_p } \, d\xi. \end{multline} We claim that for all $\beta$ sufficiently small, \begin{equation}\label{eqnWsMinusG} \norm{W_s - G}_w \leqslant C \abs{\beta}, \end{equation} for some universal constant $C$.
Again, to avoid breaking continuity, we defer the proof of~\eqref{eqnWsMinusG} to section~\ref{sxnDivBounds}, and continue with the proof of theorem~\ref{thmBetaNonZero}. Equations~\eqref{eqnWpInt2} and~\eqref{eqnWsMinusG} immediately show \begin{align} \MoveEqLeft\nonumber \abs[\Big]{ \alpha \int_{\mathbb{R}^2} G^{-1} W_p \, K_{BS}* W_p \cdot \nabla W_s \, d\xi } \\ \nonumber &\leqslant C \abs{\alpha \beta} \norm{K_{BS}* W_p}_{L^\infty} \norm{W_p}_w \norm{\nabla W_s}_w \\ \label{eqnWpInt22} &\leqslant C \abs{\alpha \beta} \paren{ \norm{W_p}_w^2 + \norm{\nabla W_p}_w^2 }. \end{align} For the last inequality above we absorbed $\norm{\nabla W_s}_w$ into the constant $C$, and used~\eqref{eqnBSbound} and interpolation. For the last term in the integral on the left of~\eqref{eqnWpInt} observe \begin{align*} \abs[\Big]{\alpha \int_{\mathbb{R}^2} G^{-1} W_p \nabla \cdot ( \nabla^{-1} D_p W_s ) \, d\xi} &= \abs[\Big]{\alpha \int_{\mathbb{R}^2} G^{-1} W_p (D_p W_s + \nabla^{-1} D_p \cdot \nabla W_s) \, d\xi} \\ &\leqslant \abs{\alpha} \norm{W_p}_w \paren[\big]{\norm{D_p}_{L^\infty} + \norm{\nabla^{-1} D_p}_{L^\infty}} \norm{W_s}_w \\ & \leqslant C \abs{\alpha} \norm{W_p}_w \paren[\big]{ \norm{D_p}_{L^\infty} + \norm{D_p}_{L^1}^{1/2} \norm{D_p}_{L^\infty}^{1/2} }. \end{align*} The last estimate followed from the interpolation inequality \begin{equation}\label{eqnGradInvDBd} \norm{\nabla^{-1} D_p}_{L^\infty} \leqslant C \norm{D_p}_{L^1}^{1/2} \norm{D_p}_{L^\infty}^{1/2}, \end{equation} the proof of which can be found in~\cite{bblRodrigues09} or~\cite{bblGallayWayne02} (see also proposition~\ref{kernel_bounds} in section~\ref{sxnCompactness}, below). Since $D_p$ satisfies~\eqref{eqnD} with mean-zero initial data $D_p(\tau_0) \in L^1(1)$, it must satisfy the decay estimate~\eqref{eqnDfastDecay}.
Thus \begin{align*} \abs[\Big]{\alpha \int_{\mathbb{R}^2} G^{-1} W_p \nabla \cdot ( \nabla^{-1} D_p W_s ) \, d\xi} &\leqslant C \norm{D_p(\tau_0)}_{L^1(1)} e^{-\tau/2} \norm{W_p}_w \\ &\leqslant \frac{\eps}{8} \norm{W_p}_w^2 + C \norm{D_p(\tau_0)}_{L^1(1)}^2 e^{-\tau}. \end{align*} Making $(1 + \abs{\alpha})\abs{\beta}$, $\delta_0$ and $\eps_0$ small enough, our estimates so far give \begin{multline*} \frac{1}{2} \partial_\tau \|W_p\|_w^2 + (\gamma+\eps) \|W_p\|_w^2 + \frac{1 - 2(\gamma+\eps)}{2} \left[\frac{1}{3}\|\nabla W_p\|_w^2 + \frac{1}{32} \|\xi W_p \|_w^2\right]\\ \leqslant \eps\left[\|W_p\|_w^2 + \|\xi W_p\|_w^2 + \|\nabla W_p\|_w^2\right] + C e^{-\tau} \norm{D_p(\tau_0)}_{L^1(1)}^2. \end{multline*} Because we chose $\eps$ small enough, the first three terms on the right can be absorbed into the left-hand side. Consequently, \begin{equation*} \partial_\tau \|W_p(\tau)\|_w^2 + 2\gamma \|W_p\|_w^2 \leqslant C e^{-\tau} \norm{D_p(\tau_0)}_{L^1(1)}^2, \end{equation*} which immediately implies~\eqref{eqnWpBound}. \end{proof} \subsection{Proofs of estimates}\label{sxnDivBounds} In this section, we prove the bounds~\eqref{eqnDbound}, \eqref{eqnBSbound} and \eqref{eqnWsMinusG}, which were used in the proof of theorem~\ref{thmBetaNonZero}. We begin with the bounds on the divergence. \begin{lemma} Let $D$ satisfy~\eqref{eqnD} with initial data $D_0 \in L^2_w$, and let $\beta = \int D_0 d\xi$. Then if $D_p = D - \beta G$, there exists a uniform constant $C>0$ such that~\eqref{eqnDbound} holds. \end{lemma} \begin{proof} Multiplying~\eqref{eqnD} by $G^{-1}D$, integrating and using the coercivity estimate~\eqref{eqnLSpectral} gives \[ \frac{1}{2}\partial_\tau \|D\|_w^2 + \frac{1}{4}\left[ \|D\|_w^2 + \frac{1}{3} \|\nabla D\|_w^2 + \frac{1}{32} \|\xi D\|_w^2 \right] \leqslant 0. \] Integrating this inequality in $\tau$ gives us the desired inequality for $\|D\|_w$. Further, in the standard $x$-$t$ coordinates, $D$ solves the heat equation.
The classical estimates for solutions to the heat equation give us \[ \|D(\tau)\|_{L^\infty} + \|D(\tau)\|_{L^1} \leqslant C\|D(\tau_0)\|_{L^1} \leqslant C \|D(\tau_0)\|_w. \] Combined with the interpolation inequality~\eqref{eqnGradInvDBd} this yields the same bound for $\norm{\nabla^{-1} D}_{L^\infty}$, completing the proof. \end{proof} Now we turn to~\eqref{eqnBSbound}, which follows using the Sobolev embedding theorem and interpolation. \begin{proof}[Proof of inequality~\eqref{eqnBSbound}] We know that the Biot-Savart operator satisfies the interpolation inequality \begin{equation*} \norm{K_{BS}* W_p}_{L^\infty} \leqslant C \norm{W_p}_{L^{4/3}}^{1/2} \norm{W_p}_{L^4}^{1/2}. \end{equation*} The proof is the same as that of~\eqref{eqnGradInvDBd}, and can be found in~\cites{bblRodrigues09, bblGallayWayne02} (see also proposition~\ref{kernel_bounds} in section~\ref{sxnCompactness}, below). Combining this with the Sobolev inequality, we obtain \begin{multline*} \norm{K_{BS}* W_p}_{L^\infty} \leqslant C \norm{W_p}_{L^{4/3}}^{1/2} \norm{W_p}_{L^4}^{1/2} \leqslant C \norm{W_p}_{L^{4/3}}^{1/2} \norm{\nabla W_p}_{L^{4/3}}^{1/2} \\ \leqslant C \norm{W_p}_{L^2(w)}^{1/2} \norm{\nabla W_p}_{L^2(w)}^{1/2} \leqslant C \paren[\big]{\norm{W_p}_{L^2(w)} + \norm{\nabla W_p}_{L^2(w)} }, \end{multline*} as desired. \end{proof} Finally, we prove~\eqref{eqnWsMinusG}, showing that $W_s$ is close to $G$ when $\beta$ is small. \begin{lemma} Let $W_s\in L_w^2$ be a solution to equation~\eqref{eqnWs}. Then there is a universal constant $C > 0$ such that the inequality~\eqref{eqnWsMinusG} holds for all $\beta$ sufficiently small. \end{lemma} \begin{proof} Define $P_s = W_s - G$.
Notice that this solves \[ \mathcal{L} P_s = \beta G P_s + \beta\nabla^{-1} G \cdot \nabla P_s + \beta G^2 + \beta \nabla^{-1} G \cdot \nabla G.\] Multiplying this equation by $G^{-1} P_s$ and using~\eqref{eqnLSpectral}, with $\gamma = 1/4$, we obtain \[\begin{split} \frac{1}{4} \|P_s\|_w^2 + \frac{1}{4}\left[\frac{1}{3}\|\nabla P_s\|_w^2 + \frac{1}{32}\|\xi P_s\|_w^2\right] &\leqslant - \int G^{-1} P_s \mathcal{L} P_s\\ &= - \beta \int P_s^2 - \beta \int G^{-1} P_s \nabla^{-1} G\cdot \nabla P_s \\ & -\beta \int GP_s - \beta\int G^{-1} P_s \nabla^{-1} G \cdot \nabla G\\ &\leqslant (2|\beta|+\eps)\left[ \|P_s\|_w^2 + \|\nabla P_s\|_w^2\right] + |\beta|^2C_\eps .\end{split}\] Here $\eps < 1/20$ is a positive constant. Then, when $\beta$ is sufficiently small, we may absorb the terms on the last line into the left-hand side, giving~\eqref{eqnWsMinusG} as desired. \end{proof} \section{Bounds for the vorticity}\label{sxnInequalities} Bounds on the vorticity for the standard 2D incompressible Navier-Stokes equations are well known. In this section we prove the analogues of these bounds for the extended Navier-Stokes equations~\eqref{eqnExtendedNS1}. We begin with the vorticity decay in $L^p$. The strategy for this proof is not entirely different from the classical case; however, the appearance of a divergence term complicates matters and yields a slightly different final estimate. We will use this estimate in the proof of~\eqref{eqnWbound} and in our discussion of well-posedness in section~\ref{sxnWellPosed}. \begin{lemma}\label{e_vorticity_decay} Let $p \in [1,\infty]$, and suppose that $(\omega, d)$ solves the system~\eqref{eqnENSOmega}--\eqref{eqnENSu}\xspace with $\omega_0, d_0 \in L^1$.
Then there exists $C>0$, depending only on $p$, $\|\omega_0\|_{L^1}$, and $\|d_0\|_{L^1}$, such that \begin{equation}\label{vort_decay} \|\omega\|_{L^p} + t^{1/2}\|\nabla \omega\|_{L^p} \leqslant \frac{C}{t^{1-1/p}} \end{equation} and \begin{equation}\label{vort_grad_decay} \|\nabla\omega\|_{L^p} \leqslant \frac{C}{t^{3/2 - 1/p}}. \end{equation} \end{lemma} \begin{proof} We omit the proof of the bound on the gradient: following the work in~\cite{bblKato94}*{proposition~4.1}, the estimate relies only on~\eqref{vort_decay} and Duhamel's principle, so obtaining it is a straightforward adaptation. Now, we obtain the $L^p$ bound by proving bounds in $L^1$ and $L^\infty$ and interpolating. The $L^1$ bound follows by splitting $\omega_0$ into its positive and negative parts, using the maximum principle, and using that the mass is preserved. The classical technique for obtaining the $L^\infty$ bound has three steps: (i) get a bound on the $L^2$ norm in terms of the $L^1$ norm divided by $t^{1/2}$, (ii) show that this gives a bound on the $L^\infty$ norm in terms of the $L^2$ norm divided by $t^{1/2}$ for the adjoint problem, and (iii) apply these inequalities over $[0,t/2]$ and $[t/2,t]$ to finish. Since the work in (ii) is the same as the work in (i) and since (iii) is unchanged from the classical setting, we simply show the first step (i).
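The $L^2$ step rests on a Fourier-splitting bound of Nash type; optimizing the splitting radius yields the scale-invariant Nash inequality $\norm{\omega}_{L^2}^2 \leqslant C \norm{\nabla \omega}_{L^2} \norm{\omega}_{L^1}$ in two dimensions. As an informal illustration of this scale invariance (a sketch only; the radial profiles below are merely test cases, not part of the argument), one can evaluate the ratio of the two sides by quadrature:

```python
import numpy as np

def nash_ratio(f, fp, R=40.0, n=400000):
    # Midpoint rule for radial integrals \int g(r) 2*pi*r dr on [0, R].
    dr = R / n
    r = (np.arange(n) + 0.5) * dr
    w = 2 * np.pi * r * dr
    l1 = np.sum(np.abs(f(r)) * w)            # ||omega||_{L^1}
    l2sq = np.sum(f(r) ** 2 * w)             # ||omega||_{L^2}^2
    grad = np.sqrt(np.sum(fp(r) ** 2 * w))   # ||grad omega||_{L^2}
    return l2sq / (grad * l1)

# Gaussians at different scales: the ratio is scale invariant.
g = lambda lam: nash_ratio(lambda r: np.exp(-lam * r**2),
                           lambda r: -2 * lam * r * np.exp(-lam * r**2))
r1, r2 = g(0.5), g(2.0)                      # equal up to quadrature error
# A slower-decaying radial profile gives a smaller ratio.
r3 = nash_ratio(lambda r: (1 + r**2) ** -2,
                lambda r: -4 * r * (1 + r**2) ** -3)
```

For the Gaussian family the ratio is $1/(2\sqrt{\pi})$ independently of the scale, consistent with the inequality holding with a uniform constant.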
To this end, multiplying our equation by $\omega$ and integrating by parts gives us \[ \frac{d}{dt} \|\omega \|_{L^2}^2 \leqslant -2\| \nabla \omega\|_{L^2}^2 + 2\|d\|_{L^\infty} \|\omega\|_{L^2}^2 .\] Using the Fourier transform, we see that there is a constant $C>0$ such that for any $R$, \[\begin{split} \|\hat\omega\|_{L^2}^2 &\leqslant \int_{B_R^c} \frac{|\xi|^2}{R^2}|\hat\omega|^2d\xi + \int_{B_R} |\hat\omega|^2 d\xi\\ &\leqslant \frac{1}{R^2} \int |\xi|^2 |\hat\omega|^2 d\xi + \int_{B_R} \|\hat\omega\|_{L^\infty}^2 d\xi\\ &\leqslant \frac{1}{R^2} \|\nabla\omega\|_{L^2}^2 + C R^2 \|\omega\|_{L^1}^2. \end{split}\] Using $R^2 = \|\omega\|_{L^2}^2/(2C\|\omega_0\|_{L^1}^2)$ along with these inequalities and Plancherel's theorem yields \begin{equation}\label{e_diff_inequality} \frac{d}{dt} \|\omega\|_{L^2}^2 \leqslant \left[ \frac{C \|d_0\|_{L^1}}{t} - \frac{\|\omega\|_{L^2}^2}{2C \|\omega_0\|_{L^1}^2} \right] \|\omega\|_{L^2}^2 .\end{equation} Here we used the standard estimates for the heat equation (to bound $\|d\|_{L^\infty}$) and Young's inequality. Define $\phi(t) = t \|\omega\|_{L^2}^2$ to obtain \[ \phi'(t) \leqslant \frac{\phi}{t}\left[\|d_0\|_{L^1} - \frac{\phi}{2C \|\omega_0\|_{L^1}^2} + 1 \right]. \] This implies that $\phi \leqslant 2C \|\omega_0\|_{L^1}^2[ \|d_0\|_{L^1} + 1]$, which proves our claim. \end{proof} Now, we prove the pointwise, heat kernel type bound on the vorticity when $\beta = 0$ stated in lemma~\ref{lmaCompactness}. Here we use the enhanced decay of the heat equation for mean-zero initial data. The key point is that the $L^\infty$ norm of the divergence is integrable in time, so we may reproduce the classical arguments in this case. We follow the work of Carlen and Loss in~\cite{bblCarlenLoss95} in order to do this. \begin{proof}[Proof of~\eqref{eqnWbound}] Our first step is to obtain bounds for the equation \begin{equation}\label{kernel_linear_problem} \phi_t = \Delta \phi + \nabla\cdot(b \phi) + c \phi.
\end{equation} The bounds we obtain will depend only on certain norms of $b$ and $c$. To this end, fix $T>0$ and let $r(t)$ be a monotone increasing, smooth function defined on $[0,T]$ to be determined later. In addition, we may assume without loss of generality that $\phi$ is non-negative. Then we calculate \[\begin{split} r(t)^2 \|\phi\|_{L^r}^{r-1} \frac{d}{dt}\|\phi\|_{L^r} &= \dot{r} \int \phi^r \log\left( \frac{\phi^r(x)}{\|\phi\|_{L^r}^r}\right) dx + r(t)^2 \int \phi^{r-1}\phi_t dx\\ &= \dot{r} \int \phi^r \log\left( \frac{\phi^r(x)}{\|\phi\|_{L^r}^r}\right) dx\\ &\qquad + r(t)^2 \int \phi^{r-1}\left(\Delta \phi + \nabla\cdot(b\phi) + c\phi\right)dx\\ &= \dot{r} \int \phi^r \log\left( \frac{\phi^r(x)}{\|\phi\|_{L^r}^r}\right) dx - 4(r-1) \int \left| \nabla\left(\phi^{r/2}\right)\right|^2 dx\\ &\ \ \ \ + \int r(r-1) \phi^r \left(\nabla \cdot b\right) + r^2 \int c\phi^r dx. \end{split}\] The log-Sobolev inequality~\cite{bblCarlenLoss95}*{Equation (1.17)}, which the authors derive from the work in~\cite{bblGross75}, is \begin{equation}\label{lsi} \int |f|^2 \log\left(\frac{f^2}{\|f\|_{L^2}^2}\right) dx + (2 + \log(a))\int |f|^2 dx \leqslant \frac{a}{\pi}\int |\nabla f|^2 dx, \end{equation} for any $f \in H^1$ and $a\in (0,\infty)$. Applying this with $a = 4\pi (r-1)/\dot{r}$ gives us \[\begin{split} r(t)^2 \|\phi\|_{L^r}^{r-1} \frac{d}{dt}\|\phi\|_{L^r} \leqslant& - \dot{r} \left(2 + \log\left( \frac{4\pi(r-1)}{\dot{r}}\right)\right) \|\phi\|_{L^r}^r\\ & + \left(r(r-1) B(t) + r^2 C(t)\right)\|\phi\|_{L^r}^r ,\end{split}\] where $B(t) = \|\nabla\cdot b(t,\cdot)\|_{L^\infty}$ and $C(t) = \|c(t,\cdot)\|_{L^\infty}$. Now we set $G(t) = \log \|\phi\|_{L^r}$ and $s = 1/r$ to obtain \[ \frac{dG}{dt} \leqslant \dot{s}\left(2 + \log(4\pi s(1-s))\right) - \dot{s}\log\left(-\dot{s}\right) + (1-s)B(t) + C(t) .\] Letting $s(t)$ be the linear interpolation between $1$ and $0$ over $[0,T]$, we see that $\dot{s} = -T^{-1}$.
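As a quick numerical sanity check of the constants in~\eqref{lsi} (an informal sketch; the Gaussian family below is only a test case, not part of the argument), one can evaluate both sides on $f(x) = e^{-b\abs{x}^2}$ in $\mathbb{R}^2$ by radial quadrature; the choice $b = \pi/(2a)$ attains near-equality:

```python
import numpy as np

def lsi_sides(b, a, R=15.0, n=200000):
    # Midpoint rule for radial integrals \int g(r) 2*pi*r dr, with f = exp(-b r^2).
    dr = R / n
    r = (np.arange(n) + 0.5) * dr
    w = 2 * np.pi * r * dr
    f2 = np.exp(-2 * b * r**2)                   # f^2
    norm2 = np.sum(f2 * w)                       # ||f||_{L^2}^2
    # Use log f^2 = -2 b r^2 directly to avoid 0 * log(0) issues in the tail.
    entropy = np.sum(f2 * (-2 * b * r**2 - np.log(norm2)) * w)
    grad2 = np.sum((2 * b * r) ** 2 * f2 * w)    # ||grad f||_{L^2}^2
    lhs = entropy + (2 + np.log(a)) * norm2
    rhs = (a / np.pi) * grad2
    return lhs, rhs

lhs_eq, rhs_eq = lsi_sides(b=np.pi / 2, a=1.0)   # near-equality case
lhs_str, rhs_str = lsi_sides(b=3.0, a=1.0)       # strict inequality
```

This is consistent with the sharpness of~\eqref{lsi}, which is what makes the choice $a = 4\pi(r-1)/\dot{r}$ above exactly cancel the gradient term.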
Then we may integrate this to obtain \[ G(T) - G(0) \leqslant 4 - \log(4\pi) - \log(T) + \int_0^T \left[B(t) + C(t)\right] dt .\] Exponentiating gives us \begin{equation}\label{preliminary_heat_bound} \|\phi(T)\|_{L^\infty} \leqslant \frac{K}{T}\, \|\phi(0)\|_{L^1} \exp\left( \int_0^T \left[B(t) + C(t)\right] dt\right) .\end{equation} In order to get pointwise decay from \eqref{preliminary_heat_bound}, we look at the operator \[P^{(\alpha)}(T,x,y) := e^{-\alpha \cdot x} P(T,x,y) e^{\alpha\cdot y},\] where $P$ is the solution kernel for our linear problem \eqref{kernel_linear_problem} with $c\equiv 0$ and $\alpha(x,y)$ is a vector-valued function of $(x,y)$ to be chosen later. We assume that $b$ can be written as $b = b_1+b_2$ where $\nabla\cdot b_1 = 0$ and \begin{equation}\label{eqnB} \|b_1(t)\|_{L^\infty} \leqslant \frac{K_1}{\sqrt{t+1}}, \quad \|\nabla\cdot b_2\|_{L^\infty} \leqslant \frac{K_2}{(t+1)^{3/2}}, \quad\text{and}\quad \|b_2\|_{L^\infty} \leqslant \frac{K_2}{(t+1)}. \end{equation} In the application we have in mind, $b_1$ comes from the Biot-Savart kernel of the vorticity, while $b_2$ comes from $\nabla^{-1}$ of the divergence. We wish to obtain bounds for $P$ through our integral bounds on $P^{(\alpha)}$. To this end, we notice that $P^{(\alpha)}$ is the solution kernel for the problem \[ \phi_t = \Delta \phi + \nabla\cdot(( b + 2\alpha) \phi ) + (\alpha\cdot b + |\alpha|^2) \phi. \] Applying \eqref{preliminary_heat_bound}, and noticing that $\nabla\cdot(b + 2\alpha) = \nabla\cdot b$, we obtain, for any $\alpha$, \[\begin{split} P^{(\alpha)}&(T,x,y) \leqslant\\ &\frac{K}{T} \exp\left( 2 \int_0^T \left[ K_2(t+1)^{-3/2} + K_2^2(t+1)^{-2} + |\alpha| K_1 (t+1)^{-1/2} + |\alpha|^2\right] dt\right).
\end{split}\] Choosing \[ \alpha = - \frac{1}{4T}\frac{(x-y)}{|x-y|} \left[ |x-y| - 2 K_1\sqrt{T+1}\right]_+, \] using the definition of $P^{(\alpha)}$, and integrating in time, we obtain \[ P(T,x,y) \leqslant \frac{K}{T} \exp\left( (4K_2 + 2K_2^2) - \frac{1}{8T} \left[|x-y| - 2K_1\sqrt{T+1} \right]_+^2 \right). \] By possibly changing the constants, we may obtain \[ P(T,x,y) \leqslant \frac{C}{T} \exp\left( - \frac{|x-y|^2}{CT}\right). \] To conclude, we apply the above to equation~\eqref{eqnENSOmega} by choosing $b_1 = K_{BS}* \omega$ and $b_2 = \nabla^{-1} d$. Lemmas~\ref{d_estimates} and~\ref{e_vorticity_decay} and interpolation inequalities of the form~\eqref{eqnGradInvDBd} show that~\eqref{eqnB} is satisfied, concluding the proof. \end{proof} \section{Relative compactness of complete trajectories}\label{sxnCompactness} In this section we prove lemmas~\ref{lmaCompactness} and~\ref{lmaWCompact}, showing that complete trajectories in $L^1$ are relatively compact. The development is similar to \cite{bblGallayWayne05}; the main difference here is the additional divergence term, which requires us to alter many of the proofs. We first work up towards proving lemma~\ref{lmaWCompact}, and then use this to prove lemma~\ref{lmaCompactness}. \subsection{The semi-group of \texorpdfstring{$\mathcal L$}{L} and a priori bounds.} In order to obtain the desired compactness results, we will need estimates on various quantities. We state these estimates here, but omit the proofs and provide references. Let $S(\tau) \defeq \exp\paren{\tau \mathcal{L}}$ be the semigroup generated by the operator $\mathcal{L}$. First we recall some estimates on the operator $S(\tau)$. In order to state these, we define the function \[a(\tau) \defeq 1 - e^{-\tau}.\] This function arises naturally from the change of variables. We recall a lemma on the operator $S$ from \cite{bblGallayWayne02}.
\begin{lemma}\cite{bblGallayWayne02}*{Appendix~A}\label{SemiGroupBds} \begin{enumerate} \item For $m>1$, $S(\tau)$ is a bounded operator on $L^2(m)$. In addition, $\nabla S(\tau)$ is bounded away from $\tau = 0$. More precisely, there is a universal constant $C$ such that \[ \|S(\tau)\|_{L^2(m)\to L^2(m)} \leqslant C, \ \ \|\nabla S(\tau)\|_{L^2(m)\to L^2(m)} \leqslant \frac{C}{\sqrt{a(\tau)}}.\] \item Let $L_0^2(m)$ be the space of $L^2(m)$ functions with integral zero. For $\mu \in (0,1/2]$ and $m > 1 + 2\mu$ and $\tau > 0$, there is a universal constant $C$ such that \[\|S(\tau)\|_{L^2_0(m)\to L^2_0(m)} \leqslant C e^{-\mu \tau}, \ \ \|\nabla S(\tau)\|_{L^2_0(m)\to L^2_0(m)} \leqslant C\frac{e^{-\mu \tau}}{\sqrt{a(\tau)}}.\] \item For $1 \leqslant q \leqslant p \leqslant \infty$, $T>0$, $m \in [0,\infty)$ and $\alpha \in \mathbb{N}^2$, there is a constant $C_T$, depending on $T$, such that \[\|\partial^\alpha S(\tau) f\|_{L^p(m)} \leqslant \frac{C_T}{a(\tau)^{(q^{-1} - p^{-1})+ |\alpha|/2}} \|f\|_{L^q(m)},\] for any $f \in L^q(m)$ and any $0 < \tau \leqslant T$. \end{enumerate} \end{lemma} \noindent We note that the commutator of $\nabla$ and $S(\tau)$ is computed as \[\partial_i S(\tau) = e^{\tau/2} S(\tau) \partial_i.\] In addition, we need the well-known bounds on Biot-Savart kernel and $\nabla^{-1}$. The proof of this proposition may be found in~\cite{bblRodrigues09}*{proposition 1} and~\cite{bblGallayWayne02}*{Appendix B}. \begin{proposition}\label{kernel_bounds} Denote by $K$ either the operator $K_{BS}*$ or the operator $\nabla^{-1}$. Then the following inequalities hold for any $f$ such that the right hand side of each inequality is finite. 
\begin{enumerate} \item If $1 < p < 2 < q$ and $1 + q^{-1} - p^{-1} = 1/2$ then there is a constant $C$ such that \[ \|Kf\|_{L^q} \leqslant C \|f\|_{L^p}.\] \item If $1 \leqslant p < 2 < q \leqslant \infty$ and $0 < \theta < 1$ satisfy \[\frac{\theta}{p} + \frac{1-\theta}{q} = \frac{1}{2},\] then there is a constant $C$ such that \[\|Kf\|_{L^\infty} \leqslant C \|f\|_{L^p}^\theta \|f\|_{L^q}^{1-\theta}.\] \item There exists a constant $C_p>0$ depending only on $p$ such that if $p>1$ then \[\|\nabla Kf\|_{L^p} \leqslant C_p \|f\|_{L^p}.\] \item If $0 < m < 1$ and $q > 2$ then there is a constant $C_q$, depending only on $q$, such that \[ \|Kf\|_{L^q(m-2/q)} \leqslant C_q \|f\|_{L^2(m)}.\] \end{enumerate} \end{proposition} Finally, we state an \textit{a priori} bound on solutions to the system~\eqref{eqnW}--\eqref{eqnU}\xspace. The proof of this lemma is a straightforward adaptation of~\cite{bblGallayWayne05}*{lemma~2.1}. \begin{lemma}\label{WeightedBound} Suppose that $(W,D)$ solves the system~\eqref{eqnW}--\eqref{eqnU}\xspace in the space \begin{equation*} C^0([0,T],L^2(m))\cap C^0((0,T],H^1(m)) \end{equation*} with $W_0\in L^2(m)$ and $D_0\in L^2(m)$ as the initial conditions for $W$ and $D$ respectively. Then there is a constant $C$ such that \[ \norm{W(\tau)}_{L^2(m)} + a(\tau)^{1/2}\|\nabla W(\tau)\|_{L^2(m)} \leqslant C.\] \end{lemma} \subsection{Compactness in \texorpdfstring{$L^2(m)$}{L2(m)}.} First we show relative compactness of complete trajectories on $\mathbb{R}_+$ in $L^2(m)$. This is accomplished by decomposing the remainder term into convenient functions, two of which decay to zero and one whose trajectory is relatively compact. \begin{lemma}\label{WCompactForward} Assume that $m>3$ and that $(W,D)\in C^0([0,\infty), L^2(m)^2)$ is a solution to the system~\eqref{eqnW}--\eqref{eqnU}\xspace, and is bounded in $L^2(m)$. Then the trajectory $\{(W,D)\}_{\tau\in \mathbb{R}_{\geqslant 0}}$ is relatively compact in $L^2(m)$. \end{lemma} \begin{proof} We work here with $W$ only, but the proof for $D$ is similar and simpler.
We define the remainder, $R$, to be such that $W = \alpha G + R$. One can check that \begin{equation*} \partial_\tau R = \mathcal{L} R - \alpha \Lambda R - N(R) - \nabla\cdot(W \nabla^{-1} D), \end{equation*} where \begin{equation*} \alpha \Lambda R \defeq \left(\alpha K_{BS}* G \cdot \nabla R + \alpha K_{BS}* R \cdot \nabla G\right) \quad\text{and}\quad N(R) \defeq K_{BS}* R\cdot \nabla R. \end{equation*} Hence we may write \begin{equation}\label{e_duhamel} R(\tau, \xi) = S(\tau)R_0 - R_1 - R_2 \end{equation} where \begin{align*} R_1 &\defeq \int_0^\tau S(\tau-s) (\alpha \Lambda R(s) + N(R)(s) ) \, ds\\ \text{and}\quad R_2 &\defeq \int_0^\tau S(\tau-s) \nabla\cdot(W(s) \nabla^{-1} D(s)) \, ds. \end{align*} The first term tends to zero by part two of lemma \ref{SemiGroupBds} and the fact that $\int R_0 d\xi = 0$. It follows from the work in lemma~2.2 in \cite{bblGallayWayne05} that $R_1$ is bounded in $L^2(m+1)$, and hence is a relatively compact trajectory. Thus, we need only show that $R_2$ tends to zero. To this end, we use the first inequality in lemma~\ref{SemiGroupBds} to obtain \[\begin{split} \|R_2\|_{L^2(m)} &\leqslant \int_0^\tau e^{-\frac{1}{2}(\tau-s)} \|\nabla S(\tau-s) (W\nabla^{-1} D)(s)\|_{L^2(m)} ds\\ &\leqslant C\int_0^\tau \frac{e^{-\frac{1}{2}(\tau-s)}}{\sqrt{a(\tau-s)}} \|(W\nabla^{-1} D)(s)\|_{L^2(m)} ds\\ &\leqslant C\int_0^\tau \frac{e^{-\frac{1}{2}(\tau-s)}}{\sqrt{a(\tau-s)}} \|\nabla^{-1} D(s)\|_{L^\infty} \|W(s)\|_{L^2(m)} ds .\end{split}\] The results of lemma~\ref{d_estimates} and proposition~\ref{kernel_bounds} imply that $\|\nabla^{-1} D(s)\|_{L^\infty}$ tends to zero as $s$ tends to infinity. Hence, we see that $\|R_2\|_{L^2(m)}$ tends to zero as $\tau$ tends to infinity. \end{proof} Now we will show relative compactness of complete trajectories in $L^2(m)$, i.e. we will prove lemma~\ref{lmaWCompact}. Our method of proof will be similar to above.
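Both in the previous lemma and in the proof below, the singular factor $e^{-(\tau-s)/2}/\sqrt{a(\tau-s)}$ is harmless because it is integrable; for the reader's convenience we record the elementary bound (a sketch of a routine computation).

```latex
% Since a(s) = 1 - e^{-s} is concave with a(0) = 0 and a(1) = 1 - e^{-1},
% we have a(s) >= (1 - e^{-1}) min(s, 1) for all s >= 0, and hence
\[
\int_0^\infty \frac{e^{-s/2}}{\sqrt{a(s)}}\,ds
  \leqslant \frac{1}{\sqrt{1-e^{-1}}}
    \paren[\Big]{\int_0^1 \frac{ds}{\sqrt{s}} + \int_1^\infty e^{-s/2}\,ds}
  = \frac{2 + 2e^{-1/2}}{\sqrt{1-e^{-1}}} < \infty.
\]
```

The same argument applies with the exponent $1/q$ in place of $1/2$ on $a(\tau - s)$, since $s^{-1/q}$ remains integrable near zero whenever $q < 2$.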
\begin{proof}[Proof of lemma~\ref{lmaWCompact}] Again we will look at $R$ as above and only work with $W$. This time we will decompose $R$ as \[\begin{split} R(\tau) &= S(\tau - \tau_0) R(\tau_0) - \int_{\tau_0}^\tau S(\tau - s)\left( \alpha \Lambda R(s) + N(R)(s)\right) ds\\ &~~~ - \int_{\tau_0}^\tau S(\tau - s) \nabla\cdot(W(s) \nabla^{-1} D(s)) ds ,\end{split}\] where $\tau_0<\tau$. Since $R \in L^2_0(m)$, by construction, it follows from lemma~\ref{SemiGroupBds} that $S(\tau - \tau_0)R(\tau_0)$ tends to zero as $\tau_0$ tends to negative infinity. Hence we may write \begin{equation*} R(\tau) = -R_1 - R_2, \end{equation*} where \begin{align*} R_1 \defeq \int_{-\infty}^\tau S(\tau - s)\left( \alpha \Lambda R(s) + N(R)(s)\right) \, ds \\ \text{and}\quad R_2 \defeq \int_{-\infty}^\tau S(\tau - s) \nabla\cdot(W(s) \nabla^{-1} D(s)) \, ds. \end{align*} As before, showing that $R_1$ is relatively compact is exactly as in \cite{bblGallayWayne05}. Thus, we need only investigate $R_2$, which we handle similarly to the previous lemma. We will show that $R_2$ is bounded in $L^2(m+r)$ for some $r>0$. For any $q \in (1,2)$, lemma \ref{SemiGroupBds} gives us \[ \|R_2\|_{L^2(m+r)} \leqslant C \int_{-\infty}^\tau \frac{e^{-\frac{1}{2}(\tau-s)}}{a(\tau-s)^{1/q}} \|W \nabla^{-1} D\|_{L^q(m+r)} ds. \] H\"older's inequality implies that \[ \|W \nabla^{-1} D\|_{L^q(m+r)} \leqslant \|W\|_{L^2(m)} \|\nabla^{-1} D\|_{L^{2q/(2-q)}(r)}. \] The first term is bounded due to the assumptions in the statement of the current lemma. For the remaining term concerning the divergence $D$, we apply proposition~\ref{kernel_bounds} to see that, letting $\tilde m = r + (2-q)/q$, and choosing $r$ and $q$ such that $\tilde m \leqslant m$, \[ \|\nabla^{-1} D\|_{L^{2q/(2-q)}(r)} \leqslant C \|D\|_{L^2(\tilde m)} \leqslant C \|D\|_{L^2(m)}. \] Hence $R_2$ is bounded in $L^2(m+r)$.
Lemma~\ref{SemiGroupBds} and lemma~\ref{WeightedBound} imply that $R_2$ is also bounded in $H^1(m)$, so that Rellich's theorem (see e.g.~\cite{bblReedSimon78}*{theorem~XIII.65}) implies that $R_2$ is relatively compact in $L^2(m)$, finishing the proof. \end{proof} To conclude, we need that bounded trajectories in $L^1$ are relatively compact. To show this, one may reproduce the proof of~\cite{bblGallayWayne05}*{lemma~2.5} as it relies only on a pointwise estimate on $W$, which we recreate in~\eqref{eqnWbound}. This yields the final lemma we need to prove the necessary compactness. \subsection{Convergence in \texorpdfstring{$L^\infty$}{L-infty}} In this section we prove convergence of $W$ to $\alpha G$ in $L^\infty$, as stated in theorem~\ref{thmBetaZero}. \begin{lemma}\label{lmaLinf} Let $(W, D)$ solve the system~\eqref{eqnW}--\eqref{eqnU}\xspace with initial data $(W_0, D_0)$ such that $W_0, D_0 \in L^1(1)$. If $\alpha = \int W_0 \, d\xi$ and $\beta = \int D_0 \, d\xi = 0$, then \begin{equation*} \lim_{\tau\to\infty} \left\| W - \alpha G\right\|_{L^\infty} = 0. \end{equation*} \end{lemma} \begin{proof} Recall that we have shown that $W$ converges to $\alpha G$ in $L^p$ for all $p\in[1,\infty)$. As in~\eqref{e_duhamel}, letting $R = W-\alpha G$, we may write an integral equation for $R$ using the semigroup $S$. We will use this to show that $\|R\|_{L^\infty}$ tends to zero. As above, $R$ satisfies \[\begin{split} R(\tau) &= S(1)R(\tau-1) - \int_{\tau-1}^\tau S(\tau-s)\left(\alpha \Lambda R(s) + N(R)(s)\right)ds\\ & - \int_{\tau-1}^\tau S(\tau-s) \nabla\cdot( W(s)\nabla^{-1}D(s))ds. \end{split}\] First, we use the third conclusion of lemma~\ref{SemiGroupBds} with $p = \infty$, $q =1$, $\alpha = 0$ and $m = 0$ on the first term. Hence, we have that \[ \|S(1)R(\tau-1)\|_{L^\infty} \leqslant C \|R(\tau-1)\|_{L^1}. \] Since $\|R(\tau-1)\|_{L^1}$ tends to zero, so does $\|S(1)R(\tau-1)\|_{L^\infty}$.
We may use this same strategy to deal with the rest of the terms. First we look at \[ \Lambda R = (K_{BS}* G) \cdot R + (K_{BS}* R)\cdot \nabla G =\nabla \cdot( (K_{BS}* G) R + (K_{BS}* R) G). \] Then lemma~\ref{SemiGroupBds} implies that \[\begin{split} \norm[\Big]{\int_{\tau-1}^\tau S(\tau-s) \Lambda R(s) ds}_{L^\infty} &\leqslant \int_{\tau-1}^\tau (\|(K_{BS}* G) R\|_{L^1} + \|(K_{BS}* R) G\|_{L^1}) ds\\ &\leqslant C \int_{\tau-1}^\tau ( \|R\|_{L^1} + \|K_{BS}* R\|_{L^\infty})ds. \end{split}\] Since $R$ tends to zero in $L^p$ for all $p$, proposition~\ref{kernel_bounds} implies that $K_{BS}* R$ tends to zero in $L^\infty$. Next, we deal with the term involving $N(R)$. Notice that $N(R) = \nabla\cdot((K_{BS}* R) R)$. Hence, as above, we obtain \[\begin{split} \norm[\Big]{\int_{\tau-1}^\tau S(\tau-s) N(R)(s) ds}_{L^\infty} &\leqslant \int_{\tau-1}^\tau \|(K_{BS}* R) R\|_{L^1} ds\\ &\leqslant C \int_{\tau-1}^\tau \|K_{BS}* R\|_{L^\infty} \|R\|_{L^1} ds. \end{split}\] Hence, this term tends to zero as well. Finally, for the last term, we obtain \[\begin{split} \norm[\Big]{\int_{\tau-1}^\tau S(\tau-s) \nabla\cdot(W\nabla^{-1}D)(s) ds}_{L^\infty} &\leqslant \int_{\tau-1}^\tau \|W \nabla^{-1}D\|_{L^1} ds\\ &\leqslant C \int_{\tau-1}^\tau \|\nabla^{-1}D\|_{L^\infty} \|W\|_{L^1} ds. \end{split}\] Using lemma~\ref{d_estimates} and proposition~\ref{kernel_bounds}, we see that $\|\nabla^{-1}D\|_{L^\infty}$ tends to zero. This finishes the proof that $\|R\|_{L^\infty}$ tends to zero. \end{proof} \section{Brief Remarks on Well-posedness}\label{sxnWellPosed} The well-posedness of the system~\eqref{eqnENSOmega}--\eqref{eqnENSu}\xspace in classical or Lebesgue spaces is very similar to the development in~\cites{bblBen-Artzi94, bblBrezis94, bblKato94}. For the weighted spaces, one may look to the strategies of~\cites{bblGallayWayne02, bblRodrigues09}. Since the adaptations required in our setting are minimal, we only briefly comment on the manner of proof.
First, we discuss the primary a priori estimates in each of these spaces. Then, we discuss the iterative scheme used to prove local existence. \subsection*{A Priori Estimates} The main a priori estimates in $L^p$ and in $L^2_w$ follow as in the proofs of lemma~\ref{e_vorticity_decay} and section~\ref{sxnNonMeanZero}, respectively. The a priori estimate in $L^2(m)^2$ is a slight modification of the argument of \cite{bblGallayWayne02}. To this end, multiply~\eqref{eqnW} by $|\xi|^{2m}W$ to obtain \[\begin{split} \frac{1}{2}\frac{d}{d\tau} &\int |\xi|^{2m}W^2 d\xi + \int |\xi|^{2m} W \nabla\cdot(UW)d\xi\\ & = \int |\xi|^{2m}\left\{W\Delta W + \frac{W}{2}(\xi\cdot\nabla)W + W^2\right\}d\xi. \end{split}\] Integrating by parts, we see that these terms can be rewritten as \[\begin{split} &\int |\xi|^{2m} W(\Delta W)d\xi = -\int |\xi|^{2m}|\nabla W|^2 d\xi + 2m(m-1)\int |\xi|^{2m-2}W^2 d\xi ,\\ &\int |\xi|^{2m}\frac{W}{2}(\xi\cdot\nabla )W d\xi = -\frac{m+1}{2}\int |\xi|^{2m} W^2d\xi,\\ &\int |\xi|^{2m} W\nabla \cdot(UW)d\xi = \frac{1}{2}\int |\xi|^{2m}DW^2 d\xi + \frac{1}{2}\int |\xi|^{2m}\nabla\cdot (UW^2)d\xi\\ &\quad\quad= \frac{1}{2}\int |\xi|^{2m}DW^2d\xi - m\int |\xi|^{2m-2}(\xi\cdot U)W^2d\xi. \end{split}\] By noting that for any $\eps>0$ there is a $C_\eps>0$ so that $|\xi|^{2m-2}\leqslant \eps|\xi|^{2m} + C_\eps$, we see that \[\begin{split} \frac{1}{2}\frac{d}{d\tau}\int |\xi|^{2m} W^2d\xi &+ \int |\xi|^{2m} |\nabla W|^2 d\xi + \frac{m-1 - 4\eps}{2} \int |\xi|^{2m} W^2 d\xi\\ &\leqslant C_\eps \int W^2 d\xi + C_\eps \|U\|_\infty^{2m} \int W^2 d\xi + \frac{\|D\|_\infty}{2} \int |\xi|^{2m} W^2 d\xi .\end{split}\] We know that $\|D\|_{L^\infty}$ decays to zero, and there is sufficient control over $\|W\|_{L^2}$ and $\|U\|_{L^\infty}$ by lemma~\ref{e_vorticity_decay} and proposition~\ref{kernel_bounds}. Hence choosing $\eps>0$ sufficiently small and integrating the above inequality yields the a priori estimate required in $L^2(m)^2$.
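For completeness, the elementary weight inequality used above can be justified by a standard splitting (a sketch; we assume $m \geqslant 1$ as in the setting above, and the constant $C_\eps$ is not optimal): setting $R_\eps = \eps^{-1/2}$, \[ |\xi|^{2m-2} = |\xi|^{-2}\,|\xi|^{2m} \leqslant \eps\, |\xi|^{2m} \quad\text{for } |\xi| \geqslant R_\eps, \qquad\qquad |\xi|^{2m-2} \leqslant R_\eps^{2m-2} =: C_\eps \quad\text{for } |\xi| < R_\eps, \] and combining the two regions gives $|\xi|^{2m-2}\leqslant \eps|\xi|^{2m} + C_\eps$ for all $\xi$.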
These a priori estimates are summarized in the following proposition. \begin{proposition} Fix $(W_0, D_0)\in X$ where $X$ is either $L^2(m)^2$, with $m>1$ and $\int D_0 d\xi = 0$, or $(L^2_w)^2$. Then there exists a unique solution to the system~\eqref{eqnW}--\eqref{eqnU}\xspace which satisfies \[ \|W(\tau)\|_{X} \leqslant C .\] Here $C$ is a constant depending only on the initial data, which tends to zero as $\|W_0\|_{X}$ tends to zero. \end{proposition} \subsection*{An Iterative Scheme} To prove existence and uniqueness of classical solutions with initial data in $L^1$ we follow~\cite{bblBen-Artzi94}. For existence, we begin with smooth initial data, and use an iterative argument to obtain the existence of solutions which are bounded in $L^p$ for every $p$. The key point here is that we iterate only in the vorticity, leaving the divergence fixed, since the divergence solves a heat equation covered by the classical theory. We define $\omega_0=0$ and then let $\omega_k$ be the solution to the linear system \[\begin{split} \partial_t \omega_k + \nabla\cdot(u_{k-1} \omega_k) &= \Delta \omega_k\\ u_k &= \nabla^{-1} d + K_{BS}* \omega_k. \end{split}\] Bounds similar to those of lemma~\ref{e_vorticity_decay} can be obtained for this system, establishing the existence of a solution. Uniqueness follows by directly estimating the difference of two solutions. Afterwards, a continuity argument is used to extend this to any initial data in $L^1$. In general, this argument differs from that in~\cite{bblBen-Artzi94} only in the appearance of an extra term involving $d$ in several of the estimates. However, this extra term behaves much better than the non-linear term, as the classical theory on the heat equation for $d$ yields appropriate bounds on the divergence in any of the required spaces. In particular, this gives us the following result which we state without proof. \begin{proposition} Suppose that $\omega_0$ and $d_0$ are elements of $L^1(\mathbb{R}^2)$.
Then there exist $\omega, d \in C(\mathbb{R}_+, L^1) \cap C(\mathbb{R}_+, W^{1,1}\cap W^{1,\infty})$, where $\mathbb{R}_+ := (0,\infty)$, which are the unique solutions to the system~\eqref{eqnENSOmega}--\eqref{eqnENSu}\xspace. \end{proposition} \end{document}
\begin{definition}[Definition:Line at Infinity] Let $\LL$ be a straight line embedded in a Cartesian plane $\CC$ given in homogeneous Cartesian coordinates by the equation: :$l X + m Y + n Z = 0$ Let $l = m = 0$. Then from Intersection of Straight Line in Homogeneous Cartesian Coordinates with Axes, $\LL$ intersects both the $x$-axis and the $y$-axis at the point at infinity. Such a straight line cannot exist on $\CC$, so such an $\LL$ is known as the '''line at infinity'''. \end{definition}
Free body diagram¶ Marcos Duarte Figures from De motu animalium by Giovanni Alfonso Borelli (1608-1679), the father of biomechanics, depicting a static analysis of forces acting on the human body. In the mechanical modelling of an inanimate or living system, composed of one or more bodies (bodies as units that are mechanically isolated according to the question one is trying to answer), it is convenient to isolate each body (be they originally interconnected or not) and identify each force and moment of force (torque) that acts on this body in order to apply the laws of mechanics. The free body diagram (FBD) of a mechanical system or model is the representation in a diagram of all forces and moments of force acting on each body, isolated from the rest of the system. The term free means that each body, which may have been part of a connected system, is represented as isolated (free), and any existing contact force is represented in the diagram as forces (action and reaction) acting on the formerly connected bodies. Then, the laws of mechanics are applied to each body, and the unknown movement, force or moment of force can be found if the system of equations is determined (the number of unknown variables cannot be greater than the number of equations for each body). How exactly a FBD is drawn for a mechanical model of something depends on what one is trying to find. For example, the air resistance might be neglected or not when modelling the movement of an object, and the number of parts into which the system is divided depends on what one needs to know about the model. The use of FBD is very common in biomechanics; a typical application is to determine the forces and torques on the ankle, knee, and hip joints of the lower limb (foot, leg, and thigh) during locomotion, and the FBD can be applied to any problem where the laws of mechanics are needed.
Equilibrium conditions¶ In a static situation, the following equilibrium conditions, derived from Newton-Euler's equations for linear and rotational motions, must be satisfied for each body under analysis using FBD: $$ \begin{array}{l l} \sum \mathbf{F} = 0 \\ \\ \sum \mathbf{M} = 0 \end{array} $$ That is, the vectorial sum of all forces and moments of force acting on the body must be zero, since the body is neither translating nor rotating. And this must hold in each direction if we use the Cartesian coordinate system, because the movements along the orthogonal directions are independent: $$ \sum \mathbf{F} = 0 \;\;\;\implies\;\;\; \begin{array}{l l} \sum F_x = 0 \\ \sum F_y = 0 \\ \sum F_z = 0 \end{array} $$$$ \sum \mathbf{M} = 0 \;\;\implies\;\;\; \begin{array}{l l} \sum M_x = 0 \\ \sum M_y = 0 \\ \sum M_z = 0 \end{array} $$ Although many forces and moments of force can act on a rigid body, their effects in terms of motion, translation and rotation of the body, are the same as those of a single force acting on the body center of gravity and a couple of antiparallel forces (or simply, a force couple) that generates a moment of force. This moment of force has the particularity that it is not about a fixed axis of rotation, and because of that it is also called free moment or pure moment. The next figure illustrates this principle: Figure. Any system of forces applied to a rigid body can be reduced to a resultant force applied on the center of mass of the body plus a force couple. Based on the concept of force couple, we can make a distinction between torque and moment of force: torque is the moment of force caused by a force couple. We can also refer to torque as the moment of a couple. This distinction is more common in Engineering Mechanics. The equilibrium conditions and the fact that a system of multiple forces can be reduced to a simpler system will be useful to guide the drawing of a FBD.
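The reduction illustrated in the figure can be verified numerically. In the sketch below, the forces, their points of application, and the center of mass are all made-up values (they are not taken from any example in this text); the check is that the moments about an arbitrary point O computed from the original force system equal those of the reduced system (resultant at the center of mass plus a couple):

```python
import numpy as np

# Hypothetical planar force system: forces (N) and points of application (m).
forces = np.array([[10.0, 0.0, 0.0], [0.0, -5.0, 0.0], [-4.0, 8.0, 0.0]])
points = np.array([[0.0, 1.0, 0.0], [2.0, 0.0, 0.0], [1.0, 1.0, 0.0]])
cm = np.array([1.0, 0.5, 0.0])  # assumed center of mass

# Resultant force: plain vector sum of all forces.
F_res = forces.sum(axis=0)

# Free moment (couple): sum of moments of each force about the center of mass.
M_couple = np.cross(points - cm, forces).sum(axis=0)

# Equivalence check: moments about any other point O agree between the
# original system and the reduced system (resultant at cm + couple).
O = np.array([-3.0, 2.0, 0.0])
M_O_original = np.cross(points - O, forces).sum(axis=0)
M_O_reduced = np.cross(cm - O, F_res) + M_couple
print(np.allclose(M_O_original, M_O_reduced))  # True
```

The equality holds for any choice of O, which is exactly why the reduced system is mechanically equivalent to the original one.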
Other important principles for a mechanical analysis are the principle of moments and the principle of transmissibility: Varignon's Theorem (Principle of moments)¶ The moment of a force about a point is equal to the sum of moments of the components of the force about the same point. Note that the components of the force don't need to be orthogonal. Principle of transmissibility¶ For rigid bodies with no deformation, an external force can be applied at any point on its line of action without changing the resultant effect of the force. Example (From Meriam 1997). For the figure below, calculate the magnitude of the moment about the base point O of the 600-N force in five different ways. Figure. Can you calculate the torque of the force above in five different ways?
import numpy as np
M = np.cross([2, 4, 0], [600*np.cos(40*np.pi/180), -600*np.sin(40*np.pi/180), 0])
print("The magnitude of the moment of force is: %d Nm" %np.around(np.linalg.norm(M)))
The magnitude of the moment of force is: 2610 Nm
Let's start using the FBD to solve simple problems in mechanics. Example 1: Ball resting on the ground¶ Our interest is to draw the FBD for the ball: Figure. Free-body diagram of a ball resting on the ground. W is the weight force due to the gravitational forces which act on all particles of the ball, but the effects of the weight force on each particle of the ball are the same if we draw only one force (the ball weight) acting on the center of gravity of the ball. N is the normal force; it is perpendicular to the surface of contact and prevents the ball from penetrating the surface. The vector representing a force must be drawn with its origin at the point of application of the force. However, for the sake of clarity one might draw the contact forces outside the body with the tip of the vector at the contact region (note that in the figure above, we had to draw small vectors to avoid the superposition of them).
It is not necessary to draw the body in the FBD; drawing just the vectors representing the forces would be enough. The ball also generates forces on the ground (the ball attracts the Earth with the same force but opposite direction and pushes the ground downwards), but since the question was to draw the FBD for the ball only, we don't need to care about that. When drawing a FBD, it's not necessary to portray the body with details; one can draw a simple line to represent the body or nothing at all, only caring to draw the forces and moments of force in the right places. From the FBD, one can now derive the equilibrium equations: In the vertical direction (there's nothing in the horizontal direction): $$ \mathbf{N}+\mathbf{W}=0 \;\;\; \implies \;\;\; \mathbf{N}=-\mathbf{W} $$ Where $\mathbf{W}=m\mathbf{g}\;\;(g=-10 m/s^2)$. This very simple case already illustrates a common problem when drawing a FBD and deriving the equations: Is W negative or positive? Is $\mathbf{W}$ representing a vector quantity whose direction we don't explicitly express, or will we express its direction with a sign? Will we substitute the symbol $\mathbf{W}$ by $m\mathbf{g}$ or $-m\mathbf{g}$? And in any of these cases, is $g$ equal to $10$ or $-10 m/s^2$? This problem is known as double negative and it happens when we negate twice something we wanted to express as negative, for example, $\mathbf{W}=-m\mathbf{g}$, where $g=-10 m/s^2$. Be careful and consistent with the convention adopted. If, in the reference system we chose, the upward direction is positive, the value of $\mathbf{W}$ should be negative. So, if we write the equation as $\mathbf{N}+\mathbf{W}=0$, it's because we are representing the vectors, and the value of $\mathbf{W}$ is either equal to $m\mathbf{g}$ (with $g=-10 m/s^2$) or $-m\mathbf{g}$ (with $g=10 m/s^2$).
It would also be correct to write the equation as $\mathbf{N}-\mathbf{W}=0$ where $\mathbf{W}$ is equal to $m\mathbf{g}$ (with $g=10 m/s^2$), but the problem with this convention is that we are constraining the direction of the vector to only one possibility, which might not always be true. The best option is to draw the vectors and write the equations in vectorial form, without signs, and let the signs appear when the numerical values for these quantities are inputted. But it is really a matter of convention; once a convention is adopted, you should grab it with all your forces! Example 2: Person standing still¶ Our interest is to draw the FBD for a person standing still: Figure. Free-body diagram of a person standing. The FBD for the standing person above is similar to the previous one; the difference is that now there are two surfaces of contact (the two feet). And under each foot, there is an area of contact, not just a single point. In a way similar to the center of gravity, it is possible to compute a center of force, representing the point of application of the resultant vertical force of the ground on the foot (and it is even possible to compute a single center of force considering both feet). $\mathbf{N_1}$ and $\mathbf{N_2}$ are the normal forces acting on the feet and they were arbitrarily drawn acting on the middle of the foot, but the subject could stand in such a way that the center of force could be at the toes, for example. To exactly determine where the center of force is under each foot, we need an instrument to measure these forces and the point of application of the resultant vertical force. One instrument for that, used in biomechanics, is the force plate or force platform. In biomechanics, the center of force in this context is usually called center of pressure.
Let's derive the equilibrium equation only for the forces in the vertical direction: $$ \mathbf{N_1}+\mathbf{N_2}+\mathbf{W}=0 \;\;\; \implies \;\;\; \mathbf{N_1}+\mathbf{N_2}=-\mathbf{W} $$ Where $\mathbf{W}=m\mathbf{g}\;\;(g=-10 m/s^2).$ This FBD, although simple, already has a complication to be solved: if the body is at rest, we know that the magnitude of the weight is equal to the magnitude of the sum of $\mathbf{N_1}$ and $\mathbf{N_2}$, but we are unable to determine $\mathbf{N_1}$ and $\mathbf{N_2}$ individually (the person could stand with more weight on one leg than on the other). But the total center of force between $\mathbf{N_1}$ and $\mathbf{N_2}$ must be exactly on the line of action of the weight force, otherwise a moment of force will appear and the body will rotate. Example 3: Two boxes on the ground¶ Our interest is to draw the FBD for both boxes: Figure. Free-body diagram of two boxes on the ground. Now we have to consider the forces between the two boxes. Note that the forces $\mathbf{W_1}$ and $\mathbf{W_2}$ are the gravitational forces due to Earth. The boxes also attract each other, but these forces are negligible and we don't need to draw them. The FBD for box 1 is the same as we drew for the ball before. As box 2 acts on box 1 with the contact force $\mathbf{N_1}$ (the normal force), which prevents box 1 from penetrating the surface of box 2, box 1 reacts with a force of the same magnitude but opposite direction; this is the force $\mathbf{-N_1}$. Remember from Newton's third law that the action and reaction forces act on different bodies and these forces do not cancel each other.
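The equilibrium of the two boxes just described can also be checked numerically; the masses below are hypothetical (the text gives no values), and the sign convention is the one adopted above (upward positive, $g=-10\,m/s^2$):

```python
# Two stacked boxes: solve the vertical equilibrium bottom-up.
# All mass values are made-up assumptions for illustration.
g = -10.0            # m/s^2, upward positive
m1, m2 = 2.0, 3.0    # kg, hypothetical masses of boxes 1 (top) and 2 (bottom)

W1, W2 = m1 * g, m2 * g   # weights (vertical components)
N1 = -W1                  # box 1:  N1 + W1 = 0
N2 = -W2 + N1             # box 2:  N2 + W2 - N1 = 0
print(N1, N2)  # 20.0 50.0
```

As expected, the normal force under box 2 supports the weight of both boxes.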
For body 1: $$ \mathbf{N_1}+\mathbf{W_1}=0 \;\;\; \implies \;\;\; \mathbf{N_1}=-\mathbf{W_1} $$ For body 2: $$ \mathbf{N_2}+\mathbf{W_2}-\mathbf{N_1}=0 \implies \;\;\; \mathbf{N_2}=-\mathbf{W_2}+\mathbf{N_1} \;\;\; \implies \;\;\; \mathbf{N_2}=-\mathbf{W_2}-\mathbf{W_1} $$ Where $\mathbf{W_1}=m_1\mathbf{g}\;\;\text{and}\;\;\mathbf{W_2}=m_2\mathbf{g}\;\;(g=-10 m/s^2).$ Note that the magnitude of $\mathbf{N_1}$ is equal to the magnitude of $\mathbf{W_1}$ and the magnitude of $\mathbf{N_2}$ is equal to the sum of the magnitudes of $\mathbf{W_1}$ and $\mathbf{W_2}$. At the end of the first example it's written: "The best option is to draw the vectors and write the equations in vectorial form, without signs, and let the signs appear when the numerical values for these quantities are inputted." If this is to be followed, why then the representation of $-\mathbf{N_1}$ acting on body 2 in the FBD? The answer is that this is a different minus sign; it means that whatever the value of this force is, it should be the opposite of the normal force acting on body 1. Example 4: One segment fixed to a base (no movement)¶ Now we have a segment fixed to a base by a joint and we want to draw the FBD for the segment: Figure. Free-body diagram of one segment and one joint. On any joint one can have a force and a moment of force (but a joint with a free axis of rotation, offering no resistance to rotation, generates no moment of force, only joint forces). For this example, we assumed the most general case, a joint with a force and a moment of force, but we may find later that one of these quantities is zero, and that is OK. And we arbitrarily chose the directions for joint force and moment of force; the actual directions don't matter now, but if we can choose any one, let's be positive!
Forces: $$ \mathbf{F} + \mathbf{W} = 0 \;\;\;\implies\;\;\; \begin{array}{l l} \mathbf{F_{x}} + 0 = 0 \;\;\;\implies\;\;\; \mathbf{F_{x}} = 0 \\ \mathbf{F_{y}} + \mathbf{W} = 0 \;\;\;\implies\;\;\; \mathbf{F_{y}} = -\mathbf{W} \\ \end{array} $$ Moments of force around the center of gravity of the segment: $$ \mathbf{M}+\mathbf{r_{cgp}}\times\mathbf{F}=0 \;\;\; \implies \;\;\; \mathbf{M}+\frac{\ell}{2}W=0 \;\;\; \implies \;\;\; \mathbf{M}=-\frac{\ell}{2}m\mathbf{g} $$ Remember that the direction of $\mathbf{M}$ is perpendicular to the plane of the FBD; it is in the $\mathbf{z}$ direction because of the cross product. Where $\mathbf{r_{cgp}}$ is the position vector from the center of gravity to the proximal joint and $\mathbf{W}=m\mathbf{g}\;(g=-10 m/s^2)$. We can calculate the moment of force around any point; the condition of equilibrium must hold for any choice of point. In this example, we could have calculated the moment of force directly around the joint; in this case the weight causes a moment of force and the joint force does not. Finding the direction of the moment of force can sometimes be difficult. If we had numbers for all these variables, we could simply do the mathematical operations and let the results indicate the direction. For example, suppose the center of gravity is at the origin and that the bar has mass 1 kg and length 1 m. Writing the equilibrium equation for the moments of force around the center of gravity, the moment of force at the joint is:
import numpy as np
m, l, g = 1.0, 1.0, -10.0 # m = 1kg, l = 1m, g = -10 m/s2
r = [-l/2-0, 0, 0] # note how the position vector from origin to F is created: tip minus tail
F = [0, -m*g, 0] # this is the joint force not the gravitational force
M = -np.cross( r, F )
print("The moment of force at the joint is (in Nm):", M)
The moment of force at the joint is (in Nm): [-0. -0. 5.]
The moment of force $\mathbf{M}$ is positive (counterclockwise direction) to balance the negative moment of force due to the gravitational force.
Note that $\mathbf{M}$ is a vector with three components, where only the third component, in the $\mathbf{z}$ direction, is nonzero (because $\mathbf{r}$ and $\mathbf{F}$ were both in the plane $\mathbf{xy}$). And writing the equilibrium equation for the moments of force around the joint, the moment of force at the joint is:
r = [l/2-0, 0, 0] # note how the position vector from origin to W is created: tip minus tail
F = [0, m*g, 0] # this is the gravitational force
M = -np.cross( r, F )
print("The moment of force at the joint is (in Nm):", M)
The moment of force at the joint is (in Nm): [-0. -0. 5.]
The same result, of course. Example 5: Two segments and two joints¶ Now we have two segments and two joints and we want to draw the FBD for each segment: Figure. Free-body diagram of two segments and two joints. Note that the action-reaction principle applies to both forces and moments of force. In the FBD we draw the vectors representing forces and moments of force without knowing their magnitude and direction. For example, the fact we drew the force F1 pointing upward and to the right is arbitrary. The calculation may reveal that this force in fact points in the opposite direction. There is no problem with that; the FBD is just an initial representation of all forces and moments of force in the mechanical model. But once an arbitrary direction is chosen, we have to be consistent in representing the reaction forces and moments of force. More than one force and one moment of force may act at a joint. For example, for a human joint, typically there are many tendons and ligaments crossing the joint and the articulation has a region of contact. However, as we saw earlier, all the forces and moments of force can be reduced to a single force acting on the joint center (in fact to any point we desire) and to a free moment (a force couple) as we just drew.
Let's derive the equilibrium equations for the forces and moments of force around the center of gravity: $$ \mathbf{F_1} + \mathbf{W_1} = 0 $$$$ \mathbf{M_1} + \mathbf{r_{cgp1}}\times\mathbf{F_1} = 0 $$ $$ \mathbf{F_2} + \mathbf{W_2} - \mathbf{F_1} = 0 $$$$\mathbf{M_2} + \mathbf{r_{cgp2}}\times\mathbf{F_2} + \mathbf{r_{cgd2}}\times-\mathbf{F_{1}} - \mathbf{M_1} = 0 $$ Where p and d stand for proximal and distal joints (with respect to the fixed extremity) and $\mathbf{r_{cg\:ji}}$ is the position vector from the center of gravity of body i to the joint j. It is not possible to solve the problem starting with body 2 because for this body there are more unknown variables than equations (both joint forces and moments of force are unknown for body 2). If we start with body 1, there is only one unknown joint force and moment of force and the system is solvable. In general, this is the approach we should take in biomechanics: for a multi-body system, start with the body with the fewest unknown variables, which usually has a free extremity or has a sensor at this extremity able to measure the unknown quantity.
For segment 1, the forces give: $$ \mathbf{F_{1x}} + 0 = 0 \;\;\;\implies\;\;\; $$$$\mathbf{F_{1x}} = 0 $$$$\mathbf{F_{1y}} + \mathbf{W_1} = 0 \;\;\;\implies\;\;\; $$$$\mathbf{F_{1y}} = -m_1\mathbf{g} $$ Moments of force (around $cg_1$): $$ \mathbf{M_1}+\frac{\ell_1}{2}cos(\theta_1)\mathbf{W_1}=0 \;\;\;\implies\;\;\; $$$$ \mathbf{M_1}=-\frac{\ell_1}{2}cos(\theta_1)m_1\mathbf{g} $$ Where $\theta_1$ is the angle of segment 1 with the horizontal and $g=-10 m/s^2.$ For segment 2: $$ \mathbf{F_{2x}} + 0 - \mathbf{F_{1x}} = 0 \;\;\;\implies\;\;\; $$$$ \mathbf{F_{2x}} = 0 $$$$ \mathbf{F_{2y}} + \mathbf{W_2} - \mathbf{F_{1y}} = 0 \;\;\;\implies\;\;\; $$$$ \mathbf{F_{2y}} = -m_1\mathbf{g} - m_2\mathbf{g} $$ $$ \mathbf{M_2} - \frac{\ell_2}{2}\mathbf{F_{2y}} - \frac{\ell_2}{2}\mathbf{F_{1y}} - \mathbf{M_1} = 0 \;\;\;\implies\;\;\; $$$$ \mathbf{M_2} + \frac{\ell_2}{2}(m_1\mathbf{g} + m_2\mathbf{g}) + \frac{\ell_2}{2}m_1\mathbf{g} + \frac{\ell_1}{2}cos(\theta_1)m_1\mathbf{g} = 0 \;\;\;\implies\;\;\; $$$$ \mathbf{M_2} = -\frac{\ell_2}{2}m_2\mathbf{g} - \left(\ell_2 + \frac{\ell_1}{2}cos(\theta_1)\right)m_1\mathbf{g} $$ Where $g=-10 m/s^2.$ This solution makes sense: The force in joint 1 is the necessary force to support the weight of body 1, while the force in joint 2 is the necessary force to support the weight of bodies 1 and 2. Both $\mathbf{M_1}$ and $\mathbf{M_2}$ should be positive (counterclockwise direction) because these joint moments of force are necessary to support the bodies against gravity (which generates a negative (clockwise direction) moment of force on the joints). The magnitudes of the moments of force due to the weight of each body are simply the product between the correspondent body weight and the horizontal distance of the body center of gravity to the joint. This problem could have been solved by calculating the moments of force around each joint instead of around the center of gravity; maybe it would have been simpler, but it would give the same results. You are invited to check that.
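The closed-form results above can also be evaluated numerically; all parameter values below are made-up assumptions (the example gives none), chosen only to check the signs and magnitudes:

```python
import numpy as np

# Numerical check of the two-segment static solution.
# All numerical values are hypothetical assumptions.
g = -10.0                  # m/s^2
m1, m2 = 1.0, 2.0          # segment masses (kg), assumed
l1, l2 = 1.0, 0.8          # segment lengths (m), assumed
theta1 = np.deg2rad(30)    # angle of segment 1 with the horizontal, assumed

F1y = -m1 * g                                      # joint 1 vertical force
M1 = -l1 / 2 * np.cos(theta1) * m1 * g             # joint 1 moment of force
F2y = -m1 * g - m2 * g                             # joint 2 vertical force
M2 = -l2 / 2 * m2 * g - (l2 + l1 / 2 * np.cos(theta1)) * m1 * g
print(F1y, F2y)                      # 10.0 30.0
print(round(M1, 3), round(M2, 3))    # 4.33 20.33
```

Both moments come out positive (counterclockwise), as argued above.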
The case of an accelerated rigid body¶ The formalism above can be extended for an accelerated rigid body where the following conditions must be satisfied: $$ \sum \mathbf{F} = m\mathbf{a_{cm}} $$$$ \sum \mathbf{M_O} = \frac{d\mathbf{L_O}}{dt} $$ Where $O$ is a reference point from which the movement (translation and rotation) of the rigid body is described ($O$ is an arbitrary position, it can be in any place in space). The equations above are called dynamic equations. For a two-dimensional movement and if the reference point $O$ is at the body center of mass (i.e., for a rotation around the center of mass), the dynamic equations become: $$ \sum F_x = ma_{cm,x} $$$$ \sum F_y = ma_{cm,y} $$$$ \sum M_z = I_{cm}\alpha_z $$ Where $\mathbf{z}$ is the axis of rotation passing through the body center of mass and perpendicular to the plane of movement. If the reference point $O$ does not coincide with the body center of mass, the sum of moments of force on this point, $\sum M_{z,O}$, is equal to the sum of moments of force around the center of mass, $\sum M_{z,cm}$, plus the moment of force due to the sum of the external forces, $\sum\mathbf{F}=m\mathbf{a}_{cm}$, acting on the center of mass in relation to point $O$: $$ \sum M_{z,O} = I_{cm}\alpha_z + \mathbf{r}_{cm,O}\times m \mathbf{a}_{cm} $$ The equation above can also be understood as: the time rate of change of the total angular momentum around a reference point $O$, $d\mathbf{L_O}/dt$, is equal to the time rate of change of the angular momentum around the body center of mass, $I_{cm}\alpha_z$, plus the time rate of change of the angular momentum of the body center of mass around the reference point $O$, $\mathbf{r}_{cm,O}\times m \mathbf{a}_{cm}$. 
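As a concrete check of this moment-transfer relation, consider the textbook case of a uniform rod pivoted at one end and released from rest in a horizontal position (all numbers below are illustrative assumptions). Using the classical result $\alpha = 3g/(2l)$ for this configuration, the sum of moments about the pivot matches $I_{cm}\alpha_z$ plus the moment of $m\mathbf{a}_{cm}$ about the pivot:

```python
import numpy as np

# Uniform rod of mass m and length l, pivoted at one end (point O),
# released from rest horizontally. Magnitudes only; values are assumed.
m, l, g = 1.0, 1.0, 10.0   # kg, m, m/s^2

# Classical angular acceleration at release for this configuration.
alpha = 3 * g / (2 * l)

# Left side: sum of moments about the pivot O (gravity only, arm l/2).
M_O = m * g * l / 2

# Right side: I_cm*alpha_z + r_{cm,O} x m*a_cm, with a_cm = alpha*l/2
# (tangential; the centripetal part is zero since omega = 0 at release).
I_cm = m * l**2 / 12
a_cm = alpha * l / 2
rhs = I_cm * alpha + (l / 2) * m * a_cm

print(np.isclose(M_O, rhs))  # True
```

The same equality written with $I_O = I_{cm} + m(l/2)^2 = ml^2/3$ recovers the fixed-axis form $\sum M_{z,O} = I_O\alpha_z$.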
In a variation of the equation above, the vector product at the right side can be evaluated if we compute $d$, the (perpendicular) distance to the line of action of the acceleration vector: $$ \sum M_{z,O} = I_{cm}\alpha_z + m\,a_{cm}\,d $$ Note that if the linear acceleration vector $\mathbf{a}_{cm}$ passes through the reference point $O$, $d=0$, and the equation above becomes the equation for a rotation around the center of mass. We can express the moment of inertia around the reference point $O$ instead of around the body center of mass; in this case, the linear acceleration will also be expressed as the acceleration of the reference point $O$: $$ \sum M_{z,O} = I_{O}\alpha_z + \mathbf{r}_{cm,O}\times m \mathbf{a}_O $$ If the acceleration vector $\mathbf{a}_O$ passes through the center of mass, the cross product is zero and the equation above reduces to a simpler case (but now not around the body center of mass). And there is another condition where a simplification also occurs: if the acceleration $\mathbf{a}_O$ is zero, that is, if the reference point $O$ is fixed, the cross product is zero again. Which reference point to use for solving the dynamic equations is usually a matter of what makes the solution simpler. For example, if the rotation of the body occurs around a fixed axis, it is convenient to sum the moments of force around this axis to eliminate the unknown force at the axis. In human motion analysis, particularly in the three-dimensional case, summing the moments of force around the body center of mass is typically simpler. Example 6: example 5 with one segment accelerated¶ Consider that segment 1 of example 5 is accelerated with linear acceleration $\mathbf{a}_1$ and angular acceleration $\alpha_1$, has a moment of inertia $I_1$ around its center of gravity, and this movement happens in the plane (so the vector $\alpha_1$ has the $\mathbf{z}$ direction).
Now, the dynamic equations for the forces and moments of force around the center of gravity are: $$ \mathbf{F_1} + \mathbf{W_1} = m_1\mathbf{a}_1 $$$$ \mathbf{M_1} + \mathbf{r_{cgp1}}\times\mathbf{F_1} = I_1\alpha_1 $$ So, for the forces: $$ \mathbf{F_{1x}} + 0 = m_1a_{1x} \;\;\;\implies\;\;\; $$$$\mathbf{F_{1x}} = m_1a_{1x} $$$$\mathbf{F_{1y}} + \mathbf{W_1} = m_1a_{1y} \;\;\;\implies\;\;\; $$$$\mathbf{F_{1y}} = m_1a_{1y} - m_1\mathbf{g} $$ And for the moments of force (around $cg_1$): $$ \mathbf{M_1} + \frac{\ell_1}{2} sin(\theta_1)m_1a_{1x} - \frac{\ell_1}{2} cos(\theta_1)(m_1a_{1y} - m_1\mathbf{g}) = I_1\alpha_1 \;\;\;\implies\;\;\; $$$$ \mathbf{M_1} = I_1\alpha_1 + \frac{\ell_1}{2} \left( cos(\theta_1)(m_1a_{1y} - m_1\mathbf{g}) - sin(\theta_1)m_1a_{1x} \right) $$ Where $g = -10 m/s^2$. The equations for segment 2 are the same as in example 5, but since the values of $\mathbf{F_1}$ and $\mathbf{M_1}$ are now different because of the acceleration, so will be the values of $\mathbf{F_2}$ and $\mathbf{M_2}$. We can also solve the dynamic equation for the moments of force, now around joint 1: $$ \mathbf{M_1} + \mathbf{r_{cgp1}}\times\mathbf{W_1} = I_1\alpha_1 + \mathbf{r}_{1cm,j1}\times m \mathbf{a}_1 \;\;\;\implies\;\;\; $$$$ \mathbf{M_1} + \frac{\ell_1}{2}cos(\theta_1)m_1\mathbf{g} = I_1\alpha_1 - \frac{\ell_1}{2}sin(\theta_1)m_{1}a_{1x} + \frac{\ell_1}{2}cos(\theta_1)m_{1}a_{1y} \;\;\;\implies\;\;\; $$$$ \mathbf{M_1} = I_1\alpha_1 + \frac{\ell_1}{2} \left( cos(\theta_1)(m_{1}a_{1y} - m_1\mathbf{g}) - sin(\theta_1)m_{1}a_{1x} \right) $$ Same result as before, of course. Problems¶ Estimate the moments of force exerted by a $1kg$ ball held with the arm outstretched horizontally, relative to an axis that passes through: a) Wrist b) Elbow c) Shoulder A person performs isometric knee extension using a boot of $200N$ weight.
Consider that the distance between the boot center of gravity and the center of the knee is $0.40 m$; that the quadriceps tendon is inserted at $5 cm$ from the joint at a 30$^o$ angle, that the mass of the leg + foot is $4 kg$, that the center of gravity of the leg + foot is $20 cm$ from the center of the knee joint, and that at $0^o$ the knee is extended. a) Calculate the muscle and joint forces at the knee angles 0$^o$, 45$^o$, and 90$^o$. A simple and clever device to estimate the position of the center of mass of a body is the reaction board illustrated below. Figure. Illustration of the reaction board device for the estimation of the center of mass position on a body. a) Derive the equation to determine the center of mass position considering the parameters shown in the figure above. b) Show that it's possible to estimate the mass and center of mass position of a segment by asking the subject to move his or her segment and recalculating the center of mass position. Consider the situation illustrated in the figure below where a person is holding a dumbbell. Figure. A person holding a dumbbell and some of the physical quantities related to the mechanics of the task. a) Determine the value of the elbow flexion force. b) Determine the forces acting on the elbow joint. What are the average moment of force (torque) and force that must be applied by the shoulder flexor muscle for a period of 0.3 s to stop the motion of the upper limb with angular speed of $5 rad/s$? Consider a radius of gyration for the upper limb of $20 cm$; a mass of the upper limb of $3.5 kg$, and that the shoulder flexor muscle is inserted at a distance of $1.5 cm$ from the shoulder perpendicular to the axis of rotation. Consider the system in Example 4, but now the segment (with $1m$ length and $4kg$ mass) is attached to the base by a joint that allows free rotation around the $\mathbf{z}$ axis.
At the instant shown in Example 4 (at the horizontal), the segment has an angular velocity equal to $-5\,rad/s$. Consider $g=-10\,m/s^2$. For this instant, determine: a) The angular acceleration b) The force at the joint c) The moment of force at the joint Consider the foot segment during standing still (foot acceleration is zero) where the entire body is supported by the foot as illustrated in the figure. The distance in relation to the ankle joint of the foot center of mass is $6\,cm$ and of the point of application of the resultant ground reaction force (center of pressure, COP) is $4\,cm$ as illustrated in the figure. The foot mass is $0.9\,kg$ and the ground reaction force (Ry1) is $588\,N$. Consider $g=-9.8\,m/s^2$. Figure. A foot and its free-body diagram. Figure from Winter (2009). a) Determine the moment of force and force at the ankle. Consider the foot segment at an instant during gait when the entire body is supported by the foot as illustrated in the figure. The coordinates of the ankle joint, foot center of mass and the point of application of the resultant ground reaction force (center of pressure, COP) are given in centimeters in the laboratory coordinate system. The foot accelerations are $a = (3.25, 1.78)\,m/s^2$ and $\alpha_z = -45.35\,rad/s^2$, and the moment of inertia is $0.01\,kg\,m^2$. Consider $g=-9.8\,m/s^2$. Figure. Free-body diagram of the foot. Figure from Winter (2009). Study the content and solve the exercises of the text Forces and Torques in Muscles and Joints. References Hibbeler RC (2012) Engineering Mechanics: Statics. 13th ed. Prentice Hall. Hibbeler RC (2012) Engineering Mechanics: Dynamics. 13th ed. Prentice Hall. Ruina A, Rudra P (2013) Introduction to Statics and Dynamics. Oxford University Press. Winter DA (2009) Biomechanics and Motor Control of Human Movement. 4th ed. Hoboken, USA: Wiley. Zatsiorsky VM (2002) Kinetics of Human Motion. Champaign, IL: Human Kinetics.
Data-driven prediction of COVID-19 cases in Germany for decision making
Lukas Refisch, Fabian Lorenz, Torsten Riedlinger, Hannes Taubenböck, Martina Fischer, Linus Grabenhenrich, Martin Wolkewitz, Harald Binder & Clemens Kreutz
The COVID-19 pandemic has led to a high interest in mathematical models describing and predicting the diverse aspects and implications of the virus outbreak. Model results represent an important part of the information base for the decision process on different administrative levels. The Robert-Koch-Institute (RKI) initiated a project whose main goal is to predict COVID-19-specific occupation of beds in intensive care units: Steuerungs-Prognose von Intensivmedizinischen COVID-19 Kapazitäten (SPoCK). The incidence of COVID-19 cases is a crucial predictor for this occupation. We developed a model based on ordinary differential equations for the COVID-19 spread with a time-dependent infection rate described by a spline. Furthermore, the model explicitly accounts for weekday-specific reporting and adjusts for reporting delay. The model is calibrated in a purely data-driven manner by a maximum likelihood approach. Uncertainties are evaluated using the profile likelihood method. The uncertainty about the appropriate modeling assumptions can be accounted for by including and merging results of different modeling approaches. The analysis uses data from Germany describing the COVID-19 spread from early 2020 until March 31st, 2021. The model is calibrated based on incident cases on a daily basis and provides daily predictions of incident COVID-19 cases for the upcoming three weeks, including uncertainty estimates, for Germany and its subregions. Derived quantities such as cumulative counts and 7-day incidences with corresponding uncertainties can be computed. The estimation of the time-dependent infection rate leads to an estimated reproduction factor that is oscillating around one.
Data-driven estimation of the dark figure (the number of undetected cases) purely from incident cases is not feasible. We successfully implemented a procedure to forecast near-future COVID-19 incidences for diverse subregions in Germany which are made available to various decision makers via an interactive web application. Results of the incidence modeling are also used as a predictor for forecasting the need of intensive care units. The current COVID-19 pandemic is far from over and affects more or less every country on the globe. The evolution of new variants of concern, such as Delta and possibly Omicron, increases the infectiousness of the disease around the globe. Several vaccines have been developed and came to widespread application in 2021 but have not yet reached enough people to effectively contain the virus evolution and spread. In Germany, the situation in late fall of 2021 is grim: Hospitals and hospital personnel are working at their limit capacity to treat individuals infected with COVID-19. Due to exhausted capacities in some regions, the air force of the national army has started to fly patients across the country to enable treatment of every individual that needs intensive care, often including ventilation. Mathematical models of infectious disease epidemiology have experienced a boost of attention since the beginning of the COVID-19 pandemic. One can divide these models into three categories according to their purpose: scenario simulation, nowcasting, and forecasting. Scenario simulation focuses on differing assumptions about some aspects of the model in order to compare and illustrate differences between several in principle conceivable progressions of the transmission and other dynamics; such scenario studies do not allow for a proper uncertainty assessment. These approaches are used to examine the impact of changing certain parameters in the system, e.g. social behaviour, vaccination rate, etc., see e.g. [1].
Nowcasting focuses on the precise description of the present situation based on incomplete, noisy and/or systematically biased data about the current state [2, 3]. Forecasting tries to make predictions about the near future, providing policy makers with reliable estimates of advancing developments [4]. Similar to nowcasting, forecasting is strongly oriented towards realistic settings. The work presented in this publication focuses on a near-future prediction and can therefore be classified as forecasting. Resources of hospitals are limited and decision makers have to organize planning of capacities on a regional level. We provide a forecasting tool about the situation on the incidence level of cases as well as the intensive care unit occupation level.
The SPoCK project
In Germany, local health authorities collect data about the infection dynamics on population level as mandated by the Infektionsschutzgesetz "infection protection act" (IfSG) and report it to the national public health institute, the Robert Koch-Institut (RKI). In addition, the DIVI Intensivregister, which is run by the RKI with support of the Deutschen Interdisziplinären Vereinigung für Intensiv- und Notfallmedizin "German Interdisciplinary Association for Intensive and Emergency Medicine" (DIVI), collects and publishes data about the daily occupation of intensive care unit (ICU) capacities on the clinic level. The project named Steuerungs-Prognose von Intensivmedizinischen COVID-19 Kapazitäten (SPoCK) makes use of these data sources and forecasts in a data-driven manner the number of occupied ICU beds. The workflow within the SPoCK project is depicted in Fig. 1. Schematic workflow of the SPoCK project. The SPoCK project predicts the needed hospital capacity of ICUs for COVID-19 patients. A key ingredient is the number of newly reported cases from the RKI which also has to be predicted (indicated by blue box).
Results are used for visualization by the DLR and by decision makers, such as the BBK and RKI as well as local and regional health authorities. Several decision makers including the Bundesgesundheitsministerium "Federal Ministry of Health" (BMG), the RKI, the local planners of ICU capacities as well as the Bundesamt für Bevölkerungsschutz und Katastrophenhilfe "Federal Office of Civil Protection and Disaster Assistance" (BBK) incorporate these predictions into their risk assessment of the current COVID-19 situation. SPoCK is utilizing a two-step procedure: (1) Data-driven forecasting of the future number of daily infections with COVID-19. In addition, the predicted incidences are visualized on an interactive web application provided by the Deutsches Luft- und Raumfahrtzentrum (DLR) called Pandemic Mapping and Information System for Germany (panDEmis). (2) The number of occupied ICU beds is fitted and forecasted by our cooperation partners. The results of the first step are utilized as a main predictor to obtain short-term future predictions on the level of COVID-19-specific occupation of beds and hence ICU capacities. In this paper, we describe the first step of SPoCK, i.e. fitting and short-term forecasting of the newly reported cases of COVID-19 in Germany. That means, we describe the daily analysis, prediction and publication via panDEmis of incident cases of COVID-19 in different regions in Germany which are, in addition to the entire country, the 16 federal states (Bundesländer) and their 413 counties (Land- und Stadtkreise), summing to a total of 430 regions. A standard approach when describing infectious disease transmission is the use of compartmental or Susceptible-Infected-Recovered (SIR)-like models [5]. In general, both approaches divide the population into subpopulations with disjoint properties.
Transition rates allow for flows between the subpopulations and define, in combination with the initial values of the subpopulations, the time evolution of the system. The ordinary differential equation (ODE) representation of the compartmental scheme we use is the well-known Susceptible-Exposed-Infected-Recovered (SEIR) model [6]: $$\begin{array}{*{20}l} \begin{array}{rrrrrl} \dot{S} &=& -\beta(t)\cdot I\cdot S/N & & \\ \dot{E} &=& \beta(t)\cdot I\cdot S/N & -\delta\cdot E & \\ \dot{I} &=& & \delta\cdot E & -\gamma\cdot I \\ \dot{R} &=& & & \gamma\cdot I \end{array} \end{array} $$ with N=S+E+I+R representing the entire population and where the dot notation is used to indicate time derivatives. Furthermore, β, γ and δ denote the infection rate, the rate of becoming infectious and the rate with which one dies or recovers, respectively. The rationale in choosing this model class is that it is concise, which is important for frequent evaluation, and allows for a more flexible infection time compared with the standard SIR model. A special characteristic of the current pandemic is the massive political and social reaction. In contrast to, e.g., the annual influenza season during which the social and professional life used to proceed pretty much as usual, the COVID-19 pandemic has led to vast political interventions and personal restrictions aiming mainly at the reduction of infections [7]. Within the SEIR scheme these changes over time can be described by a time-dependent infection rate β(t) which translates to an effective time-dependent reproduction number \(R(t)=\frac {\beta (t)\cdot S}{\gamma \cdot N}\). The latter quantifies how many other people are infected on average by a single infectious individual and determines at which rate the number of currently infectious individuals is growing (R(t)>1) or decaying (R(t)<1). It should be noted that, despite the fact that β(t) is extrapolated as remaining constant (see Eq. 4), R(t) is not necessarily constant.
This is because R(t) includes the monotonically decreasing susceptible density \(\frac {S(t)}{N}\). The dynamics of all additional states can, for one example, be found in the supplement (Additional file 1). There are several studies dealing with the problem of a time-dependent infection rate in different manners. For example, at the beginning of the COVID-19 pandemic the impact of different non-pharmacological interventions (NPIs) was examined by implementing β(t) via different variants of (smoothed) step functions [8–11]. Often, these approaches are restricted to time ranges in which the infection rate is assumed to be constant or monotonically decreasing or increasing, respectively. In contrast, we aim for a more general approach which enables the infection rate to vary flexibly, i.e. to decrease and/or increase repeatedly within the considered time range. This is necessary for an accurate description of the COVID-19 transmission dynamics since it is influenced by many factors that may vary over the course of the ongoing COVID-19 pandemic: Various NPIs are implemented, repealed and reintroduced iteratively [12]. The population's compliance to regulative measures changes over time [13]. Seasonal effects, e.g. weather conditions, lead to changes in infection risk [14]. Mutations alter the physiological mechanisms underlying the disease transmission and other aspects [15]. Vaccinations reduce the population's susceptible fraction [16]. Air pollution may enhance COVID-19 severity [17]. Quantifying the effects of the above points on the infection rate is hardly feasible, and within an evolving pandemic it is practically impossible. Therefore, we omit an explicit formulation of the above effects and strive for an estimation of an effective infection rate.
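The SEIR system (1) with a time-dependent infection rate can be integrated numerically. The following is a minimal sketch, not the authors' implementation: the parameter values and the toy step-shaped β(t) are illustrative assumptions, not fitted quantities.

```python
import numpy as np
from scipy.integrate import solve_ivp

def seir_rhs(t, y, beta, delta, gamma, N):
    """Right-hand side of the SEIR system with time-dependent beta(t)."""
    S, E, I, R = y
    dS = -beta(t) * I * S / N
    dE = beta(t) * I * S / N - delta * E
    dI = delta * E - gamma * I
    dR = gamma * I
    return [dS, dE, dI, dR]

# Illustrative (not fitted) values: latency ~3 days, infectious period ~7 days
N = 83_000_000
delta, gamma = 1.0 / 3.0, 1.0 / 7.0
beta = lambda t: 0.4 if t < 30 else 0.1   # toy step-shaped infection rate

y0 = [N - 200, 100, 100, 0]               # S, E, I, R at time zero
sol = solve_ivp(seir_rhs, (0, 120), y0, args=(beta, delta, gamma, N),
                t_eval=np.arange(121), rtol=1e-8)

# Effective reproduction number R(t) = beta(t) * S(t) / (gamma * N)
R_t = np.array([beta(ti) for ti in sol.t]) * sol.y[0] / (gamma * N)
```

Note how a drop of β(t) below γ·N/S(t) switches the epidemic from growth to decay, mirroring the R(t)>1 versus R(t)<1 distinction above.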
In order to fit a strictly positive and time-dependent infection rate simultaneously with the SEIR model's parameters, we introduce the following parametrization for the infection rate: $$\begin{array}{*{20}l} \beta(t) = b\cdot \frac{1}{1+\mathrm{e}^{-f(t)}}\:\:, \end{array} $$ where the argument of the exponential function is given by an interpolating cubic spline $$\begin{array}{*{20}l} f(t) = \text{cubic\_spline}\left(t,\{\tau_{i},u_{i}\}_{i\in\{1,\dots,n\}}\right)\:\:. \end{array} $$ We utilize joint estimation of input spline and ODE parameters as introduced for biological systems in [18]. The composition of the interpolating spline (3) with the logistic function (2) allows for a nearly arbitrary time dependence, while still ensuring that the infection rate β(t) is strictly positive, smooth and restricted to a maximal value b. The cubic spline curve is determined by estimated parameters ui=cubic_spline(τi) that represent its values at fixed and evenly spaced dates τi for i∈{1, …, n−2} which cover the time range of observed data. We chose n=15, which leads to roughly one degree of freedom per month and turned out to be a reasonable choice during the development process. In general, there is a trade-off: the spline should be flexible enough to describe all infection waves, but must not overfit in any of the fitted regions. In our model, the last two spline knots are placed after the date tLast of the last data point: τn−1=tLast+50d and τn=tLast+300d. The value un−1 is fitted to allow for some flexibility in the most recent regime, whereas un=0 is fixed for numerical stability, reflecting the end of the pandemic in at least 300 days. The predictions for the infection dynamics are primarily determined by the time-dependent infection rate β(t). In general, assumptions for the future development of β(t) are difficult to justify as many different factors contribute to it.
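The parametrization of Eqs. (2) and (3) can be sketched as follows. The knot grid and knot values used here are illustrative placeholders; in the described procedure the values ui are estimated jointly with the ODE parameters.

```python
import numpy as np
from scipy.interpolate import CubicSpline

def make_beta(tau, u, b):
    """beta(t) = b * logistic(f(t)), with f an interpolating cubic spline
    through the knots (tau_i, u_i); strictly positive and bounded by b."""
    f = CubicSpline(tau, u)
    return lambda t: b / (1.0 + np.exp(-f(t)))

# Illustrative knot grid and values; in the described procedure, n = 15 knots
# cover the data range and the last knot value u_n is fixed to 0
tau = np.array([0.0, 30.0, 60.0, 90.0, 120.0, 170.0, 420.0])
u = np.array([1.0, -0.5, 0.3, -1.0, 0.2, 0.1, 0.0])
b = 0.5                                   # maximal infection rate

beta = make_beta(tau, u, b)
t = np.linspace(0.0, 420.0, 200)
vals = beta(t)                            # strictly inside (0, b)
```

Squashing the spline through the logistic function is what guarantees positivity and the upper bound b, no matter which knot values the optimizer explores.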
For illustrative purposes, several different assumptions could be made and visualised as done, e.g., in various online simulator tools [19]. For example, one such scenario study nicely illustrates the effectiveness of a Test-Trace-Isolate strategy [20]. For a data-driven approach focused on short-term forecasts, we need to be more practical: For extrapolation purposes, we fix $$ \beta(t>t_{\text{Last}}) = \beta(t_{\text{Last}}) $$ i.e. we assume the infection rate to be constant starting from the day on which the last data point is reported. Alternatively, for β(t>tLast) some functional form incorporating the derivative or even higher-order derivatives could be utilized. As it is a priori totally unclear which functional form and additional assumptions might be appropriate, we decided to go for the simplest ansatz by fixing it to β(tLast). Note also that by fixing β(t) for t>tLast, some extrapolation dynamics is already built in, as the model system has an integrated delay due to its structure.
Data-driven approach
Typically, there exists a multitude of model classes and structures which can be used to describe the same phenomenon. However, it is generally not possible to transfer results about estimated parameters between different models in a straightforward manner due to their differing mechanistic structures. To circumvent this problem, we here rely on a purely data-driven approach, meaning that no prior knowledge about parameter values is incorporated into the optimization procedure. The only three a priori fixed parameters are the initial numbers of individuals in the susceptible, the exposed and the recovered state: Sinit, Einit and Rinit. Time point zero t0 is set to the first day that has at least a total of 100 reported cases to ensure the well-mixing assumption of ODE modeling. Sinit was set to the total population of the respective region as given by the Federal Statistical Office of Germany [21].
Einit was set to γ·Iinit/δ, which is motivated by the assumption that \(\dot {I}\approx 0\) at the beginning of an epidemic, reflecting a slow onset. Rinit is set to zero. The only remaining initial occupation number Iinit is estimated from the data.
Link between model and observed data
In order to calibrate the ODE model, it needs to be linked to the observed data. The data we use for calibration is the daily incidence yi published by the reporting date (Meldedatum) ti at the local health authority. Therefore, we introduce the observation function $$\begin{array}{*{20}l} y(t_{i}) = q\cdot\lambda_{D(t_{i})}\cdot\left(\delta\cdot E(t_{i})\cdot \Delta\right)\:\:, \end{array} $$ where the parameters can be interpreted as follows: q∈[0,1] is the fraction of all infectious individuals that are detected and reported. D(ti)∈{1,...,7} is an index for the weekday at date ti where {1,...,7} are naturally identified with the weekdays W={Monday,…,Sunday}. λD is a factor for the weekday D that adjusts for the weekly modulation occurring in the IfSG data (see Weekly modulation factors). (δ·E(t)·Δ) approximates the influx into the state I(t) of Eq. 1. As the considered data represents daily incidences, we set Δ to 1 day. This approximation of the true incidence quantity \(\int _{t-1}^{t}\delta \cdot E(t^{\prime }) \mathrm {d}t^{\prime }\) is exact if the state E(t) remains constant within that day. Comparison with this exact but computationally much more expensive approach showed only minor deviations for real data applications. The observation function (5) connects the model's predictions to the reported data. The observations are assumed to scatter around this mean according to a normal distribution: $$\begin{array}{*{20}l}y_{i} = y(t_{i}) + \epsilon_{i},\quad\epsilon_{i}\sim\mathcal{N}(0,\sigma_{i}^{2})\:\:.
\end{array} $$ As we are dealing with a count process, we use the standard deviation inspired by a Poisson model $$ \sigma_{i}=C\cdot\sqrt{1+y(t_{i})}\:\: $$ where the addition of 1 accounts for numerical instabilities if the number of infected y(ti) becomes very low. As the standard deviation grows with the square root of the incidences, the variance grows linearly with the expectation value. The error parameter C is fitted jointly with all others.
Investigated time frame
The results of the presented ansatz are calculated on a daily basis. The data used for fitting then consists of a time course from the start of the pandemic, March 1st, 2020, through the most recent report with one data point per day. In this paper, we present the methodology; the results were generated on April 1st, 2021. The data fitted had therefore registered infections up to March 31st, 2021. We publish and assess predictions for a forecast horizon of three weeks. This period was selected because we think that the assumption of Eq. 4 does not justify a much longer time frame.
Weekly modulation factors
The IfSG data shows an oscillatory pattern with a period of one week, which can be quickly evaluated by plotting the distribution of incidences per weekday relative to the rolling 7-day average; an analysis figure is provided in the supplement. The main reason for this is the reporting procedure, which displays a major delay during weekends, rather than actual infection dynamics. Therefore, we account for this effect within the observation function via seven weekday-specific factors λD with the integer D∈{1,...,7}. In order to guarantee that the factors λD essentially do not change the 7-day incidence and to separate the weekly modulation from a global scaling of the observation function, which is realized via the factor q, we furthermore set the constraint that $$\begin{array}{*{20}l} \sum_{D\in\{1,...,7\}}\lambda_{D}=7\:\:.
\end{array} $$ As a consequence, we are left with six degrees of freedom to describe the weekly effects. For a convenient implementation in the used software, we introduce a Fourier series with six parameters Θweekly={A1,A2,A3,ϕ1,ϕ2,ϕ3}: $$\begin{array}{*{20}l} \psi(t) = A_{0} + \sum_{k=1}^{3} A_{k}\cdot \cos\left(k\omega t + \phi_{k}\right) \end{array} $$ where offset and frequency are fixed to $$\begin{array}{*{20}l} A_{0} = 1,\quad \omega = \frac{2\pi}{7\,\text{days}}\:\:. \end{array} $$ Instead of fitting the factors λD directly, we rewrite them in terms of equation (9) as $$\begin{array}{*{20}l} \lambda_{D} = 7\cdot\frac{\psi(D)}{\sum_{j=1}^{7}\psi(j)}\:\: \end{array} $$ and calibrate the parameters Θweekly; this form satisfies the constraint (8) by construction. Doing so allows setting the amplitudes A1, A2 and A3 to zero in order to get an adjusted curve that does not feature the weekly oscillations and therefore reflects the ideal case of no reporting artifacts in the data.
Correction of last data points
The IfSG data published on date tn contains information about the reported cases at all past dates tn,tn−1,…,t1 since the beginning of reporting. However, due to reporting delays between the test facilities, the local health authorities and the RKI, the data update from date tn−1 to tn contains not only cases that were reported to the local health authorities at date tn−1, but also before that at dates tn−2,tn−3,… and so on. This means that the number of reported cases on day tn will be underestimated, especially for the most recent dates. Meaningful handling of this data artifact can be done in at least two ways: For instance, one could choose to ignore some of the latest data points, since they are most prominently affected by this data artifact. An alternative is to estimate the systematic deviation from historically published data sets. In order to avoid the bias towards smaller incidences in the prediction, the data can be adjusted accordingly.
Therefore, one assumes that future data sets will not change reported counts that are older than four weeks, i.e. older than tn−28. Let \(N_{t_{1}}^{t_{2}}\) denote the number of reported cases that were published at time point t1 to be reportedly infected at date t2, where \(N_{t_{1}}^{t_{2}>t_{1}} = 0\) as future cases cannot be reported. Then, one can learn from this history of published data sets the correction factor CFk $$ CF_{k} = \frac{\sum_{\hat t} N_{\hat t}^{\hat t-k}}{\sum_{\hat t} N_{t_{\text{Last}}}^{\hat t-k}} $$ by which the initial publication of k-day-old counts had to be corrected to obtain the number in the latest data set tn. The factors CFk can then be applied to the newest data set. This was done for Germany and all the federal states separately. We showcase the resulting differences of these two data preprocessing strategies in the Averaging of approaches section. We give some summary statistics of this quantity in the supplement. For the county level, this adjustment is not as crucial for two reasons: 1) the count numbers are much lower, so the stochasticity can lead to wrong correction factors, and 2) the shape of the estimated dynamics is inherited from the federal states in our model.
Parameter estimation
In general, we follow the maximum likelihood estimation (MLE) approach. As there are a total of 429 regions for which the data has to be fitted and predictions are calculated, we rely on a two-step procedure to reduce computation time which is described in the following paragraphs.
Federal states and Germany
The parameter estimation problem given by the above defined ODE model and the IfSG daily incidence data is solved separately for Germany and each federal state by an MLE approach. The latter has been well established for ODE models [22]. The deviation between data and the model's observation function as specified in Eq. 5 is minimized, taking into account the error model of Eqs. 6 and 7.
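Under the error model of Eqs. 6 and 7, maximizing the likelihood amounts to minimizing a weighted least-squares objective plus the log-variance terms. The following is a minimal illustration, not the actual implementation: a toy exponential-growth model stands in for the full SEIR observation function, and all names and values are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import minimize

def neg_log_likelihood(params, t, y_obs, model):
    """Gaussian negative log-likelihood with the Poisson-inspired standard
    deviation sigma_i = C * sqrt(1 + y(t_i)) of Eqs. 6 and 7."""
    C = np.exp(params[-1])      # error parameter C, fitted on log scale
    theta = params[:-1]         # remaining model parameters
    y_model = model(t, theta)
    sigma2 = np.maximum(C**2 * (1.0 + y_model), 1e-12)  # numerical guard
    return 0.5 * np.sum((y_obs - y_model) ** 2 / sigma2
                        + np.log(2.0 * np.pi * sigma2))

# Toy observation model: exponential growth of daily incidences (illustrative
# stand-in for the SEIR observation function of Eq. 5)
model = lambda t, th: th[0] * np.exp(th[1] * t)

rng = np.random.default_rng(0)
t = np.arange(30.0)
y_true = model(t, [50.0, 0.08])
y_obs = y_true + rng.normal(0.0, 2.0 * np.sqrt(1.0 + y_true))

res = minimize(neg_log_likelihood, x0=[40.0, 0.05, 0.0],
               args=(t, y_obs, model), method="Nelder-Mead")
theta_hat = res.x   # jointly estimated model parameters and error parameter
```

The key point is that the error parameter C is estimated jointly with the dynamic parameters, so the weighting of the residuals is itself data-driven.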
The simultaneous parameter estimation of the spline parameters ui follows the lines of [18]. In particular, no explicit regularization term is implemented that penalizes non-vanishing spline curvatures. A full list of parameters Θ and their estimation results \(\hat \Theta \) is shown in the supplement (Additional file 1) for one example, the region of Germany.
County level
Analysis at the rural and urban county level (Land- and Stadtkreise) is important to obtain a spatially resolved picture of the infection dynamics in Germany. The previously described approach is not computationally feasible here because the analysis of 429 regions cannot be performed within 24 hours without access to a sufficiently large computing cluster which can be used 24/7 without queuing. Moreover, the number of infected individuals can generally be so small at the county level that inference and prediction based on a purely deterministic model is not appropriate. Therefore, we used the results on the higher-level administrative structure, i.e. the fitted model of the federal state, as prior information about the dynamics, and scaled it down to the county level for predictions. More specifically, the county-level data was used to estimate merely two parameters in a county-specific manner: the scaling parameter q from equation (5), which in this context can be related to the proportion of current infections occurring in the county c, and the error parameter C from equation (7), which quantifies the stochasticity of county-level observations analogous to its meaning on the level of federal states. All other parameter values for a county c are taken from the estimated set of parameters \(\hat \Theta _{FS(c)}\) for the corresponding federal state FS(c). The county-level dynamics might change rapidly as new clusters of infection emerge. For predictions, it is important that such rapid changes are detected by the model calibration procedure, i.e.
fitting of q and C has to account for such rapid changes. We implemented this requirement by exponentially weighting down the county-level data observed in the past by increasing the standard deviations via $$ {}\sigma^{2}_{i}\longleftarrow\frac{\sigma^{2}_{i}}{w_{i}}\:\:,\quad w_{i}=A\cdot\sqrt{\left(\exp{(t_{i}-t_{\text{Last}})/\tau}\right)^{2}+\left(w_{\min}/A\right)^{2}}\:\:. $$ Here, A=7.56 denotes the normalization factor that ensures that the sum of all weights wi is equal to one. Furthermore, wmin=0.01·A denotes the minimal weight factor used for data observed in the past. wmin is necessary for numerical reasons: the first summand under the square root decreases exponentially towards zero and would (without the additional second summand) lead to a divergence of the used standard deviation. The value of 0.01 is somewhat arbitrary. It effectively serves as a lower bound on the weights (or an upper bound on the standard deviation, respectively) for data points that lie far in the past. A thorough evaluation of this hyperparameter of value 0.01 has not been performed; however, it is not expected to have a crucial impact on the results. Moreover, we chose τ=7 as the time constant of this weighting step. To be clear, on the county level, σi from equation (7) should be thought of as first being transformed according to the mapping (13) before entering equation (6) as the standard deviation of Gaussian observation errors. Just as the analysis for the federal states, the described scaling procedure for the counties is updated on a daily basis, i.e. the county-specific parameters q and C are updated every day. This accounts for time-dependent deviations of the local infection history from the federal state level, i.e. each county has individual kinetics.
Calculation of uncertainties
To quantify the uncertainty in the predictions of the model, our forecasting tool provides confidence intervals along with the proposed predictions.
Here, we describe two main sources of uncertainties: parameter uncertainty and approach uncertainty. The first is captured by simulating all parameter combinations that agree with the observed data, as will be explained in the Profile likelihood analysis section; the second is incorporated by running the analysis with several models, as detailed in the Averaging of approaches section.
Profile likelihood analysis
For non-linear models, uncertainties for estimated parameters can be determined using the profile likelihood (PL) method, which estimates parameter values that still ensure agreement of model and data to a certain confidence level in a pointwise and iterative manner [23]. This approach has been showcased for infectious disease models [24]. Parameter uncertainties naturally translate to prediction uncertainties which can be analyzed systematically [25]. Following the given references, we simulate the data-compatible parameter combinations from the parameter profiles and then take the envelope of the resulting family of curves to obtain confidence intervals. One could also analyze the uncertainty of a model prediction directly via the prediction profile likelihood method [26]. Prediction profiles need to be computed via a costly iterative fitting procedure for each predicted quantity and time point separately. However, by using the parameter combinations from the profile likelihood method, we can calculate uncertainties for any desired model quantities and time points only by simulation, thus rendering this method more efficient for our purposes.
Averaging of approaches
When utilizing ODE models to describe certain aspects of reality, a multitude of assumptions are implicitly made, which include (but are not limited to) the selected model structure, the noise model of the data, and the appropriate data preprocessing. All these decisions result in a certain approach.
These necessary decisions along the modeling process impact the space of possibly described and therefore also predicted dynamics. To account for this origin of uncertainty, we perform the procedure described so far simultaneously for several approaches and merge their results into one comprehensive result. The latter is done by taking the mean / minimum / maximum of the different approaches' MLE / lower bound / upper bound curves. Accounting for different modeling decisions prevents overconfidence in the results. Since April 2020, the described methodology has delivered daily predictions; the ansatz has evolved, and several changes and refinements have been implemented. Currently, the resulting predictions for ICU bed capacity, which use estimated incidences derived by the present paper as a main predictor, are reported twice per week to public health decision makers. The presented methodology and results were generated on April 1st, 2021. The data fitted had therefore registered infections up to March 31st, 2021.
COVID-19 spread in Germany
For the aggregated data over all of Germany, we obtained a fit and predictions with uncertainties as shown in Fig. 2. The fitted data can be described by the model (panels a and b) and the prediction is a reasonable continuation of the last data points. Since we adjusted for weekday effects, the adjusted trajectory can be assessed and results in a smoothing of the trajectory (panel c). The estimated reproduction number R(t) oscillates around a value of 1 and illustrates the effect of political countermeasures and the population's compliance with them (panel d). In general, oscillations in dynamical systems are often attributed to feedback with delay, which is also the case here for the reproduction number R(t). Several additional quantities of interest, such as the 7-day incidence (panel e) or the cumulative number of cases (panel f), can be computed from the model's predictions.
In addition, the associated confidence intervals of these quantities can be determined using the parameter sets below the 95% threshold of the likelihood profiles. We stress again that only the incidence data was used for model calibration (panels a and b).

Fit and prediction for Germany. The incidence data of the entire time course is fitted (panel a) to estimate all dynamic parameters, including the time-dependent infection rate that corresponds to R(t) (panel d). Predictions of incidences (panels b and c) and derived quantities (panels e and f) are shown for a zoomed-in time span. 95%-confidence intervals (color-shaded areas) are inferred by profile likelihood calculation. The independent results for all federal states are shown in the supplement (Additional file 1)

COVID-19 spread in subregions of Germany

For the county level (Landkreise), we obtain results by the scaling approach described in the County level section. The shape of the dynamics is preserved and describes the latest data. Due to the exponential scaling on later data points, it is unlikely that the entire time course is described well by the scaled dynamics; as we are primarily interested in the forecast, we display only the latest time interval. The data is noisier due to lower numbers of cases and inhabitants (Fig. 3). For clarity, we show already merged results here (see Approach averaging section). Results for all counties can be found in the supplement (Additional file 1).

Fit and prediction for one federal state and four counties. The dynamics of the exemplary federal state Baden-Württemberg (panel a) govern the dynamics of the corresponding Landkreise, four of which are shown here (panels b through e).
For regions with fewer inhabitants, lower case numbers are expected: note the different scaling of the y-axis for the federal state and the counties

Approach averaging

The analyses can be carried out for different approaches representing a variety of a priori equally feasible modeling strategies. To account for the uncertainty that arises from (possibly over-)simplifying modeling assumptions, these different approaches are analyzed independently of each other. After results for all regional entities, i.e. federal states (as in Fig. 2) and counties (as in Fig. 3), have been obtained for each approach, the results are merged into one comprehensive prediction, which by construction (see Averaging of approaches section) features a higher uncertainty, now including both the uncertainty in the data and the uncertainty about which modeling strategy is used. We illustrate this for two approaches which differ only in their handling of the most recent data points (Fig. 4). In general, this methodology generalizes to an arbitrary number of different approaches, with the available computing resources as the only limiting factor.

Merged approaches for the example of Germany. The two approaches differ in their data handling strategies for considering reporting delays: Approach 1 (panel a) simply ignores the two latest data points. Approach 2, in contrast, uses estimated correction factors on the latest data points (panel b). The result of the merging (panel c) indicates that both approaches describe the data well but make differing predictions; therefore the resulting uncertainty is larger than the individual uncertainties. In general, this procedure generalizes to more approaches

Availability of results

Sound political or social decisions are based on an empirical or prognostic foundation.
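The merging rule described above (mean of the MLE curves, pointwise minimum of the lower bounds, pointwise maximum of the upper bounds) can be sketched in a few lines. The dictionary layout of the per-approach results is a hypothetical interface for illustration, not the authors' implementation:

```python
import numpy as np

def merge_approaches(approaches):
    """Merge forecast curves from several approaches evaluated on a common
    time grid: mean of MLE curves, pointwise min of lower bounds,
    pointwise max of upper bounds."""
    mles = np.array([a["mle"] for a in approaches])
    lowers = np.array([a["lower"] for a in approaches])
    uppers = np.array([a["upper"] for a in approaches])
    return {
        "mle": mles.mean(axis=0),
        "lower": lowers.min(axis=0),
        "upper": uppers.max(axis=0),
    }

# Two hypothetical approaches on a two-point time grid
a1 = {"mle": np.array([1.0, 2.0]), "lower": np.array([0.5, 1.5]), "upper": np.array([1.5, 2.5])}
a2 = {"mle": np.array([3.0, 4.0]), "lower": np.array([2.0, 3.0]), "upper": np.array([4.0, 5.0])}
merged = merge_approaches([a1, a2])
```

By construction the merged band contains each individual band, which is precisely why the comprehensive prediction carries a higher, more honest uncertainty.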
To make the daily generated predictions available to various stakeholders, the forecasts are integrated into a web application called panDEmis, in which the recent infection situation is analyzed and displayed. For all registered users of the DIVI Intensivregister, the tool is available at https://pandemis.dlr.de/de/#/overview. Current capacities of hospital beds and intensive care units and the exposed population in the catchment areas of hospitals are merged with the forecast data. The combined display of all available data sets provides a situation picture for each day, including past and future time steps. Figure 5 shows different features of the web application from May 17th, 2021 for the occurrence of infection in the map of entire Germany (panel b), as well as for the selected administrative district of Bayern (panel a). Here, the blue graphs represent 1) the daily reported new infections by the RKI, 2) the incidence of COVID-19 cases in the past 7 days per 100,000 people and 3) the cumulative infections. The prognosis is displayed as a red curve, including a 95% confidence interval. All data can be interactively analyzed and visualized for different administrative units, i.e. federal states and the county level.

panDEmis visualization. On the interactive web application called panDEmis, predictions for incidences, the 7-day average, as well as cumulative cases can be inspected for all subregions (panel a). The region can be selected through a map indicating all the regions (panel b). For the chosen regional district, historic data sets and predictions can be selected and different layers can be chosen for visualization (panel c). Additionally, key figures about the current pandemic situation, such as incidences and ICU bed capacities, are displayed for the selected region (panel d)

The results of this incidence modeling approach are also a main predictor for a prediction analysis of ICU beds.
The results of this second analysis step, which is not detailed within this paper, are available for all registered users of the DIVI Intensivregister at https://www.intensivregister.de/#/aktuelle-lage/prognosen. Different model classes such as ODE models or stochastic differential equation (SDE) models, with or without mixed effects, could be used for a data-driven parameter estimation approach. An SDE approach might be beneficial for small regions with low infection numbers or during times with very low total infection numbers. In these cases, local outbreaks dominate the infection dynamics and the population is not well-mixed, which renders an ODE approach ineffective. A well-mixed system (or here: population) implies that the infection probability is equally high or low for all susceptible persons and that the infection dynamics follow some averaged infection probability. For the presented regional entities, the underlying assumptions for ODE modeling are reasonable and the ODE model was successfully adapted. We here focused on a pragmatic procedure that allows daily analysis and reliably calculates predictions. When fitting data about the number of reported cases of an infectious disease outbreak, it is beneficial to fit incidences (or fluxes) instead of the total (or cumulative) number of cases [27]. The residuals of a fit on cumulative data will be correlated by construction, as every data point must be at least as large as the previous one; this conflicts with the assumption of independent measurement errors made by most noise models. Thus, the uncertainty would be underestimated in these cases and the obtained results would be overly confident. By fitting the model to incidence data, the measurement errors are not correlated by this effect. Of course, there can be additional reasons for correlations in the residuals.
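The residual-correlation argument can be made concrete with a small simulation, assuming (for illustration only) a constant true incidence with Poisson noise: the same independent errors that leave incidence residuals uncorrelated accumulate into strongly autocorrelated residuals on the cumulative curve.

```python
import numpy as np

rng = np.random.default_rng(1)
true_incidence = np.full(200, 100.0)
observed = rng.poisson(true_incidence).astype(float)

# Residuals when fitting the (known, constant) incidence directly:
# independent measurement errors
res_inc = observed - true_incidence

# Residuals against the cumulative curve: correlated by construction,
# because each cumulative point carries all previous errors forward
res_cum = np.cumsum(observed) - np.cumsum(true_incidence)

def lag1_autocorr(r):
    """Sample lag-1 autocorrelation of a residual series."""
    r = r - r.mean()
    return np.dot(r[:-1], r[1:]) / np.dot(r, r)

print("incidence residuals:", lag1_autocorr(res_inc))   # typically near 0
print("cumulative residuals:", lag1_autocorr(res_cum))  # typically near 1
```

A standard noise model applied to the cumulative residuals would treat these highly dependent points as independent evidence, which is exactly the mechanism by which uncertainty gets underestimated.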
A good example of this is the prominent weekday effect in the data: if it were not corrected for in the observation function, this effect would lead to correlated residuals. The presented modeling approach relies heavily on the time-dependent infection rate β(t). We assume the dynamic processes to be continuously differentiable, which leads to a smoothing of possible steps in the real infection rate that might occur due to rapid policy changes. Moreover, the temporal change of β(t) incorporates many different mechanisms, which include but are not limited to: vaccinations, NPIs, changes in compliance with NPIs, viral mutations, seasonality and testing frequency. For an assumed constant vaccination rate, we saw that our approach delivers the same results when omitting the explicit vaccination state, since β(t) is flexible enough to compensate the vaccination effect. The time dependence of β leads to an oscillation of the reproduction number R(t). This is in line with several publications [11, 28, 29] reporting similar behavior of the reproduction number. In general, it is a priori unclear how much flexibility this function should have. In the presented procedure, this corresponds to the number of knots employed in the spline. The spline's freedom should allow for a good fit of the dynamics but also prevent overfitting. Furthermore, the dynamics of the prediction are primarily determined by the value of R(t) at the latest data point. Hence, this value should not be estimated from too few data points, meaning that the last spline knot should not be too close to the end of the time series. Any prediction model used for forecasting should not exceed a certain time period, as the future infection rate is hard to determine. Even for a short prediction time span, it is unclear how recent political measures and the population's resulting behavior will alter the future infection rate. Therefore, we assume β(t) to be constant from the last data points onward.
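A minimal sketch of a spline-parameterized infection rate inside an SEIR-type ODE might look as follows. The knot positions, knot values, and rate constants here are invented for illustration (in a real fit the knot values would be estimated from data), and beta(t) is held constant beyond the last knot, mirroring the forecasting assumption above:

```python
import numpy as np
from scipy.interpolate import CubicSpline
from scipy.integrate import solve_ivp

# Hypothetical spline knots for the time-dependent infection rate beta(t)
knot_t = np.array([0.0, 30.0, 60.0, 90.0, 120.0])
knot_beta = np.array([0.40, 0.20, 0.15, 0.30, 0.25])
beta = CubicSpline(knot_t, knot_beta)

def seir(t, y, gamma=0.1, delta=0.2, N=1e6):
    """SEIR right-hand side with a smooth, time-dependent infection rate."""
    S, E, I, R = y
    # Hold beta constant beyond the last knot, as done for the forecast window
    b = float(beta(min(t, knot_t[-1])))
    dS = -b * S * I / N
    dE = b * S * I / N - delta * E
    dI = delta * E - gamma * I
    dR = gamma * I
    return [dS, dE, dI, dR]

sol = solve_ivp(seir, (0.0, 150.0), [1e6 - 100, 50, 50, 0])
```

Because the spline is continuously differentiable, abrupt policy changes appear as smoothed transitions in the fitted beta(t), exactly the smoothing effect discussed above.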
With additional precise knowledge about the effect of planned or recently made political decisions, or of other effects like weather conditions, this assumption could be further refined. In contrast to other modeling approaches, we do not feed the actual NPIs into the model; instead, we can correlate the estimated time development of the infection rate to NPIs in a second step. Quantifying the NPIs' effect and time lag on R(t) is difficult, as most NPIs are not imposed or lifted independently of each other and estimates will therefore be highly correlated [30]. This means that our modeling ansatz cannot contribute to the quantification of the NPIs' effect on infection numbers. Similarly, age- and time-resolved contact patterns did not enter our modeling ansatz, and we can therefore not infer any quantitative statement regarding these quantities. Our main focus was the prediction of case numbers, and there are (by construction) no reliable estimates of future NPIs and/or contact patterns. Whenever discussing the required amount of flexibility to obtain a good model fit, one should be aware of the bias-variance tradeoff: the more parameters are included to explain a certain time dependence (reducing the bias), the bigger the resulting prediction uncertainty will be (increasing the variance). Similar arguments can be made when discussing the number of utilized spline parameters or whether to account for age structure. More available and consistent data can help here. There are no explicit states in our model to distinguish between recovered and dead people, mainly because there is no reliable data for those quantities over the entire time course: recovered individuals are not tested to confirm they are no longer sick, and deaths were not consistently assessed in real time in Germany. These omissions from the model make quantitative assessments of death rates, (probably time-dependent) risk of death, and recovery rates impossible.
As the goal was to predict the development of case numbers, and these events happen downstream without feedback, these shortcomings are not crucial for us. Furthermore, the unobserved infected and infectious individuals are not represented by an explicit state. This is justified by two aspects: firstly, the used data does not contain information about the duration from the beginning of infectivity to reporting to the local health authority; since an additional state would not help to better describe the used data, it is omitted. Secondly, the factor q introduced in the observation function in the Link between model and observed data section accounts for individuals that are overlooked at all times. When fitting only incidence data, the estimated dark figure from Eq. 5 is, in most regions, compatible with a broad set of values ranging from 0.1 to 1 within the confidence level. This means that anywhere between 10% and 100% of all cases are detected by local authorities, and both edge cases still agree sufficiently with the data. Therefore, the dark figure cannot be estimated solely based on reported incidence cases. For a reliable determination of the dark figure, additional testing in pre-specified cohorts is necessary. We presented a data-driven ODE approach to fit and predict incidences of COVID-19 cases for different subregions of Germany. The key ingredients are 1) likelihood-based estimation and uncertainty quantification and 2) a time-dependent infection rate which is estimated utilizing a cubic spline. All parameters are estimated from data, and uncertainty in the parameter estimates is translated into prediction uncertainty. As many different modeling assumptions affect the outcomes, we average over similarly plausible approaches to account for this source of uncertainty.
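The weak identifiability of the dark figure q discussed above can be illustrated with a toy observation function y = q · (latent incidence); this is an illustrative sketch with an invented exponential latent trajectory, not the paper's Eq. 5. Rescaling the latent dynamics while inversely rescaling q leaves the observations unchanged, so reported incidences alone cannot pin q down:

```python
import numpy as np

t = np.arange(30.0)

def observed_incidence(q, amplitude, growth_rate=0.05):
    """Reported incidence under detection fraction q for a toy
    exponential latent incidence."""
    latent = amplitude * np.exp(growth_rate * t)
    return q * latent

# q = 0.2 with latent amplitude 1000 produces exactly the same
# observations as q = 1.0 with latent amplitude 200
y_low_detection = observed_incidence(0.2, 1000.0)
y_full_detection = observed_incidence(1.0, 200.0)
```

This scale ambiguity is why the likelihood profile of q is flat over a broad range and why external cohort testing is needed to resolve it.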
A major constraint for a feasible analysis strategy is a maximum runtime of 24 hours, as the analysis is repeated daily in an automated manner, including the respective newest data set. In the future, more work on the validation of competing modeling approaches and on the comparison of the various efforts undertaken in the currently highly dynamic field of mathematical modeling of infectious diseases is needed and will certainly be seen.

Data is collected and published by the Robert-Koch-Institute on a daily basis. The datasets used and/or analysed during the current study are available from the corresponding author on reasonable request. Results from the modelling analyses comprise big data sets and require specialized software to interpret: Matlab with Data2Dynamics [31]. The source code used during the current study is available from the corresponding author on reasonable request.

References

1. Malkov E. Simulation of coronavirus disease 2019 (COVID-19) scenarios with possibility of reinfection. Chaos Solitons Fractals. 2020; 139:110296. https://doi.org/10.1016/j.chaos.2020.110296.
2. an der Heiden M, Hamouda O. Erfassung der SARS-CoV-2-Testzahlen in Deutschland - Nowcasting. Epidemiologisches Bull. 2020; 17:10–7.
3. Günther F, Bender A, Katz K, Küchenhoff H, Höhle M. Nowcasting the COVID-19 pandemic in Bavaria. Biom J. 2020; 63(3):490–502. https://doi.org/10.1002/bimj.202000112.
4. Shinde GR, Kalamkar AB, Mahalle PN, Dey N, Chaki J, Hassanien AE. Forecasting Models for Coronavirus Disease (COVID-19): A Survey of the State-of-the-Art. SN Comput Sci. 2020; 1(4):197. https://doi.org/10.1007/s42979-020-00209-9.
5. Kermack WO, McKendrick AG. A contribution to the mathematical theory of epidemics. Proc R Soc A. 1927; 115(772):700–21.
6. Keeling MJ, Rohani P. Modeling Infectious Diseases in Humans and Animals. Princeton: Princeton University Press; 2008, pp. 41–4. https://doi.org/10.1515/9781400841035.
7. Maier BF, Brockmann D. Effective containment explains subexponential growth in recent confirmed COVID-19 cases in China. Science. 2020; 368(6492):742–6. https://doi.org/10.1126/science.abb4557.
8. Dehning J, Zierenberg J, Spitzner FP, Wibral M, Neto JP, Wilczek M, Priesemann V. Inferring change points in the spread of COVID-19 reveals the effectiveness of interventions. Science. 2020; 369(6500). https://doi.org/10.1126/science.abb9789.
9. Linka K, Peirlinck M, Kuhl E. The reproduction number of COVID-19 and its correlation with public health interventions. Comput Mech. 2020; 66(4):1035–50. https://doi.org/10.1007/s00466-020-01880-8.
10. Flaxman S, Mishra S, Gandy A, Unwin HJT, Mellan TA, Coupland H, Whittaker C, Zhu H, Berah T, Eaton JW, Monod M, Ghani AC, Donnelly CA, Riley S, Vollmer MAC, Ferguson NM, Okell LC, Bhatt S. Estimating the effects of non-pharmaceutical interventions on COVID-19 in Europe. Nature. 2020; 584(7820):257–61. https://doi.org/10.1038/s41586-020-2405-7.
11. Dings C, Götz K, Och K, Sihinevich I, Selzer D, Werthner Q, Kovar L, Marok F, Schräpel C, Fuhr L, Türk D, Britz H, Smola S, Volk T, Kreuer S, Rissland J, Lehr T. Mathematische Modellierung und Vorhersage von COVID-19 Fällen, Hospitalisierung (inkl. Intensivstation und Beatmung) und Todesfällen in den deutschen Bundesländern. 2021. https://covid-simulator.com/wp-content/uploads/2021/04/Report_2021_03_31.pdf. Accessed 1 Apr 2021.
12. Mendez-Brito A, El Bcheraoui C, Pozo-Martin F. Systematic review of empirical studies comparing the effectiveness of non-pharmaceutical interventions against COVID-19. J Infect. 2021; 83(3):281–93. https://doi.org/10.1016/j.jinf.2021.06.018.
13. WHO Regional Office for Europe. Pandemic fatigue – reinvigorating the public to prevent COVID-19. Policy framework for supporting pandemic prevention and management. Copenhagen: WHO Regional Office for Europe; 2020. https://apps.who.int/iris/bitstream/handle/10665/335820/WHO-EURO-2020-1160-40906-55390-eng.pdf.
14. Fontal A, Bouma MJ, San-José A, López L, Pascual M, Rodó X. Climatic signatures in the different COVID-19 pandemic waves across both hemispheres. Nat Comput Sci. 2021; 1(10):655–65. https://doi.org/10.1038/s43588-021-00136-6.
15. Ramesh S, Govindarajulu M, Parise RS, Neel L, Shankar T, Patel S, Lowery P, Smith F, Dhanasekaran M, Moore T. Emerging SARS-CoV-2 Variants: A Review of Its Mutations, Its Implications and Vaccine Efficacy. Vaccines. 2021; 9(10):1195. https://doi.org/10.3390/vaccines9101195.
16. Harder T, Külper-Schiek W, Reda S, Treskova-Schwarzbach M, Koch J, Vygen-Bonnet S, Wichmann O. Effectiveness of COVID-19 vaccines against SARS-CoV-2 infection with the Delta (B.1.617.2) variant: second interim results of a living systematic review and meta-analysis, 1 January to 25 August 2021. Euro Surveill Bull Eur Sur Les Mal Transmissibles Eur Commun Dis Bull. 2021;26(41). https://doi.org/10.2807/1560-7917.ES.2021.26.41.2100920.
17. Ali N, Fariha KA, Islam F, Mishu MA, Mohanto NC, Hosen MJ, Hossain K. Exposure to air pollution and COVID-19 severity: A review of current insights, management, and challenges. Integr Environ Assess Manag. 2021; 17(6):1114–22. https://doi.org/10.1002/ieam.4435.
18. Schelker M, Raue A, Timmer J, Kreutz C. Comprehensive estimation of input signals and dynamics in biochemical reaction networks. Bioinformatics. 2012; 28(18):529–34. https://doi.org/10.1093/bioinformatics/bts393.
19. Noll NB, Aksamentov I, Druelle V, Badenhorst A, Ronzani B, Jefferies G, Albert J, Neher RA. COVID-19 Scenarios: an interactive tool to explore the spread and associated morbidity and mortality of SARS-CoV-2. medRxiv. 2020;2020–050520091363. https://doi.org/10.1101/2020.05.05.20091363.
20. Contreras S, Dehning J, Loidolt M, Zierenberg J, Spitzner FP, Urrea-Quintero JH, Mohr SB, Wilczek M, Wibral M, Priesemann V. The challenges of containing SARS-CoV-2 via test-trace-and-isolate. Nat Commun. 2021; 12(1):378. https://doi.org/10.1038/s41467-020-20699-8.
21. Kreisfreie Städte und Landkreise nach Fläche, Bevölkerung und Bevölkerungsdichte am 31.12.2019 - Statistisches Bundesamt. https://www.destatis.de/DE/Themen/Laender-Regionen/Regionales/Gemeindeverzeichnis/Administrativ/04-kreise.html. Accessed 1 Oct 2021.
22. Raue A, Schilling M, Bachmann J, Matteson A, Schelker M, Kaschek D, Hug S, Kreutz C, Harms BD, Theis FJ, Klingmüller U, Timmer J. Lessons Learned from Quantitative Dynamical Modeling in Systems Biology. PLoS ONE. 2013; 8(9):74335. https://doi.org/10.1371/journal.pone.0074335.
23. Kreutz C, Raue A, Kaschek D, Timmer J. Profile likelihood in systems biology. FEBS J. 2013; 280(11):2564–71. https://doi.org/10.1111/febs.12276.
24. Tönsing C, Timmer J, Kreutz C. Profile likelihood-based analyses of infectious disease models. Stat Methods Med Res. 2017;962280217746444. https://doi.org/10.1177/0962280217746444.
25. Steiert B, Raue A, Timmer J, Kreutz C. Experimental Design for Parameter Estimation of Gene Regulatory Networks. PLoS ONE. 2012; 7(7):40052. https://doi.org/10.1371/journal.pone.0040052.
26. Kreutz C, Raue A, Timmer J. Likelihood based observability analysis and confidence intervals for predictions of dynamic models. BMC Syst Biol. 2012; 6(1):120. https://doi.org/10.1186/1752-0509-6-120.
27. King AA, Domenech de Cellès M, Magpantay FMG, Rohani P. Avoidable errors in the modelling of outbreaks of emerging pathogens, with special reference to Ebola. Proc R Soc B Biol Sci. 2015;282(1806). https://doi.org/10.1098/rspb.2015.0347.
28. Khailaie S, Mitra T, Bandyopadhyay A, Schips M, Mascheroni P, Vanella P, Lange B, Binder SC, Meyer-Hermann M. Development of the reproduction number from coronavirus SARS-CoV-2 case data in Germany and implications for political measures. BMC Med. 2021; 19(1):32. https://doi.org/10.1186/s12916-020-01884-4.
29. Abbott S, Hellewell J, Thompson RN, Sherratt K, Gibbs HP, Bosse NI, Munday JD, Meakin S, Doughty EL, Chun JY, Chan Y-WD, Finger F, Campbell P, Endo A, Pearson CAB, Gimma A, Russell T, CMMID COVID modelling group, Flasche S, Kucharski AJ, Eggo RM, Funk S. Estimating the time-varying reproduction number of SARS-CoV-2 using national and subnational case counts. Wellcome Open Res. 2020; 5:112. https://doi.org/10.12688/wellcomeopenres.16006.1.
30. Haug N, Geyrhofer L, Londei A, Dervic E, Desvars-Larrive A, Loreto V, Pinior B, Thurner S, Klimek P. Ranking the effectiveness of worldwide COVID-19 government interventions. Nat Hum Behav. 2020; 4(12):1303–12. https://doi.org/10.1038/s41562-020-01009-0.
31. Raue A, Steiert B, Schelker M, Kreutz C, Maiwald T, Hass H, Vanlier J, Tönsing C, Adlung L, Engesser R, Mader W, Heinemann T, Hasenauer J, Schilling M, Höfer T, Klipp E, Theis F, Klingmüller U, Schöberl B, Timmer J. Data2Dynamics: a modeling environment tailored to parameter estimation in dynamical systems. Bioinformatics. 2015; 31(21):3558–60. https://doi.org/10.1093/bioinformatics/btv405.

Acknowledgements

We thank Matthäus Lottes, Janina Esins and the team of the DIVI Intensivregister at the RKI. We also thank the statisticians of the Robert-Koch-Institute who are responsible for the processing of the raw and routine data. We thank Mario Menk, Steffen Weber-Carstens, Christian Karagiannidis and Uwe Janssens for fruitful discussions during project planning and implementation. Thanks to Rafael Arutjunjan for critically revising the manuscript.

Funding

Open Access funding enabled and organized by Projekt DEAL. The SPoCK project to conduct daily analysis of modeling results was funded by the German Bundesministerium für Gesundheit (BMG). The funding body was not involved in the design of the study, nor in data collection, analysis, and interpretation, nor did it take part in writing the manuscript.

Lukas Refisch and Fabian Lorenz contributed equally to this work.
Author information

Institute of Medical Biometry and Statistics, Faculty of Medicine and Medical Center, University of Freiburg, Stefan Meier Str. 26, Freiburg, 79104, Germany: Lukas Refisch, Fabian Lorenz, Martin Wolkewitz, Harald Binder & Clemens Kreutz
Institute of Physics, University of Freiburg, Hermann-Herder-Str. 3, Freiburg, 79104, Germany: Lukas Refisch
Centre for Integrative Biological Signalling Studies (CIBSS), Schänzlestr. 18, Freiburg, 79104, Germany: Fabian Lorenz & Clemens Kreutz
German Aerospace Center, Earth Observation Center, Münchener Str. 20, Weßling, 82234, Germany: Torsten Riedlinger & Hannes Taubenböck
Institute for Geography and Geology, Julius-Maximilians-Universität Würzburg, Am Hubland, Würzburg, 97074, Germany: Hannes Taubenböck
Robert-Koch-Institute, Department for Methodology and Research Infrastructure, Nordufer 20, Berlin, 13353, Germany: Martina Fischer & Linus Grabenhenrich
Charité - Universitätsmedizin Berlin, Department of Dermatology, Venerology and Allergology, Luisenstraße 2, Berlin, 10117, Germany: Linus Grabenhenrich
Freiburg Center for Data Analysis and Modelling (FDM), University of Freiburg, Ernst-Zermelo-Str. 1, Freiburg, 79104, Germany: Harald Binder & Clemens Kreutz

Contributions: Lead conception of the overall study design: LG, HB; Conceived and designed the analysis: LR, FL, CK; Collected the data: MF, RKI; Contributed data or analysis tools: LR, FL, MW, CK; Performed the analysis: LR, FL, CK; Integration and description of information system components (panDEmis): HT, TR; Wrote the paper: LR, FL, CK. All authors read and approved the final manuscript.

Correspondence to Clemens Kreutz.
Competing interests: All authors have completed the ICMJE uniform disclosure form at https://www.icmje.org/disclosure-of-interest/ and declare: no support from any organization for the submitted work; no financial relationships with any organizations that might have an interest in the submitted work in the previous three years; no other relationships or activities that could appear to have influenced the submitted work.

Supplementary information: We provide a supplement giving more insights into the utilized models and the obtained results.

Open Access: This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated in a credit line to the data.

Cite this article: Refisch, L., Lorenz, F., Riedlinger, T. et al. Data-driven prediction of COVID-19 cases in Germany for decision making. BMC Med Res Methodol 22, 116 (2022). https://doi.org/10.1186/s12874-022-01579-9

Keywords: Infectious disease models; Input estimation; Ordinary differential equations; Nonlinear systems; SEIR models
\begin{document} \title{Sensitivity Analysis of the Maximum Matching Problem} \begin{abstract} We consider the \emph{sensitivity} of algorithms for the maximum matching problem against edge and vertex modifications. When an algorithm $A$ for the maximum matching problem is deterministic, the sensitivity of $A$ on $G$ is defined as $\max_{e \in E(G)}|A(G) \triangle A(G - e)|$, where $G-e$ is the graph obtained from $G$ by removing an edge $e \in E(G)$ and $\triangle$ denotes the symmetric difference. When $A$ is randomized, the sensitivity is defined as $\max_{e \in E(G)}d_{\mathrm{EM}}(A(G),A(G-e))$, where $d_{\mathrm{EM}}(\cdot,\cdot)$ denotes the earth mover's distance between two distributions. Thus the sensitivity measures the difference between the outputs of an algorithm before and after the input is slightly perturbed. Algorithms with low sensitivity, or \emph{stable} algorithms, are desirable because they are robust to edge failure or attack. In this work, we show a randomized $(1-\epsilon)$-approximation algorithm with \emph{worst-case} sensitivity $O_{\epsilon}(1)$, which substantially improves upon the $(1-\epsilon)$-approximation algorithm of Varma and Yoshida (arXiv 2020) that obtains \emph{average} sensitivity $n^{O(1/(1+\epsilon^2))}$, and we show a deterministic $1/2$-approximation algorithm with sensitivity $\exp(O(\log^*n))$ for bounded-degree graphs. We then show that any deterministic constant-factor approximation algorithm must have sensitivity $\Omega(\log^* n)$. Our results imply that randomized algorithms are strictly more powerful than deterministic ones, in that the former can achieve sensitivity independent of $n$ whereas the latter cannot. We also show analogous results for vertex sensitivity, where we remove a vertex instead of an edge. As an application of our results, we give an algorithm for the online maximum matching problem with $O_{\epsilon}(n)$ total replacements in the vertex-arrival model. By comparison, Bernstein~et~al.~(J.
ACM 2019) gave an online algorithm that always outputs the maximum matching, but only for bipartite graphs and with $O(n\log n)$ total replacements. Finally, we introduce the notion of normalized weighted sensitivity, a natural generalization of sensitivity that accounts for the weights of deleted edges. For a graph with weight function $w$, the normalized weighted sensitivity is defined to be the maximum, over altered edges, of the total weight of the edges in the symmetric difference of the algorithm's outputs, normalized by the weight of the altered edge, i.e., $\max_{e \in E(G)}\frac{1}{w(e)}w\left(A(G) \triangle A(G - e)\right)$. Hence the normalized weighted sensitivity measures the weighted difference between the outputs of an algorithm before and after the input is slightly perturbed, normalized by the weight of the perturbation. We show that if all edges in a graph have polynomially bounded weight, then given a trade-off parameter $\alpha>2$, there exists an algorithm that outputs a $\frac{1}{4\alpha}$-approximation to the maximum weighted matching in $O(m\log_{\alpha} n)$ time, with normalized weighted sensitivity $O(1)$. \end{abstract} \thispagestyle{empty} \setcounter{page}{0} \section{Introduction} The problem of finding the maximum matching in a graph is a fundamental problem in graph theory with a wide range of applications in computer science. For example, the maximum matching problem on a bipartite graph $G$ captures a typical scenario in which a number of possible clients want to access content distributed across multiple providers. Each client can download their specific content from a specific subset of the possible providers, but each provider can only connect to a limited number of clients. A maximum matching between clients and providers would ensure that the largest possible number of clients receive their content. However, in many modern applications, the underlying graph $G$ represents some large dataset that is often dynamic or incomplete.
In the above example, the content preference of clients may change, which alters the set of providers that offer their desired content. Connections between specific providers and clients may come online or go offline, effectively adding or removing edges in the underlying graph. Providers and clients may themselves join or leave the network, adding or removing entire vertices from the graph. Thus, it is reasonable to assume that our knowledge of important properties of $G$ may also change or be incomplete. Nevertheless, we must extract information from our current knowledge of $G$, either for pre-processing or to perform tasks on the current infrastructure. At the same time, we would like to maintain as much consistency as possible when updates to $G$ are revealed. To formalize the consistency of algorithms across graph updates, Varma and Yoshida~\cite{VY19:sensitivity} first defined the \emph{average sensitivity} of a deterministic algorithm $A$ to be the expected Hamming distance\footnote{Here we regard the output as a binary string so we can think of the Hamming distance between outputs.} between the outputs of $A$ on $G$ and on $G-e$, where $G-e$ is the graph obtained from $G$ by deleting a uniformly random edge $e \in E(G)$. Then, they defined \emph{average sensitivity} for randomized algorithms as \[\underset{e\sim E(G)}{\mathbb{E}}\left[d_{\mathrm{EM}}(A(G),A(G-e))\right],\] where $d_{\mathrm{EM}}(\cdot,\cdot)$ denotes the earth mover's distance and $G-e$ is the graph obtained from $G$ by deleting an edge $e \in E(G)$. For the maximum matching problem, they showed a randomized $1/2$-approximation algorithm with average sensitivity $O(1)$ and a randomized $(1-\epsilon)$-approximation algorithm with average sensitivity $O(n^{1/(1+\epsilon^2)})$. \paragraph{Worst case sensitivity.} In this work, we continue the study of sensitivity for the maximum matching problem.
Instead of average sensitivity as in~\cite{VY19:sensitivity}, we consider a stronger notion of (worst-case) sensitivity. Specifically, the \emph{sensitivity} of a deterministic algorithm $A$ is the maximum Hamming distance between the output of $A$ on graphs $G$ and $G'$, where $G'$ is the graph formed by deleting an edge of $G$. Then, the \emph{sensitivity} of a randomized algorithm $A$ is \[ \max_{e \in E(G)}d_{\mathrm{EM}}(A(G),A(G-e)). \] Clearly, the sensitivity of an algorithm is no smaller than its average sensitivity. As a natural variant, we also consider \emph{vertex sensitivity}, where we delete a vertex instead of an edge. To avoid confusion, sensitivity with respect to edge deletion will sometimes be called \emph{edge sensitivity}. \subsection{Our Contributions} We first show that, for any $\epsilon>0$, there exists a randomized $(1-\epsilon)$-approximation algorithm whose sensitivity depends solely on $\epsilon$ (Section~\ref{sec:randomized}). \begin{restatable}{theorem}{thmrndmatching}\label{thm:matching} For any $\epsilon > 0$, there exists an algorithm that outputs a $(1-\epsilon)$-approximation to the maximum matching problem with probability at least $0.99$, using time complexity $O((n+m)\cdot K)$ and edge/vertex sensitivity $O(3^K)$, where $K = {(1/\epsilon)}^{2^{O(1/\epsilon)}}$. \end{restatable} This result improves upon the previous $(1-\epsilon)$-approximation algorithm~\cite{VY19:sensitivity} in that (1) the sensitivity is constant instead of $O(n^{1/(1+\epsilon^2)})$ and (2) it bounds worst-case sensitivity instead of average sensitivity. We observe that approximation is necessary to achieve a small sensitivity. For example, consider an $n$-cycle for an even $n$, and let $M_1$ and $M_2$ be the two maximum matchings of size $n/2$ in the graph. Consider a deterministic algorithm that always outputs a maximum matching, say, $M_1$ for the $n$-cycle.
Then, it must output $M_2$ after removing an edge in $M_1$, and hence the sensitivity is $\Omega(n)$. By similar reasoning, we can show a lower bound of $\Omega(n)$ for randomized algorithms. Also, as we show in Section~\ref{subsec:lb-randomized}, the dependency on $\epsilon$ in Theorem~\ref{thm:matching} is necessary. One application of our low-sensitivity maximum matching algorithm is the online maximum matching problem with replacements, where updates to the graph $G$ arrive sequentially as a data stream and, at all times over the stream, the algorithm must output a matching that is a ``good'' approximation to the maximum matching. The number of replacements at each time step is, informally, the number of edges in the output matching that differ from the previous output matching, and the goal is to minimize the total number of replacements across the duration of the algorithm. \begin{restatable}{theorem}{thmmmrecourse}\label{thm:mm:recourse} There exists an online algorithm that outputs a $(1-\epsilon)$-approximation to the online maximum matching problem with probability $0.99$ and has $O_{\epsilon}(n)$ total replacements. \end{restatable} By comparison, Bernstein~et~al.~\cite{BernsteinHR19} gave an online algorithm that always outputs the maximum matching, but has $O(n\log n)$ total replacements and is restricted to bipartite graphs. Thus our algorithm achieves a worse approximation guarantee than the algorithm of~\cite{BernsteinHR19}, but a better total number of replacements, and it applies to general graphs rather than only to bipartite graphs. Next, we show a deterministic algorithm for finding a maximal matching on bounded degree graphs that has low sensitivity (Section~\ref{sec:deterministic}). Note that it has approximation ratio $1/2$ because the size of any maximal matching is a $1/2$-approximation to the maximum matching.
\begin{restatable}{theorem}{thmdetmatching}\label{thm:det:matching} There exists a deterministic algorithm that finds a maximal matching with edge/vertex sensitivity $\Delta^{O\left(6^\Delta+\log^* n\right)}$, where $\Delta$ is the maximum degree of a vertex in the graph. \end{restatable} Then, we show that randomness is necessary to achieve sensitivity independent of $n$ (Section~\ref{sec:lb}): \begin{restatable}{theorem}{thmlbdeterministic}\label{thm:lb-deterministic} Any deterministic constant-factor approximation algorithm for the maximum matching problem has edge sensitivity $\Omega(\log^* n)$. \end{restatable} In addition, we show in Section~\ref{subsec:lb-deterministic-greedy} that we cannot obtain sublinear sensitivity just by derandomizing the randomized greedy algorithm. Theorems~\ref{thm:matching} and~\ref{thm:lb-deterministic} imply that randomized algorithms are strictly more powerful than deterministic ones for the maximum matching problem, in that the former can achieve sensitivity independent of $n$ whereas the latter cannot. We then introduce the idea of \emph{weighted sensitivity}, which is a natural generalization of sensitivity for both the average and worst cases. For the problems that we consider, the sensitivity of a deterministic graph algorithm is the number of edges in the output that change when a single vertex/edge of the input is altered. Thus for a weighted graph, the weighted sensitivity is the total weight of the edges that are changed in the output, following the deletion of a vertex/edge. For randomized algorithms, the definition extends naturally to the earth mover's distance between the distributions with the corresponding weighted loss function. Finally, we can also normalize by the weight of the edge that is deleted.
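For deterministic algorithms, the normalized weighted sensitivity just described is straightforward to evaluate by brute force on small instances. A Python sketch; the greedy-by-weight matching used as the algorithm here is our illustrative choice, not an algorithm from this paper.

```python
def greedy_by_weight(wedges):
    """Toy deterministic algorithm: greedy maximal matching scanning
    weighted edges (u, v, w) by decreasing weight."""
    matching, used = set(), set()
    for u, v, w in sorted(wedges, key=lambda t: -t[2]):
        if u not in used and v not in used:
            matching.add((u, v))
            used.update((u, v))
    return matching

def normalized_weighted_sensitivity(algo, wedges):
    """Worst case, over deleted edges e, of the total weight of the
    symmetric difference of outputs, divided by w(e)."""
    weight = {(u, v): w for u, v, w in wedges}
    base = algo(wedges)
    worst = 0.0
    for u, v, w in wedges:
        out = algo([t for t in wedges if (t[0], t[1]) != (u, v)])
        worst = max(worst, sum(weight[e] for e in base ^ out) / w)
    return worst

wedges = [(1, 2, 5.0), (1, 3, 1.0), (2, 3, 1.0)]  # a weighted triangle
```

On this triangle the only consequential deletion is the heavy edge $(1,2)$: the output changes from $\{(1,2)\}$ to $\{(1,3)\}$, a symmetric difference of weight $6$ normalized by the deleted weight $5$.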
The motivation for studying weighted sensitivity is natural; in many applications with evolving data, the notion of sensitivity arises in the context of recourse, a quantity that measures the change in the underlying topology of the optimal solution. For example, in the facility location problem, the goal is to construct a set of facilities to minimize the sum of the costs of construction and service to a set of consumers. As the information about the set of consumers evolves, it would be ideal to minimize the number of facility relocations, due to the construction costs; this quantity is measured by the sensitivity of the algorithm. However, as construction costs may not be uniform, a more appropriate quantity to minimize would be the total cost of relocating the facilities, which is measured by the weighted sensitivity. Similarly, matchings are often used to maximize flow across a bipartite graph, but the physical structures that support the flow may incur varying costs to construct or demolish, corresponding to the amount of flow that the structures support. In this case, we note that it may not be possible for the worst-case weighted sensitivity to be small. For example, if a single edge has weight $n^C$ for some large constant $C$ and the remaining edges have weight $1$, any constant factor approximation to the maximum weighted matching must include the heavy edge. But if the heavy edge is then removed from the graph, the weighted sensitivity of any constant factor approximation algorithm is $\Omega(n^C)$. This issue is circumvented by the normalized weighted sensitivity, which scales the sensitivity by the weight of the deleted edge. We give approximation algorithms for maximum weighted matching with low normalized weighted worst-case sensitivity. \begin{theorem} Let $G=(V,E)$ be a weighted graph with $\frac{1}{n^c}\le w(e)\le n^c$ for some constant $c>0$ and all $e\in E$.
For a trade-off parameter $\alpha>2$, there exists an algorithm that outputs a $\frac{1}{4\alpha}$-approximation to the maximum weighted matching in $O(m\log_{\alpha} n)$ time and has normalized weighted sensitivity $O(1)$. \end{theorem} Our results also extend to $\alpha=2$ and general worst-case weighted sensitivity, i.e., weighted sensitivity that is not normalized. We detail these algorithms in Section~\ref{sec:weighted}. \subsection{Proof Sketch} We explain the idea behind the algorithm of Theorem~\ref{thm:matching}. For simplicity, we focus on edge sensitivity. We note that if we only sought a $1/2$-approximation to the maximum matching, then it would suffice to find any maximal matching. Although the well-known greedy algorithm produces a maximal matching, the output of the algorithm is highly sensitive to the ordering of the edges in the input. One may hope that, if we choose an ordering of the edges uniformly at random, then the resulting output will be stable against edge deletions to the underlying graph. This is not immediately obvious because, in expectation, the deleted edge appears about halfway through the ordering (of the edges in the original graph), so it could conceivably affect the roughly half of the edges that come after it. Luckily, we show that the edges at the beginning of the ordering are significantly more important, so that even if the deleted edge appears about halfway through the ordering, the sensitivity of the maximal matching is $O(1)$ (Section~\ref{sec:greedy}). Our analysis is similar to that of~\cite{censor2016optimal}, which shows that the vertices at the beginning of an ordering are significantly more important in maintaining a maximal independent set in the dynamic distributed model. Adapting this idea to a $(1-\epsilon)$-approximation is more challenging.
The natural approach is to take a maximal matching and repeatedly find a large number of augmenting paths, but the change of even a single edge in a maximal matching can potentially impact a large number of edges if the augmenting paths are found in a sequential manner. We instead adapt the layered graph of~\cite{mcgregor2005finding}, which is used to randomly find a large number of augmenting paths in a small number of passes in the streaming model. Crucially, we instead find a large number of disjoint augmenting paths in a small number of parallel rounds, which results in low sensitivity. Now we turn to explaining the idea behind Theorem~\ref{thm:det:matching}. Again we focus on edge sensitivity. Our algorithm first uses a deterministic local computation algorithm (LCA) of~\cite{ColeV86} for $6^{\Delta}$-coloring a graph $G$ with maximum degree $\Delta$, using $O(\Delta\log^*n)$ probes to an adjacency list oracle. Here we want to design an algorithm that answers queries about the colors of vertices by making a series of probes to the oracle. The answers of the algorithm must be consistent, so that there exists at least one proper coloring that is consistent with the answers. In our case, each probe to the oracle is a query $(v,i)$ with $v\in V$ and a positive integer $i$. If the degree of $v$ is at least $i$, the oracle responds to the probe with the $i$-th neighbor of $v$. Otherwise, the oracle outputs a special symbol $\bot$. In particular, the deterministic $6^{\Delta}$-coloring LCA only probes vertices that are within a ``small'' neighborhood of the query. Given a coloring for $G$, we then give a local distributed algorithm that takes a coloring of a graph and outputs a maximal matching. It follows from a framework of~\cite{ParnasR07} that our local distributed algorithm can actually be simulated by a deterministic LCA that again only probes a ``small'' neighborhood of the query.
Thus to bound the sensitivity of the algorithm, we bound the number of queries for which a deleted edge would be probed. Since only a small number of queries probe the deleted edge, the output of the algorithm has only a small number of changes, and thus low worst-case sensitivity. Our lower bound of Theorem~\ref{thm:lb-deterministic} considers the set of length-$t$ cycles on a graph with $n$ vertices. Any matching on a length-$t$ cycle can be represented as a series of indicator variables denoting whether edge $i\in[t]$ is in the matching. We can then interpret the indicator variables as an integer encoding from $0$ to $2^t-1$ through the natural binary representation. A Ramsey-theoretic argument shows that for $t=O(\log^* n)$, there exists a set $S$ of $t+1$ of the $n$ nodes such that any subset of $t$ nodes has the same encoding. We then choose $G$ and $G'$ to be the cycle graphs consisting of the first $t$ nodes of $S$ and the last $t$ nodes of $S$, respectively. Since the encodings of the matchings of $G$ and $G'$ are the same, but the edge indices are shifted by one, it follows that $\Omega(t)$ edges must be in the symmetric difference between the outputs on $G$ and $G'$, which, for $t=O(\log^* n)$, implies that the worst-case sensitivity of the algorithm must be $\Omega(\log^* n)$. \subsection{Related Work} Varma and Yoshida~\cite{VY19:sensitivity} introduced the notion of sensitivity and performed a systematic study of average sensitivity on many graph problems. Namely, they gave efficient approximation algorithms with low average sensitivities for the minimum spanning forest problem, the global minimum cut problem, the minimum $s$-$t$ cut problem, and the maximum matching problem. They also introduced a low-sensitivity algorithm for linear programming, and proved many fundamental properties of average sensitivity, such as sequential or parallel composition.
Peng and Yoshida~\cite{PengY20} gave an algorithm for the problem of spectral clustering with average sensitivity $\frac{\lambda_2}{\lambda_3^2}$, where $\lambda_i$ is the $i$-th smallest eigenvalue of the normalized Laplacian, which is small when there are exactly two clusters in the graph. The effects of graph updates have also been studied extensively in the dynamic/online model, where updates to the graph arrive in a stream, and the goal is to maintain some data structure to answer queries on the underlying graph so that both the update time and query time are efficient. Consequently, most of the literature for dynamic algorithms focuses on optimizing these quantities, rather than the changes in the output as the data evolves. Sensitivity analysis is more relevant when the goal in the dynamic/online model is to minimize the number of changes between successive outputs of the algorithm over the stream. Lattanzi and Vassilvitskii~\cite{LattanziV17} studied the problem of consistent $k$-clustering, where the goal is to maintain a constant-factor approximation to some underlying $k$-clustering problem, such as $k$-center, $k$-median, or $k$-means, while minimizing the total number of changes to the set of centers as the stream evolves. In this setting, each change to the set of centers is known as a \emph{recourse}. Whereas the model of~\cite{LattanziV17} allows only insertions of new points, algorithms with low sensitivity are robust against both insertions and deletions. Cohen-Addad~et~al.~\cite{Cohen-AddadHPSS19} further considered the facility location problem in this model of maintaining a constant-factor approximation while minimizing the total recourse. Although the algorithm of~\cite{Cohen-AddadHPSS19} addresses both the insertions and deletions of points, their total recourse across the stream is $O(n)$, where $n$ is the length of the stream; this is inherent to the difficulty of their problem in the model.
Whereas their work already provides an amortized $O(1)$ recourse per update, we also study the worst-case sensitivity in our work. Consistency for maximum matching has also been thoroughly studied in the form of the online matching problem with replacements. The problem was introduced by Grove~et~al.~\cite{GroveKKV95} for bipartite graphs, who gave matching upper and lower bounds of $\Theta(n\log n)$ total replacements when all vertices on one side of the partition have degree two. Chaudhuri~et~al.~\cite{ChaudhuriDKL09} showed that the greedy algorithm that repeatedly adds the shortest augmenting path from the newest arrived vertex has $\Theta(n\log n)$ total replacements in expectation for any arbitrary underlying bipartite graph, provided that the vertices on one side of the partition arrive in a random order. They also gave an algorithm with $O(n\log n)$ total replacements for acyclic bipartite graphs, as well as a tight asymptotic lower bound. For general bipartite graphs, Bosek~et~al.~\cite{BosekLSZ14} showed an algorithm with $O(n\sqrt{n})$ total replacements, using total time $O(m\sqrt{n})$, matching the best offline maximum matching algorithm for static bipartite graphs. Recently, Bernstein~et~al.~\cite{BernsteinHR19} gave an algorithm for online maximum bipartite matching with $O(n\log^2 n)$ total replacements, substantially progressing toward the strongest known lower bound, which is $\Omega(n\log n)$~\cite{GroveKKV95}. \subsection{Preliminaries} For a positive integer $n$, let $[n]$ denote the set $\{1,2,\ldots,n\}$. For a positive integer $n$ and $p \in [0,1]$, let $\mathcal{B}(n,p)$ be the binomial distribution with $n$ trials and success probability $p$. We use the notation $O_\epsilon(\cdot)$ to omit dependencies on $\epsilon$. Let $G=(V,E)$ be a graph. For an edge $e \in E$, let $N_G(e)$ be the ``neighboring'' edges of $e$ in $G$, that is, $N_G(e) = \{e' \in E \mid e' \neq e, |e' \cap e| \geq 1 \}$. We omit the subscript if it is clear from the context.
For two (vertex or edge) sets $S$ and $S'$, let $d_{\mathrm{H}}(S,S') = |S \triangle S'|$, where $\triangle$ denotes the symmetric difference. Abusing notation, for sets of paths $\mathcal{P}$ and $\mathcal{P}'$, we write $d_{\mathrm{H}}(\mathcal{P},\mathcal{P}')$ to denote $d_{\mathrm{H}}(\cup_{P \in \mathcal{P}} V(P), \cup_{P \in \mathcal{P}'}V(P))$. For two random sets $X$ and $X'$, let $d_{\mathrm{EM}}(X,X')$ be the earth mover's distance between $X$ and $X'$, where the distance between two sets is measured by $d_{\mathrm{H}}$, that is, \[ d_{\mathrm{EM}}(X,X') = \min_{\mathcal{D}} \mathop{\mathbf{E}}_{(S,S') \sim \mathcal{D}}d_{\mathrm{H}}(S,S'), \] where $\mathcal{D}$ is a distribution such that its marginals on the first and second coordinates are $X$ and $X'$, respectively. For a real-valued function $\beta$ on graphs, we say that the \emph{sensitivity} of a (randomized) algorithm $A$ that outputs a set of edges is at most $\beta$ if \[ d_{\mathrm{EM}}(A(G),A(G-e)) \leq \beta(G) \] holds for every $e \in E(G)$. Given a matching $M$ in a graph $G = (V, E)$, we call a vertex \emph{free} if it does not appear as the endpoint of any edge in $M$. A path $(v_1, v_2 ,\ldots,v_{2\ell+2})$ of length $2\ell+1$ is an \emph{augmenting path} if $v_1$ and $v_{2\ell+2}$ are free vertices and $(v_i,v_{i+1}) \in M$ for even $i$ and $(v_i, v_{i+1}) \in E \setminus M$ for odd $i$. \section{Randomized \texorpdfstring{$(1-\epsilon)$}{(1-epsilon)}-Approximation}\label{sec:randomized} In this section, we prove Theorem~\ref{thm:matching}. Our algorithm, which we describe in Section~\ref{subsec:algorithm-description}, is a slight modification of the multi-pass streaming algorithm due to McGregor~\cite{mcgregor2005finding}. We discuss its approximation guarantee and sensitivity in Sections~\ref{subsec:randomized-approximation-ratio} and~\ref{subsec:randomized-sensitivity}, respectively.
Finally, we discuss applications to online matching with replacements in Section~\ref{subsec:online-matching}. \subsection{Algorithm Description}\label{subsec:algorithm-description} A key step of McGregor's algorithm is to find a large set of augmenting paths of a specified length in a batch manner using the \emph{layered graph}, given below. Given a graph $G=(V,E)$, a matching $M \subseteq E$, and a positive integer $\ell$, the layered graph $H = H(G)$ consists of $\ell+2$ layers $L_0,L_1,\ldots,L_{\ell+1}$, where $L_0 = L_{\ell+1} = V$ and $L_1 = L_2 = \cdots = L_\ell = V \times V$. For each vertex $v \in V$, we sample $i_v \in \{0,\ell+1\}$ uniformly at random, independently of the others. We say that the copy of $v$ in the $L_{i_v}$-th layer is \emph{active} and that the other copy is \emph{inactive}. For each edge $\{u,v\}\in M$, with probability $1/2$, we sample a value $i_{(u,v)} \in \{1,\ldots,\ell\}$ uniformly at random and set $i_{(v,u)} = \bot$, where $\bot$ is a special symbol, and with the remaining probability $1/2$, we sample a value $i_{(v,u)} \in \{1,\ldots,\ell\}$ uniformly at random and set $i_{(u,v)} = \bot$. For each edge $\{u,v\} \in E \setminus M$, we set $i_{(u,v)} = i_{(v,u)} = \bot$. We say that the copy of $(u,v)$ in the $L_{i_{(u,v)}}$-th layer is \emph{active} if $i_{(u,v)} \neq \bot$ and is \emph{inactive} otherwise. Intuitively, some orientation of each edge $\{u,v\}$ in the matching $M$ is assigned to a random internal layer in $H$, and edges of $G$ that are not in the matching are not initially assigned to any layer in $H$. For $i = 0,\ldots,\ell+1$, we denote by $\tilde{L}_i$ the set of active vertices in $L_i$. Let $L = \bigcup_{i=0}^{\ell+1}L_i$ be the vertex set of $H$, and let $\tilde{L} = \bigcup_{i=0}^{\ell+1}\tilde{L}_i$ be the set of active vertices in $H$. The edges in the layered graph $H$ are those between active vertices that can be a part of an augmenting path in $G$.
More specifically, \begin{itemize} \itemsep=0pt \item We add an edge between $t \in \tilde{L}_0$ and $(u,v) \in \tilde{L}_1$ if $t$ is free in $M$ and $t$ is adjacent to $v$. \item We add an edge between $(u,v) \in \tilde{L}_\ell$ and $s \in \tilde{L}_{\ell+1}$ if $s$ is free in $M$ and $s$ is adjacent to $u$. \item We add an edge between $(u,v) \in \tilde{L}_i$ and $(u',v') \in \tilde{L}_{i+1}$ for $i \in [\ell-1]$ if $v$ is adjacent to $u'$. \end{itemize} Note that inactive vertices are isolated in $H$. \begin{figure*} \caption{Example of an active layered graph with respect to a matching $M$, in solid lines. The free vertex $5$ appears in $L_0$ and the free vertex $6$ appears in $L_3$. The augmenting path found by the layered graph is represented by a dashed purple line.} \label{fig:layered} \end{figure*} We introduce the following definition to handle augmenting paths for a matching $M$ in a graph $G$ via paths in its corresponding layered graph. \begin{definition} We say that a path $v_i,v_{i-1},\ldots,v_0$ with $v_j \in \tilde{L}_j\;(j \in \{0,1,\ldots,i\})$ is an \emph{$i$-path}. Note that an $(\ell + 1)$-path in $H$ corresponds to an augmenting path of length $2\ell + 1$ in $G$. \end{definition} The layered graph defined above is slightly different from the original one due to McGregor~\cite{mcgregor2005finding} in that he did not include inactive vertices in $H$, as they are irrelevant for finding augmenting paths. However, as we consider the sensitivity of algorithms, it is convenient to fix the vertex set so that it is independent of the current matching $M$. We briefly define the randomized greedy subroutine $\textsc{RandomizedGreedy}$ on a graph $G=(V,E)$ as follows. The subroutine first chooses a random ordering $\pi$ over the edges and then, starting with an empty matching $M$, iteratively adds the $i$-th edge in the ordering $\pi$ to $M$ if the edge is not adjacent to any edge in $M$, until it has processed all edges.
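This random-order greedy step admits a direct rendering in Python; the following is only our toy sketch of the subroutine, not the implementation analyzed later.

```python
import random

def randomized_greedy(edges, rng=random):
    """Pick a uniformly random ordering of the edges, then greedily add
    each edge whose endpoints are both still unmatched."""
    order = list(edges)
    rng.shuffle(order)
    matching, used = [], set()
    for u, v in order:
        if u not in used and v not in used:
            matching.append((u, v))
            used.update((u, v))
    return matching
```

Whatever ordering is drawn, the output is always a maximal matching: no edge can be added because each remaining edge shares an endpoint with a matched edge.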
See Algorithm~\ref{alg:rand:greedy} for the full details. \begin{algorithm}[htb!] \caption{Randomized Greedy Algorithm}\label{alg:rand:greedy} \Procedure{\emph{\Call{RandomizedGreedy}{Graph $G=(V,E)$}}}{ Generate a permutation $\pi$ of $E$ uniformly at random\; Greedily add edges to a maximal matching $M$, in the order of $\pi$\; Output $M$\; } \end{algorithm} Algorithm~\ref{alg:augmenting-paths} shows our algorithm for finding a large set of augmenting paths of length $2\ell+1$ given a matching $M$ in a graph $G$. For a matching $M$ and a vertex $v$ belonging to an edge $e$ in $M$, let $\Gamma_M(v)$ denote the other endpoint of $e$. Similarly, for a vertex set $S$ such that each edge in $M$ uses at most one vertex in $S$, let $\Gamma_M(S)$ denote the set of other endpoints. For subsets $L$ and $R$ of adjacent layers in $H$, let $\Call{RandomizedGreedy}{L,R}$ denote the randomized greedy on the induced bipartite graph $H[L \cup R]$. \Call{FindPaths}{} tries to find a large set of vertex-disjoint $i$-paths from $S \subseteq L_i$ to $L_0$. The result is stored as a tag function $t: V(H) \to V(H) \cup \{\mathsf{untagged}, \mathsf{dead\ end}\}$. Here, $t(v)$ is initialized to $\mathsf{untagged}$, and it will represent the next vertex in the $i$-path found. If we could not find any $i$-path starting from $v$, then $t(v)$ is set to $\mathsf{dead\ end}$. The difference from McGregor's algorithm is that we run the loop in \Call{FindPaths}{} $1/\delta$ times instead of running it until $|M'| \leq \delta |M|$. This ensures that we compute a maximal matching the same number of times no matter what $G$ and $M$ are, which is more convenient when analyzing the sensitivity. Our algorithm for the maximum matching problem (Algorithm~\ref{alg:randomized-matching}) simply runs \Call{AugmentingPaths}{} sufficiently many times for various choices of $\ell$ and then keeps applying the obtained augmenting paths.
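Applying an augmenting path $P$ to a matching $M$ means replacing $M$ by the symmetric difference $M \oplus P$, which flips matched and unmatched edges along $P$ and increases $|M|$ by one. A minimal Python sketch of this update step (our illustration, with undirected edges stored as sorted tuples):

```python
def path_edges(path):
    # Edges of a path given as a vertex sequence, as sorted tuples.
    return {tuple(sorted(e)) for e in zip(path, path[1:])}

def apply_augmenting_path(matching, path):
    """Symmetric difference M ^ P: matched edges on the path leave M,
    unmatched ones enter, growing the matching by one edge."""
    return matching ^ path_edges(path)
```

For instance, applying the length-$3$ augmenting path $(1,2,3,4)$ to the matching $\{(2,3)\}$ yields $\{(1,2),(3,4)\}$.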
Before analyzing its approximation ratio and sensitivity, we analyze its running time. \begin{algorithm}[t!] \caption{Augmentation Algorithm}\label{alg:augmenting-paths} \Procedure{\emph{\Call{AugmentingPaths}{$G,M,\ell,\delta$}}}{ Construct a layered graph $H$ using $G$, $M$, and $\ell$\; $t(v) \leftarrow \mathsf{untagged}$ for every vertex $v$ in $H$\; $\Call{FindPaths}{H, L_{\ell+1}, \ell, \delta, t}$\; Convert $t$ to a set of augmenting paths in $G$\; Apply the augmenting paths to $M$. } \Procedure{\emph{\Call{FindPaths}{$H,S,i,\delta, t$}}}{ $M' \leftarrow \Call{RandomizedGreedy}{S, L_{i-1} \cap t^{-1}(\mathsf{untagged})}$\; $S' \leftarrow \Gamma_{M'}(S)$\; \If{$i=1$}{ \For{$u \in S$}{ \If{$u \in \Gamma_{M'}(L_0)$}{$t(u) \leftarrow \Gamma_{M'}(u)$} \If{$u \in S \setminus \Gamma_{M'}(L_0)$}{$t(u) \leftarrow \mathsf{dead\; end}$.} } \Return $t$. } \For{$\lceil 1/\delta \rceil$ times}{ \Call{FindPaths}{$H,S',i-1,\delta^2,t$}\; \For{$v \in S' \setminus t^{-1}(\mathsf{dead\;end})$}{ $t(\Gamma_{M'}(v)) \leftarrow v$. } $M' \leftarrow \Call{RandomizedGreedy}{S \cap t^{-1}(\mathsf{untagged}), L_{i-1} \cap t^{-1}(\mathsf{untagged}) }$\; $S' \leftarrow \Gamma_{M'}(S \cap t^{-1}(\mathsf{untagged}))$. } \For{$v \in S \cap t^{-1}(\mathsf{untagged})$}{ $t(v) \leftarrow \mathsf{dead\; end}$. } \Return } \end{algorithm} \begin{algorithm}[t!] \caption{Algorithm for maximum matching}\label{alg:randomized-matching} \Procedure{\emph{\Call{Matching}{$G,\epsilon$}}}{ Let $\pi$ be a random ordering of the edges of $G$\; Let $M$ be the greedy maximal matching of $G$ induced by $\pi$\label{line:randomized-matching-0}\; $k \gets\lceil \epsilon^{-1}+1\rceil$\; $r\gets 4k^2(16k+20)(k-1){(2k)}^k$\; \For{$\ell=1$ to $k$}{ \For{$i=1$ to $r$}{ $M'_{\ell,i} \leftarrow \Call{AugmentingPaths}{G,M,\ell,\frac{1}{r(2k+2)}}$\; $M \leftarrow M \oplus M'_{\ell,i}$.\label{line:randomized-matching-update} } } \Return $M$. 
} \end{algorithm} \begin{lemma}\label{lem:time-complexity} The total running time of Algorithm~\ref{alg:randomized-matching} is $O((n+m)\cdot K)$, where $K={(1/\epsilon)}^{2^{O(1/\epsilon)}}$. \end{lemma} \begin{proof} Observe that the outer loop of Algorithm~\ref{alg:randomized-matching} runs for $k$ iterations and the inner loop runs for $r$ iterations, where $k=\lceil \epsilon^{-1}+1\rceil$ and $r=4k^2(16k+20)(k-1){(2k)}^k$. Each inner iteration runs an instance of $\textsc{AugmentingPaths}$ with parameters $\ell\le k$ and $\delta=\frac{1}{r(2k+2)}$, which creates a layered graph with $O(\ell)$ layers in $O((n+m)k)$ time, and then calls $\textsc{FindPaths}$. Each time $\textsc{FindPaths}$ is called recursively, the value of $\delta$ is squared and the value of $\ell$ is decremented, starting at $\ell=k$ until $\ell=1$. Thus, the loop in $\textsc{FindPaths}$ is run at most $\frac{1}{\delta}=O\left({(r(2k+2))}^{2^k}\right)$ times and each loop uses time $O(n+m)$. Hence, the total runtime is $O((n+m)\cdot K)$, where $K={(1/\epsilon)}^{2^{O(1/\epsilon)}}$. \end{proof} \subsection{Approximation Ratio}\label{subsec:randomized-approximation-ratio} In this section, we analyze the approximation ratio of Algorithm~\ref{alg:randomized-matching}. \begin{lemma}\cite{mcgregor2005finding} Suppose \Call{FindPaths}{$\cdot,\cdot,j,\cdot$} is called $\delta^{-2^{i-j+1}+1}$ times in the recursion for \Call{FindPaths}{$H,S,i,\delta$}. Then at most $2\delta|L_i|$ paths are removed from consideration as being $(i+1)$-paths. \end{lemma} Let $\mathcal{L}_i$ be the set of graphs whose vertices are partitioned into $i+2$ layers, $L_0,\ldots,L_{i+1}$ and whose edges are a subset of $\bigcup_{j=1}^{i+1}(L_{j} \times L_{j-1})$.
Then we immediately have the following lemma, analogous to Lemma 2 in~\cite{mcgregor2005finding}: \begin{lemma} For a graph $G\in\mathcal{L}_i$, \Call{FindPaths}{$H,S,i,\delta$} finds at least $(\gamma-\delta)|M|$ of the $(i+1)$-paths among some maximal set of $(i+1)$-paths of size $\gamma|M|$. \end{lemma} We require the following structural property relating maximal and maximum matchings through the set of connected components in the symmetric difference. \begin{lemma}[Lemma 1 in~\cite{mcgregor2005finding}]\label{lem:alpha:bound} Let $M$ be a maximal matching and $M^*$ be a maximum matching. Let $\mathcal{C}$ be the set of connected components in $M^*\triangle M$. Let $\alpha_{\ell}$ be the constant so that $\alpha_\ell|M|$ is the number of connected components in $\mathcal{C}$ with $\ell$ edges from $M$, excluding those with $\ell$ edges from $M^*$. If $\max_{\ell \in[k]}\alpha_\ell \le\frac{1}{2k^2(k+1)}$, then $|M|\ge\frac{|M^*|}{1+1/k}$. \end{lemma} We also require the following result by \cite{mcgregor2005finding} bounding the number of augmenting paths found by \textsc{AugmentingPaths}. \begin{lemma}[Theorem 1 in~\cite{mcgregor2005finding}]\label{lem:round:augment} If $G$ has $\alpha_\ell|M|$ augmenting paths of length $2\ell+1$, then the number of augmenting paths of length $2\ell+1$ found by \Call{AugmentingPaths}{} is at least $(b_\ell\beta_\ell-\delta)|M|$, where $b_\ell=\frac{1}{2\ell+1}$ and $\beta_\ell\sim \mathcal{B}\left(\alpha_\ell|M|,\frac{1}{2{(2\ell)}^\ell}\right)$. \end{lemma} \noindent We now show that Algorithm~\ref{alg:randomized-matching} outputs a $(1-\epsilon)$-approximation to the maximum matching. \begin{theorem}\label{thm:randomized-approximation} Algorithm~\ref{alg:randomized-matching} finds a $(1-\epsilon)$-approximation to the maximum matching with probability at least $0.99$. 
\end{theorem} \begin{proof} We say the algorithm enters \emph{phase} $\ell$ when the number of layers in the layered graph has been incremented to $\ell$, i.e., each invocation of the outer for loop corresponds to a separate phase. We say the algorithm enters \emph{round} $i$ in phase $\ell$ after the subroutine \Call{AugmentingPaths}{} has completed $i-1$ iterations within phase $\ell$. Let $M_{\ell,i}$ be the matching $M$ prior to the call to \Call{AugmentingPaths}{} in round $i$ of phase $\ell$. Let $\alpha_{\ell,i}|M_{\ell,i}|$ be the number of length $2\ell+1$ augmenting paths of $M_{\ell,i}$. Thus by Lemma~\ref{lem:round:augment}, the subroutine \Call{AugmentingPaths}{} augments $M_{\ell,i}$ by at least $(b_{\ell}\beta_{\ell,i}-\delta)|M_{\ell,i}|$ edges in round $i$ of phase $\ell$, where $b_{\ell}=\frac{1}{2\ell+2}$, $\delta$ is a parameter that we choose later, and $\beta_{\ell,i}$ is a random variable distributed according to $\mathcal{B}\left(\alpha_{\ell,i}|M_{\ell,i}|,\frac{1}{2{(2\ell)}^\ell}\right)$. Let $M$ be the output matching. Then by Bernoulli's inequality, we have \begin{align*} \Pr\left[|M| \ge2|M_{1,1}|\right] & \geq \Pr\left[|M_{1,1}|\prod_{\ell\in[k],i\in[r]}\left(1+\max\left(0,b_{\ell}\beta_{\ell,i}-\delta\right)\right)\ge2|M_{1,1}|\right] \\ & \ge \Pr\left[\sum_{\ell\in[k]}\max_{i\in[r]}b_{\ell}\beta_{\ell,i}\ge2+r\delta\right]. \end{align*} We would like to analyze $\sum_{\ell\in[k]}\max_{i\in[r]}b_{\ell}\beta_{\ell,i}$, but the analysis is challenging due to dependencies between multiple rounds and phases. We thus define independent variables $X_1,\ldots,X_k$ and use a coupling argument. We define $A_{\ell}=\max_{i \in [r]}b_{\ell}\beta_{\ell,i}|M_{\ell,i}|$ to be an upper bound on the maximum number of augmented edges during phase $\ell$ of the algorithm. Suppose by way of contradiction that $\max_{i \in [r]}\alpha_{\ell,i}<\alpha_0:=\frac{1}{2k^2(k-1)}$ for each of the phases $1\le\ell\le k$. 
We have $A_{\ell}=\max_{i \in [r]}b_{\ell}\beta_{\ell,i}|M_{\ell,i}|$, $b_{\ell}=\frac{1}{2\ell+2}$, and $\beta_{\ell,i}|M_{\ell,i}|\sim \mathcal{B}\left(\alpha_{\ell,i}|M_{\ell,i}|,\frac{1}{2{(2\ell)}^\ell}\right)$. Now for $\ell\le k$, we have that $\frac{1}{2{(2\ell)}^\ell}\ge\frac{1}{2{(2k)}^k}$. Thus $\max_{i \in [r]}\alpha_{\ell,i}\ge\alpha_0$ and $|M_{\ell,i}|\ge|M_{\ell,1}|$ implies that the distribution of $A_{\ell}$ statistically dominates the distribution of $b_{\ell}\cdot\mathcal{B}\left(\alpha_{0}|M_{\ell,1}|,\frac{1}{2{(2k)}^k}\right)$. Hence if we define $X_{\ell}$ to be independent random variables distributed as $\mathcal{B}\left(\alpha_{0}|M_{\ell,1}|,\frac{1}{2{(2k)}^k}\right)$ for each $\ell\in[k]$, then the distribution of $A_{\ell}$ statistically dominates the distribution of $b_{\ell}\cdot X_{\ell}$. Thus, \begin{align*} \Pr\left[\sum_{\ell\in[k]}\max_{i\in[r]}b_{\ell}\beta_{\ell,i}\ge2+r\delta\right]&\ge\Pr\left[\sum_{\ell\in[k]}\max_{i\in[r]}b_{\ell}\beta_{\ell,i}|M_{\ell,i}|\ge(2+r\delta)|M_{k,r}|\right]\\ &=\Pr\left[\sum_{\ell\in[k]}A_{\ell}\ge(2+r\delta)|M_{k,r}|\right]\\ &\ge\Pr\left[\sum_{\ell\in[k]}X_{\ell}\ge\frac{4+2r\delta}{b_k}\cdot|M_{1,1}|\right], \end{align*} where the final inequality results from $M_{1,1}$ being a maximal matching and $b_{\ell}\ge b_k$ for $\ell\in[k]$. Now the variables $X_{\ell}$ are independent but not identically distributed. Nevertheless, we can write $Y=\sum_{\ell\in[k]}X_{\ell}$ and note that the distribution of $Y$ statistically dominates the distribution of $Z\sim\mathcal{B}\left(\alpha_0|M_{1,1}|r,\frac{1}{2{(2k)}^k}\right)$ since $|M_{\ell,i}|\ge|M_{1,1}|$ for all $\ell\in[k]$ and $i\in[r]$.
Thus for $b_k=\frac{1}{2k+2}$ and $\delta=\frac{b_k}{r}$, \begin{align*} \Pr\left[\sum_{\ell\in[k]}X_{\ell}\ge\frac{4+2r\delta}{b_k}\cdot|M_{1,1}|\right]&\ge\Pr\left[Z\ge\frac{4+2b_k}{b_k}|M_{1,1}|\right]\\ &=\Pr[Z\ge|M_{1,1}|(8k+10)]. \end{align*} For $r=\frac{2{(2k)}^k(16k+20)}{\alpha_0}$, we have $\mathop{\mathbf{E}}[Z]=(16k+20)|M_{1,1}|$. Thus from a simple Chernoff bound, \begin{align*} \Pr[Z\ge|M_{1,1}|(8k+10)]=1-\Pr\left[Z<\frac{\mathop{\mathbf{E}}[Z]}{2}\right]>1-e^{-2(16k+20)|M_{1,1}|}\ge 0.99. \end{align*} Putting things together, we have \begin{align*} \Pr\left[|M|\ge2|M_{1,1}|\right]\ge 0.99, \end{align*} which implies that there exists a matching with more than double the number of edges of the maximal matching $M_{1,1}$, a contradiction. Therefore, our assumption that $\max_{i \in [r]}\alpha_{\ell,i}\ge\alpha_0:=\frac{1}{2k^2(k+1)}$ for each of the phases $1\le\ell\le k$ must have been invalid. However, if $\alpha_{\ell,i}<\frac{1}{2k^2(k+1)}$ for some $i\in[r]$ and $\ell\in[k]$, then by Lemma~\ref{lem:alpha:bound}, we have for $k=\lceil\epsilon^{-1}+1\rceil$ and sufficiently small $\epsilon>0$ that \[ |M_{\ell,i}|\ge\frac{|M^*|}{1+1/k}\ge\frac{|M^*|}{1+\epsilon}\ge(1-\epsilon)|M^*|, \] with probability at least $0.99$. Thus, Algorithm~\ref{alg:randomized-matching} outputs a $(1-\epsilon)$-approximation to the maximum matching with probability at least $0.99$. \end{proof} \paragraph{Boosting the Success Probability.} To increase the probability of success to $1-p$ for any $p\in(0,1)$, a na\"{\i}ve approach would be to run $O\left(\log\frac{1}{p}\right)$ iterations of Algorithm~\ref{alg:randomized-matching} in parallel. However, the sensitivity analysis becomes considerably more challenging. Instead, we note that the constant probability of failure is actually a significant weakening of the $e^{-2(16k+20)|M_{1,1}|}$ probability of failure.
Thus, increasing $k$ by a factor of $O\left(\log\frac{1}{p}\right)$ increases the probability of success to $1-p$. However, for subconstant $p$, it also substantially increases the asymptotic sensitivity of Algorithm~\ref{alg:randomized-matching}. \subsection{Sensitivity of the Randomized Greedy and Algorithm~\ref{alg:randomized-matching}}\label{subsec:randomized-sensitivity} To analyze the sensitivity of Algorithm~\ref{alg:randomized-matching}, we first analyze the sensitivity of the randomized greedy algorithm. \subsubsection{Sensitivity of the Randomized Greedy}\label{sec:greedy} In this section, we study the sensitivity of the randomized greedy algorithm with respect to vertex deletions. Recall that given a graph $G=(V,E)$, the randomized greedy algorithm works as follows. First, it chooses a random ordering $\pi$ over the edges. Then starting with an empty matching $M$, it iteratively adds the $i$-th edge in the ordering $\pi$ to $M$ if it is not adjacent to any edge in $M$. The main result of this section is the following. \begin{theorem}\label{thm:randomized-greedy} Let $A$ be the randomized greedy algorithm for the maximum matching problem. Then, for any graph $G=(V,E)$ and a vertex $v \in V$, we have \[ d_{\mathrm{EM}}(A(G),A(G-v)) \leq 1. \] \end{theorem} We consider vertex deletions because they are needed to analyze the sensitivity of our randomized $(1-\epsilon)$-approximation algorithm for the maximum matching problem in Section~\ref{sec:randomized}. Our analysis is a slight modification of a similar result for the maximal independent set problem~\cite{censor2016optimal}. Hence, we defer the proof to Appendix~\ref{apx:randomized-greedy}. \subsubsection{Sensitivity of Algorithm~\ref{alg:randomized-matching}} We first analyze the sensitivity of \Call{AugmentingPaths}{}. Let us fix graphs $G=(V,E)$ and $G'=(V,E')$, matchings $M \subseteq E$ and $M' \subseteq E'$, a positive integer $\ell$, and $\delta > 0$.
Let $H$ and $H'$ be the layered graphs constructed using $G,M,\ell$ and $G',M',\ell$, respectively, and let $\tilde{L}$ and $\tilde{L}'$ be the sets of active vertices in $H$ and $H'$, respectively. \begin{lemma}\label{lem:sensitivity-active-vertex-set} We have $d_{\mathrm{H}}(\tilde{L}, \tilde{L}') \leq 3d_{\mathrm{H}}(E, E') + 3d_{\mathrm{H}}(M,M')$. \end{lemma} \begin{proof} Each edge modification in the graph or the matching may activate or deactivate at most three vertices in the layered graph (two of them are in the first and last layers, and the remaining one is in one of the middle layers), and hence the lemma follows. \end{proof} For two tag functions $t,t'\colon V(H) \to V(H) \cup \{\mathsf{untagged}, \mathsf{dead end}\}$, we define $d_{\mathrm{H}}(t,t') = |\{v \in V(H) \mid t(v) \neq t'(v)\}|$. We will use symbols $t$ and $t'$ to denote tag functions for $H$ and $H'$, respectively. Note that the domain of $t'$ is nominally $V(H')$, but $V(H')=V(H)$. \begin{lemma}\label{lem:augmenting-paths-sensitivity} Let $\mathcal{P} = \Call{AugmentingPaths}{G,M,\ell,\delta}$ and $\mathcal{P}' = \Call{AugmentingPaths}{G',M',\ell,\delta}$. Then, we have \[ d_{\mathrm{EM}}\left(\mathcal{P}, \mathcal{P}'\right) \leq (d_{\mathrm{H}}(E, E') + d_{\mathrm{H}}(M, M')) \cdot 3^K, \] where $K = {(1/\epsilon)}^{2^{O(1/\epsilon)}}$. \end{lemma} \begin{proof} Let $A$ and $A'$ denote $\Call{AugmentingPaths}{G,M,\ell,\delta}$ and $\Call{AugmentingPaths}{G',M',\ell,\delta}$, respectively. Let $M_1,M_2,\ldots,M_K$ and $M'_1,M'_2,\ldots,M'_K$ be the sequences of matchings constructed during the process of $A$ and $A'$, respectively. Note that $A$ and $A'$ construct the same number of matchings, and that $K \leq {{(1/\epsilon)}^{2^{O(1/\epsilon)}}}$, which follows by a similar argument to that in the proof of Lemma~\ref{lem:time-complexity}.
For $i \in [K]$, let $S_i$ and $S'_i$ be the vertex sets on which $M_i$ and $M'_i$, respectively, are constructed, that is, the vertex set passed on to $\Call{RandomizedGreedy}{}$, and let $t_i$ and $t'_i$ be the tag functions right before constructing $M_i$ and $M'_i$, respectively. By Lemma~\ref{lem:sensitivity-active-vertex-set}, we have $d_{\mathrm{EM}}(S_1, S'_1) \leq c$, where $c = 3d_{\mathrm{H}}(M, M') + 3d_{\mathrm{H}}(E, E')$. Because each difference between $M_{i-1}$ and $M'_{i-1}$ increases the Hamming distance between $t_i$ and $t'_i$ by one, we have \begin{align*} d_{\mathrm{EM}}(t_i, t'_i) &\leq d_{\mathrm{EM}}(M_{i-1}, M'_{i-1}) + d_{\mathrm{EM}}(t_{i-1}, t'_{i-1}) \leq \cdots \\ &\leq \sum_{j=1}^{i-1}d_{\mathrm{EM}}(M_{j}, M'_{j}) + d_{\mathrm{EM}}(t_1, t'_1) = \sum_{j=1}^{i-1}d_{\mathrm{EM}}(M_{j}, M'_{j}). \end{align*} Then we have \[ d_{\mathrm{EM}}(S_{i}, S'_{i}) \leq d_{\mathrm{EM}}(M_{i-1}, M'_{i-1}) + d_{\mathrm{EM}}(t_i, t'_i) \leq 2\sum_{j=1}^{i-1} d_{\mathrm{EM}}(M_{j}, M'_{j}) \leq 2 \sum_{j=1}^{i-1}d_{\mathrm{EM}}(S_j, S'_j), \] where the last inequality is due to Theorem~\ref{thm:randomized-greedy}. Solving this recursion, we get \[ \sum_{j=1}^i d_{\mathrm{EM}}(S_j, S'_j) \leq c \cdot 3^{i-1}, \] and hence we have $d_{\mathrm{EM}}(S_K, S'_K) \leq 2 \cdot 3^{K-2} c$, and the claim follows. \end{proof} \noindent We now show that the sensitivity of Algorithm~\ref{alg:randomized-matching} is $O_{\epsilon}(1)$. \begin{theorem}\label{thm:randomized:sensitivity} The sensitivity of Algorithm~\ref{alg:randomized-matching} is at most $3^K$, where $K = {(1/\epsilon)}^{2^{O(1/\epsilon)}}$. \end{theorem} \begin{proof} Let $G = (V,E)$ be a graph and $G' = (V,E') = G - e$ for some $e \in E$.
Let $M_0,M_1,\ldots,M_{kr}$ be the sequence of matchings we construct in Algorithm~\ref{alg:randomized-matching} on $G$, where $M_0$ is the matching constructed at Line~\ref{line:randomized-matching-0}, and $M_j$ is the matching obtained after the call to \Call{AugmentingPaths}{} in round $i$ of phase $\ell$ such that $j = (\ell-1) r + i$. We define $M'_0,M'_1,\ldots,M'_{kr}$ similarly using $G'$. Then, we have by Theorem~\ref{thm:randomized-greedy} \[ d_{\mathrm{EM}}(M_0,M'_0) \leq 1, \] and we have by Lemma~\ref{lem:augmenting-paths-sensitivity} \begin{align*} d_{\mathrm{EM}}(M_i,M'_i) & \leq d_{\mathrm{EM}}(M_{i-1},M'_{i-1}) + (d_{\mathrm{EM}}(E,E') + d_{\mathrm{EM}}(M_{i-1},M'_{i-1})) \cdot 3^K \\ & = d_{\mathrm{EM}}(M_{i-1},M'_{i-1}) (3^K + 1) + 3^K \end{align*} for $i \in [kr]$, where $K = {(1/\epsilon)}^{2^{O(1/\epsilon)}}$ and the equality uses $d_{\mathrm{EM}}(E,E') = 1$. Solving the recursion, we get \[ d_{\mathrm{EM}}(M_{kr},M'_{kr}) \leq 2 {\left(1 + 3^K\right)}^{kr} - 1, \] and we have the desired bound. \end{proof} \noindent The proof of Theorem~\ref{thm:matching} then follows from Theorems~\ref{thm:randomized-approximation} and~\ref{thm:randomized:sensitivity}. \paragraph{Sensitivity to Vertex Deletions.} We remark that Algorithm~\ref{alg:randomized-matching} also has sensitivity $O(3^K)$, for $K = {(1/\epsilon)}^{2^{O(1/\epsilon)}}$, to vertex deletions. Recall that Lemma~\ref{lem:sensitivity-active-vertex-set} crucially relies on each edge deletion changing at most three vertices in the layered graph. That is, due to the construction of the layered graph, each edge deletion alters at most two vertices in the first and last layers, and at most one vertex in one of the middle layers. This is because the first layer and the last layer encode the vertex set $V$, while each matched edge is assigned to one of the middle layers.
Observe that when we delete a vertex $v$, at most one vertex in the vertex set $V$ is altered, so that the first and last layer of the layered graph each have at most one change. Moreover, at most one matched edge is incident to $v$, so at most one vertex in one of the middle layers is altered as well. Thus, at most three vertices in the layered graph are changed as a result of the vertex deletion, so the sensitivity of Algorithm~\ref{alg:randomized-matching} to vertex deletions is again $O(3^K)$, for $K = {(1/\epsilon)}^{2^{O(1/\epsilon)}}$. \subsection{Applications to Online Matching with Replacements}\label{subsec:online-matching} In this section, we show that Algorithm~\ref{alg:randomized-matching} can be repurposed to obtain an algorithm for the online matching problem with replacements. In the \emph{edge-arrival} model for the online matching problem with replacements, the edges $E$ of the graph $G=(V,E)$ arrive sequentially as a data stream, and the goal is to maintain or approximate a maximum matching across all times, while minimizing the total number of edges that are altered between successive outputs of the algorithm. Formally, let $E_i=\{e_1,\ldots,e_i\}$ be the set of edges of the graph that have arrived by time $i$ and let $M^*_i$ be a maximum matching on $G_i=(V,E_i)$. Given a constant $c\le 1$, the goal of the online matching problem with replacements is to output a sequence of matchings $M_1,\ldots,M_{|E|}$ that minimizes $\sum_{i=1}^{|E|-1}d_{\mathrm{H}}(M_{i+1}, M_i)$ subject to the constraint $|M_i|\ge c|M^*_i|$, i.e., each matching $M_i$ is a $c$-approximation to the maximum matching at time $i$. The quantity $d_{\mathrm{H}}(M_{i+1}, M_i)$ is the number of replacements at time $i$ and the quantity $\sum_{i=1}^{|E|-1}d_{\mathrm{H}}(M_{i+1}, M_i)$ is the total number of replacements. The \emph{vertex-arrival} model is defined analogously, with the exception that the stream updates are a vertex $v_i$, along with all the edges adjacent to $v_i$.
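For concreteness, the total-replacement objective above can be computed as follows; this is a minimal sketch in which the matchings are toy sets rather than outputs of an actual algorithm:

```python
def num_replacements(matchings):
    # Total number of replacements: the sum of Hamming distances
    # d_H(M_{i+1}, M_i) between successive matchings, where each
    # matching is represented as a set of frozenset edges.
    return sum(len(m1 ^ m2) for m1, m2 in zip(matchings, matchings[1:]))

# Toy stream of matchings: one edge is added, then one edge is swapped.
m1 = {frozenset({1, 2})}
m2 = {frozenset({1, 2}), frozenset({3, 4})}
m3 = {frozenset({1, 2}), frozenset({3, 5})}
print(num_replacements([m1, m2, m3]))  # 1 + 2 = 3
```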
Bernstein et al.~\cite{BernsteinHR19} give an algorithm for online \emph{bipartite} matching with replacements in the vertex-arrival model that always outputs a maximum matching but has $O(n\log^2 n)$ total replacements. We show that our algorithm can be modified to achieve total replacements $O_{\epsilon}(n)$ and $(1-\epsilon)$-approximate maximum matchings for general graphs, i.e., not just bipartite graphs. The obstacle to immediately applying Algorithm~\ref{alg:randomized-matching} to the online matching with replacements setting is that the guarantee of Theorem~\ref{thm:matching} is only in terms of earth-mover's distance. Thus, we cannot apply a black-box reduction to the online model because each time we call Algorithm~\ref{alg:randomized-matching}, we can obtain a completely different matching, depending on the randomness of the algorithm. For example, suppose Algorithm~\ref{alg:randomized-matching} guarantees that at each time $i$ of the stream, there exist two maximal matchings $M_{i,1}$ and $M_{i,2}$ that are each $(1-\epsilon)$-approximations to the maximum matching, but $d_{\mathrm{H}}(M_{i,1}, M_{i,2})=\Omega(n)$. Moreover, suppose that at each time, Algorithm~\ref{alg:randomized-matching} outputs $M_{i,1}$ with probability $\frac{1}{2}$ and $M_{i,2}$ with probability $\frac{1}{2}$. If $d_{\mathrm{H}}(M_{i,1}, M_{i+1,1})=d_{\mathrm{H}}(M_{i,2}, M_{i+1,2})=O(1)$ at all times, then Algorithm~\ref{alg:randomized-matching} has $O(1)$ sensitivity at all times in the stream, but if $\tilde{M}_i$ is the matching output by the algorithm at each time $i$, we could potentially have $d_{\mathrm{H}}(\tilde{M}_{i}, \tilde{M}_{i+1})=\Omega(n)$ replacements, so that the total number of replacements is $\Omega(n^2)$. Instead, we open up the black-box of Algorithm~\ref{alg:randomized-matching} and show that we can achieve $O_{\epsilon}(n)$ total replacements by fixing components of the internal randomness of the algorithm across the duration of the stream.
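The fixing idea can be sketched as follows (a minimal illustration with names of our own choosing, not the full Algorithm~\ref{alg:randomized-matching}): sample one random priority per vertex pair at the start of the stream, and rerun the greedy with those fixed priorities after every arrival, so that successive outputs are built from a consistent edge ordering.

```python
import random

def fixed_priorities(n, seed=0):
    # Fix one random priority per vertex pair; this plays the role of the
    # permutation pi of the C(n,2) pairs sampled once at stream start.
    rng = random.Random(seed)
    pairs = [(u, v) for u in range(n) for v in range(u + 1, n)]
    rng.shuffle(pairs)
    return {p: i for i, p in enumerate(pairs)}

def greedy_matching(edges, priority):
    # Greedy maximal matching with respect to the fixed edge ordering.
    matched, M = set(), []
    for u, v in sorted(edges, key=lambda e: priority[e]):
        if u not in matched and v not in matched:
            M.append((u, v))
            matched.update((u, v))
    return M

pri = fixed_priorities(6)
stream = [(0, 1), (1, 2), (2, 3), (3, 4)]
# Rerunning the greedy after each arrival reuses the same ordering,
# so successive outputs differ in few edges.
for t in range(1, len(stream) + 1):
    print(greedy_matching(stream[:t], pri))
```

Because the ordering is fixed once, the output after each arrival is a deterministic function of the current edge set, which is what the proof below exploits.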
\begin{proof}[Proof of Theorem~\ref{thm:mm:recourse}] Recall that Algorithm~\ref{alg:randomized-matching} first fixes a random permutation $\pi$ of the edges in the subroutine $\textsc{RandomizedGreedy}$. Equivalently, we can fix a random permutation $\pi$ of $\binom{n}{2}$, which induces a consistent permutation of the edges across the entire stream. Let $A_{\pi}$ be the deterministic algorithm obtained from our randomized algorithm after sampling $\pi$ uniformly at random at the beginning of the stream and fixing the permutation $\pi$ of edges afterwards. Then whenever a new vertex arrives, we simply run $A_{\pi}$ on the current graph and return the solution. Since the expected number of replacements at each time is $O_{\epsilon}(1)$, the expected total number of replacements is $O_{\epsilon}(n)$. Thus by Markov's inequality, the total number of replacements will be $O_{\epsilon}(n)$ with probability at least $0.99$. \end{proof} \section{Deterministic Maximal Matching for Bounded-Degree Graphs}\label{sec:deterministic} In this section, we give a deterministic algorithm for computing a maximal matching that has low sensitivity on bounded-degree graphs. The main idea is to use deterministic local computation algorithms with a small number of probes to find a maximal matching. Our algorithm uses two main ingredients. The first ingredient is a deterministic LCA of~\cite{ColeV86} for $6^{\Delta}$-coloring a graph with maximum degree $\Delta$, using $O(\Delta\log^*n)$ probes. The second ingredient is a framework of~\cite{ParnasR07} that simulates local distributed algorithms using a deterministic LCA\@. In particular, we use the framework to simulate an algorithm that takes a coloring of a graph and outputs a maximal matching. We give the details for the local distributed algorithm in Algorithm~\ref{alg:det:mm:color}. To bound the sensitivity of the algorithm, it suffices to analyze the number of queries for which a deleted edge would be probed.
Crucially, both the deterministic LCA of~\cite{ColeV86} and the framework of~\cite{ParnasR07} only probe edges (incident to vertices) within a small radius of the query. Thus, only a small number of queries will probe the edge that is altered, so that the output of the algorithm only has a small number of changes. We first require a deterministic LCA of~\cite{ColeV86} for $6^{\Delta}$-coloring a graph with maximum degree $\Delta$, using a small number of probes within distance $O(\Delta\log^*n)$ of the query. We give the full details in Algorithm~\ref{alg:coloring}, which has the following guarantee. \begin{lemma}[\cite{ColeV86}]\label{lem:lca:coloring} There exists a deterministic LCA \textsc{ColoringLCA} for $6^{\Delta}$-coloring a graph with maximum degree $\Delta$, using $O(\Delta\log^*n)$ probes. \end{lemma} \begin{algorithm}[t!] \caption{LCA Algorithm \textsc{ColoringLCA} for $6^\Delta$-coloring with $O(\Delta\log^* n)$ probes}\label{alg:coloring} \Procedure{\emph{\Call{FormForests}{$G,\Delta$}}}{ // Decompose the graph into $\Delta$ oriented forests\; \For{$i=1$ to $\Delta$}{ Let $N_u(i)$ denote the $i$-th neighbor of $u$ according to the IDs of the vertices\; Let $E_i=\{(u,v):\mathsf{id}(u)<\mathsf{id}(v), v=N_u(i)\}$\; Let $G_i = (V, E_i)$ be the oriented forest, in which each root has no out-going edges\; } } \Procedure{\emph{\Call{ColorForests}{$G_i,\Delta$}}}{ \For{$\Theta(\log^* n)$ rounds}{ \For{each node $u$}{ \If{$u$ is a root node}{ Set $\phi_u$ to $0$\;} \Else{ Let $v$ be the parent of $u$ in $G_i$\; Let $a_u$ be the index of the least significant bit in which $\phi_u$ and $\phi_v$ differ\; Let $b_u$ be the value of the $a_u$-th bit of $\phi_u$\; $\phi_u\gets a_u\circ b_u$. } } } } \end{algorithm} We now describe a local distributed algorithm that takes a coloring of a graph and outputs a maximal matching.
The algorithm iterates over all colors and adds any edge adjacent to a vertex of a particular color to the greedy matching if there is no other adjacent edge already present in the matching. We give the algorithm in full in Algorithm~\ref{alg:det:mm:color}. \begin{algorithm}[t!] \caption{Maximal Matching Algorithm \textsc{Coloring-to-MM}}\label{alg:det:mm:color} \Procedure{\emph{\Call{Coloring-to-MM}{$G$ colored with $c$ colors}}}{ $M\gets\emptyset$\; \For{color $i=1$ to $i=c$}{ \For{each edge $(u,v)$ with either $\phi_u=i$ or $\phi_v=i$}{ Add $(u,v)$ to $M$ if no adjacent edge is in $M$. } } } \end{algorithm} Putting things together, we obtain a deterministic maximal matching algorithm in Algorithm~\ref{alg:det:mm:}. \begin{algorithm}[t!] \caption{Maximal Matching Algorithm}\label{alg:det:mm:} \Procedure{\emph{\Call{MaximalMatching}{Graph $G$}}}{ Compute the coloring $G_C=\textsc{ColoringLCA}(G,\Delta)$\; Output $\textsc{Coloring-to-MM}(G_C)$\; } \end{algorithm} We next require the following framework of~\cite{ParnasR07} that simulates local distributed algorithms using a deterministic LCA. In particular, we will implement Algorithm~\ref{alg:det:mm:color}. \begin{lemma}[\cite{EvenMR15,ParnasR07}]\label{lem:lca:colortomm} Given access to an oracle that takes vertices of an underlying graph as queries and outputs a color for the queried vertex, there exists a deterministic LCA that can implement \textsc{Coloring-to-MM} using $\Delta^{O(c)}$ probes. \end{lemma} Given a \textsc{ColoringLCA} for $6^\Delta$-coloring, the LCA for maximal matching in Lemma~\ref{lem:lca:colortomm} uses the following idea. For a query edge $e$, we first call \textsc{ColoringLCA} for every vertex within distance roughly $6^\Delta$ of $e$. Parnas and Ron~\cite{ParnasR07} then show that it suffices to run Algorithm~\ref{alg:det:mm:color} locally on the graph of radius roughly $6^\Delta$ from $e$.
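The color-reduction step in \Call{ColorForests}{} can be made concrete as follows; this is a sketch of a single Cole--Vishkin update for a non-root vertex, assuming (as the pseudocode does) that adjacent vertices currently have distinct colors:

```python
def cv_step(color_u, color_v):
    # One Cole-Vishkin update for a non-root vertex u with parent v
    # (assumes color_u != color_v): a_u is the index of the least
    # significant bit in which the two colors differ, b_u is u's bit at
    # that index, and the new color encodes the concatenation
    # a_u . b_u as the integer 2*a_u + b_u.
    diff = color_u ^ color_v
    a_u = (diff & -diff).bit_length() - 1
    b_u = (color_u >> a_u) & 1
    return 2 * a_u + b_u

# On a directed path 5 -> 3 -> 7 (child to parent), the updated colors
# of adjacent vertices are again distinct.
print(cv_step(5, 3), cv_step(3, 7))  # 2 4
```

The standard invariant is that if $\phi_u\neq\phi_v$ and $\phi_v\neq\phi_w$ along a directed path, then the updated colors of $u$ and $v$ are again distinct, and the number of bits per color drops roughly logarithmically each round, which is what drives the $O(\log^* n)$ round bound.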
We now show that our deterministic LCA-based algorithm outputs a maximal matching with low worst-case sensitivity for low-degree graphs. \begin{proof}[Proof of Theorem~\ref{thm:det:matching}] Consider running the deterministic LCA from Lemma~\ref{lem:lca:colortomm} that simulates \textsc{Coloring-to-MM} on a graph $G$ and a graph $G':=G-e$, for some $e \in E$. Let $S$ be the set of vertices that are assigned different colors in $G$ and $G'$ by \textsc{ColoringLCA}. First observe that at most $\Delta^{\Theta(\log^*n)}$ vertices are within distance $\Theta(\log^*n)$ of $e$. Hence, from Lemma~\ref{lem:lca:coloring} we have $|S|\le\Delta^{\Theta(\log^*n)}$. Moreover, each vertex $u\in S$ is within distance $O\left(6^\Delta\right)$ of at most $\Delta^{O\left(6^\Delta\right)}$ other vertices. Thus the total number of edges that differ between the matchings $M$ and $M'$ output by \textsc{Coloring-to-MM} for $G$ and $G'$ respectively is at most \[ \Delta^{\Theta(\log^*n)}\cdot\Delta^{O\left(6^\Delta\right)}=\Delta^{O\left(6^\Delta+\log^* n\right)}. \qedhere \] \end{proof} \noindent It is clear that an almost identical analysis goes through for vertex sensitivity. \section{Lower Bounds for Maximum Matching}\label{sec:lb} In this section, we show lower bounds for deterministic and randomized algorithms for the maximum matching problem. \subsection{Deterministic Lower Bound}\label{sec:det:lb} In this section, we prove Theorem~\ref{thm:lb-deterministic}, which states that \emph{any} deterministic constant-factor approximation algorithm for the maximum matching problem has edge sensitivity $\Omega(\log^*n)$. Our proof relies on Ramsey's theorem. First we introduce some definitions. Let $Y$ be a finite set. We say that $X$ is a \emph{$k$-subset} of $Y$ if $X \subseteq Y$ and $|X|=k$. Let $Y^{(k)} =\{X \subseteq Y \mid |X|=k\}$ be the collection of all $k$-subsets of $Y$. A $c$-labeling of $Y^{(k)}$ is an arbitrary function $f : Y^{(k)} \to [c]$.
Then we say that $X \subseteq Y$ is \emph{monochromatic} in $f$ if $f(A)= f(B)$ for all $A,B \in X^{(k)}$. Let $R_c(n; k)$ be the smallest integer $N$ such that the following holds: for any set $Y$ with at least $N$ elements, and for any $c$-labeling $f$ of $Y^{(k)}$, there is an $n$-subset of $Y$ that is monochromatic in $f$. If no such $N$ exists, $R_c(n; k) = \infty$. Define $\mathrm{twr}(k)$ as the tower of twos of height $k$, that is, $\mathrm{twr}(1) = 2$ and $\mathrm{twr}(k+1) = 2^{\mathrm{twr}(k)}$. We will use the following formulation of Ramsey's theorem. \begin{theorem}[Special case of Theorem~1 in~\cite{Erdos1952}]\label{thm:ramsey} For any positive integer $t$, $R_{2^t}(t+1; t) \leq \mathrm{twr}(O(t))$. \end{theorem} We now show that any deterministic constant-factor approximation algorithm for the maximum matching problem has edge sensitivity $\Omega(\log^* n)$. \begin{proof}[Proof of Theorem~\ref{thm:lb-deterministic}] Let $A$ be an arbitrary deterministic constant-factor approximation algorithm for the maximum matching problem, and let $t$ be a positive integer to be determined later. Let $\mathcal{G}$ be the class of graphs on the vertex set $[n]$, each consisting of a cycle $v_1,\ldots,v_t$ with $v_1 < v_2 < \cdots < v_t$ and $n-t$ isolated vertices. Given a matching $M$ on the cycle $v_1,\ldots,v_t$, we encode it to an integer $0 \leq k \leq 2^t-1$ so that the $i$-th bit of $k$ is $1$ if and only if the edge $\{v_i, v_{i+1}\}$ belongs to $M$, where we regard $v_{t+1} = v_1$. Then, we can regard the algorithm $A$ as a function $f: \binom{[n]}{t} \to \{0,1,\ldots,2^t-1\}$, that is, given a set $\{v_1,\ldots,v_t\} \subseteq [n]$ with $v_1<v_2<\cdots<v_t$, we compute a matching on the cycle $v_1,\ldots,v_t$, and encode it to an integer.
Then if $n \geq R_{2^t}(t+1; t)$, which holds when $t = O(\log^*n)$ by Theorem~\ref{thm:ramsey}, there exists a set $S = \{s_0,s_1,\ldots,s_{t}\} \subseteq [n]$ with $s_0 < s_1 < \cdots <s_t$ such that $f(T)$ is constant whenever $T \subseteq S$ with $|T|=t$. Let $G,G' \in \mathcal{G}$ be the graphs with cycles $s_0,\ldots,s_{t-1}$ and $s_1,\ldots,s_{t}$, respectively, and let $M$ and $M'$ be the matchings output by $A$ on $G$ and $G'$, respectively. As $M$ and $M'$ have the same encoding, $\{s_i,s_{i+1 \bmod t} \} \in M$ if and only if $\{s_{i+1},s_{i+2}\} \in M'$, where we regard $s_{t+1}=s_1$. Note, however, that if $\{s_i,s_{i+1}\} \in M$ for some $1 \le i \le t-2$, then $\{s_{i-1},s_i\} \not\in M$ since $M$ is a matching, and hence $\{s_i,s_{i+1}\} \not\in M'$ by the equivalence above. It follows that $d_{\mathrm{H}}(M,M') = \Omega(|M|) = \Omega(t)$, where the last equality holds because $A$ has a constant approximation ratio. \end{proof} \subsection{Lower Bounds for Deterministic Greedy Algorithm}\label{subsec:lb-deterministic-greedy} As we have seen in Section~\ref{sec:greedy}, the randomized greedy algorithm has $O(1)$ sensitivity even for vertex deletion. Can we derandomize it without increasing the sensitivity? To make the question more precise, let $V$ be a set of $n$ vertices and $\pi$ be a permutation over $\binom{V}{2}$. Then, let $A_\pi$ denote the greedy algorithm such that, starting with an empty matching $M$, it iteratively adds the $i$-th edge with respect to $\pi$ to $M$ if and only if the edge does not share an endpoint with any edge in $M$. We now show that the answer to the question is negative. \begin{theorem}\label{thm:lb-greedy} For any permutation $\pi$ over $\binom{V}{2}$, the algorithm $A_\pi$ has sensitivity $\Omega(n)$. \end{theorem} \begin{proof} We say that an element $e$ of a poset \emph{covers} another element $e'$ if $e>e'$, where $>$ is the order relation of the poset, and there is no other element $e''$ such that $e>e''>e'$.
Then, we construct a poset $P$ on the element set $\binom{V}{2}$ in which a pair $e \in \binom{V}{2}$ covers another pair $e' \in \binom{V}{2}$ if $\pi(e) > \pi(e')$ and $|e \cap e'| \geq 1$. Note that the size of any antichain in $P$ is at most $n/2$: a set of elements of size more than $n/2$ must have two elements $e,e'$ with $|e \cap e'| \geq 1$, which form a chain of length two. Hence, we need at least $\binom{n}{2} / (n/2) = n-1$ antichains to cover all the elements in $P$. Then by Mirsky's theorem, there exists a chain, say, $e_1,\ldots,e_{n-1}$, of size $n-1$ in $P$. From the construction of $P$, $e_1,\ldots,e_{n-1}$ forms a path of length $n-1$. Then, $A_\pi$ on the path $e_1,\ldots,e_{n-1}$ outputs edges with odd indices, whereas $A_\pi$ on the path $e_2,\ldots,e_{n-1}$ outputs edges with even indices, and hence the sensitivity of $A_\pi$ is $\Omega(n)$. \end{proof} \subsection{Lower Bounds for Randomized Algorithms}\label{subsec:lb-randomized} The following shows that the sensitivity must increase as the approximation ratio approaches one. \begin{theorem}\label{thm:lb-randomized} Let $\epsilon > 0$. Any (possibly randomized) $(1-\epsilon)$-approximation algorithm for the maximum matching problem has sensitivity $\Omega(1/\epsilon)$. \end{theorem} \begin{proof} For simplicity, we assume $1/(10\epsilon)$ is an even integer. Let $A$ be an arbitrary $(1-\epsilon)$-approximation algorithm for the maximum matching problem, and let $G$ be a graph consisting of a cycle of length $1/(10\epsilon)$ and $n-1/(10\epsilon)$ isolated vertices. Clearly $G$ has two disjoint maximum matchings, say, $M_1,M_2$, of size $1/(20\epsilon)$. Let $p_1$ and $p_2$ be the probability that $A$ on $G$ outputs $M_1$ and $M_2$, respectively. Then as $A$ has approximation ratio $1-\epsilon$, we have \[ p_1 \cdot \frac{1}{20\epsilon} + p_2 \cdot \frac{1}{20\epsilon} + (1-p_1-p_2) \cdot \left(\frac{1}{20\epsilon}-1\right) \geq \frac{1-\epsilon}{20\epsilon}.
\] Hence, we have $p_1+p_2 \geq 19/20$, and it follows that at least one of $p_1 \geq 19/40$ and $p_2 \geq 19/40$ holds. Without loss of generality, we assume $p_1 \geq 19/40$. Let $G'$ be the graph obtained from $G$ by removing one edge in $M_1$. Then, $G'$ has a unique maximum matching $M_2$. Let $p'_2$ be the probability that $A$ on $G'$ outputs $M_2$. As $A$ has approximation ratio $1-\epsilon$, we have \[ p'_2 \cdot \frac{1}{20\epsilon} + (1-p'_2) \cdot \left( \frac{1}{20\epsilon}-1 \right) \geq \frac{1-\epsilon}{20\epsilon}, \] which implies $p'_2 \geq 19/20$. Hence, the sensitivity of $A$ is at least \[ \max\Bigl(\Pr[A(G) = M_1] - \Pr[A(G') \neq M_2],0\Bigr) \cdot d_{\mathrm{H}}(M_1,M_2) \geq \left(\frac{19}{40} - \frac{1}{20}\right) \cdot \frac{1}{10\epsilon} = \Omega\left(\frac{1}{\epsilon}\right). \qedhere \] \end{proof} \section{Weighted Sensitivity and Maximum Weighted Matching} \label{sec:weighted} In this section, we consider a generalization of sensitivity to weighted graphs, and show an approximation algorithm with low sensitivity for the maximum weighted matching problem. \subsection{Weighted Sensitivity} Given a weight function over the edges $w:E\to\mathbb{R}$ of a graph $G=(V,E)$ and two edge sets $S$ and $S'$, let \[d_{\mathrm{H}}^w(S,S') = \sum_{e\in S \triangle S'}w(e),\] where $\triangle$ again denotes the symmetric difference; this is the weighted Hamming distance between $S$ and $S'$ with respect to $w$. For random edge sets $X$ and $X'$, we use $d_{\mathrm{EM}}^w(X,X')$ to denote the weighted earth mover's distance between $X$ and $X'$ with respect to $w$, so that \[d_{\mathrm{EM}}^w(X,X') = \min_{\mathcal{D}} \mathop{\mathbf{E}}_{(S,S') \sim \mathcal{D}}d_{\mathrm{H}}^w(S,S'),\] where the minimum is over distributions $\mathcal{D}$ whose marginal distributions on the first and second coordinates are $X$ and $X'$, respectively.
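As a toy computation of $d_{\mathrm{H}}^w$ (a sketch with hypothetical edge weights):

```python
def weighted_hamming(S, T, w):
    # d_H^w(S, T): the total weight of the symmetric difference S ^ T.
    return sum(w[e] for e in S ^ T)

# Hypothetical weights on three edges of a small graph.
w = {frozenset({1, 2}): 4.0, frozenset({2, 3}): 1.5, frozenset({3, 4}): 2.0}
S = {frozenset({1, 2}), frozenset({2, 3})}
T = {frozenset({2, 3}), frozenset({3, 4})}
print(weighted_hamming(S, T, w))  # 4.0 + 2.0 = 6.0
```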
For a real-valued function $\beta$ on graphs, we say that the \emph{weighted sensitivity} of an algorithm $A$ that outputs a set of edges is at most $\beta$ if for every graph $G=(V,E)$, a weight function $w:E \to \mathbb{R}$, and an edge $e\in E$, \[d_{\mathrm{EM}}^w(A(G),A(G-e)) \leq \beta(G).\] A priori, it is not clear whether the weighted sensitivity of an algorithm should correlate with the weight of the removed edge. Thus we say that the \emph{normalized weighted sensitivity} is at most $\beta$ if \[ \frac{d_{\mathrm{EM}}^w(A(G),A(G-e))}{w(e)} \leq \beta(G).\] \subsection{Algorithm Description} We use a simple approach of partitioning the input by weight, finding a maximal matching on each partition, and finally forming a weighted matching by greedily adding edges from the maximal matchings, beginning with the matchings in the largest weight classes. The approach is known to give a $(4+\epsilon)$-approximation~\cite{BuryGMMSVZ19,CrouchS14} to the maximum weighted matching. However, to bound the weighted sensitivity of our algorithm, we must choose the trade-off parameter $\alpha$ carefully. A formal description of our algorithm is given in Algorithm~\ref{alg:weighted-matching}. It first defines subsets of edges $E_i$, where we assume the weight of each edge is polynomially bounded in $n$, so that $1\le w(e)\le n^c$ for some constant $c$. For a parameter $\alpha>1$, we define $E_i$ to be the subset of edges in $E$ with weight at least $\alpha^i$. Algorithm~\ref{alg:weighted-matching} first draws a random permutation $\pi$ of the edges and greedily forms a maximal matching $M_i$ on each set $E_i$ induced by $\pi$. It then greedily adds edges to a maximal matching, starting from the matching of the heaviest weight class and moving downward. That is, we initialize $M$ to be the empty set and greedily add edges of $M_i$ to $M$, starting with $i=O(\log_{\alpha} n^c)$ and decrementing $i$ after each iteration. \begin{algorithm}[t!]
\caption{Algorithm for maximum weighted matching}\label{alg:weighted-matching} \Procedure{\emph{\Call{Matching}{$G,\alpha,w$}}}{ Let $C$ be a sufficiently large constant and $\alpha>1$ be a trade-off parameter.\; Let $\pi$ be a random ordering of the edges of $G$\; For each $i=0$ to $i=C\log n$, let $E_i$ be the set of edges with weight at least $\alpha^i$\; Let $M_i$ be the greedy maximal matching of $E_i$ induced by $\pi$\; $M\gets\emptyset$\; \For{$i=C\log n$ to $0$}{ \For{$e\in M_i$}{ \If{$e$ is not adjacent to $M$}{ $M\gets M\cup\{e\}$\; } } } \Return $M$. } \end{algorithm} \begin{theorem}[\cite{BuryGMMSVZ19,CrouchS14}]\label{thm:weighted:approx} Algorithm~\ref{alg:weighted-matching} gives a $\frac{1}{4\alpha}$-approximation to the maximum weighted matching and runs in time $O\left(m\log_{\alpha} n\right)$ on a graph with $m$ edges and $n$ vertices. \end{theorem} \subsection{Sensitivity Analysis} We first require the following key structural lemma, whose formal version we use to prove Theorem~\ref{thm:randomized-greedy} in Appendix~\ref{apx:randomized-greedy}. \begin{lemma} \label{lem:deleted:change} In expectation, the deletion of an edge $e$ alters at most one edge in $M_i$, i.e., at most one edge in $M_i$ is inserted or deleted in expectation. (Informal, see Lemma~\ref{lem:expected:change}.) \end{lemma} The sensitivity analysis follows from the observation that the deletion of an edge $e$ can only affect the matchings $M_i$ for which $\alpha^i\le w(e)$. Moreover, by Lemma~\ref{lem:deleted:change}, the deletion of edge $e$ affects at most one edge in $M_i$ in expectation. Thus in expectation, the deletion of $e$ affects at most two edges in $M_{i-1}$, and inductively, the deletion of $e$ affects at most $2^j$ edges in $M_{i-j}$ in expectation. On the other hand, each affected edge in $M_{i-j}$ contributes weight $O(\alpha^{i-j})$, so the weighted sensitivity is $O\bigl(\sum_j \alpha^{i-j}2^j\bigr)$.
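To make the procedure concrete, here is a minimal Python sketch of Algorithm~\ref{alg:weighted-matching}; the function name, the edge-list representation, and the handling of the top class index are our own choices, and the pseudocode above remains authoritative:

```python
import math
import random

def weighted_matching(edges, w, alpha, n, c=1):
    """Sketch: partition edges into weight classes E_i = {e : w(e) >= alpha^i},
    build a greedy maximal matching M_i on each class under one common random
    edge ordering pi, then merge the matchings from the heaviest class down.
    Assumes 1 <= w(e) <= n**c for every edge."""
    pi = list(edges)
    random.shuffle(pi)                     # common random ordering pi
    top = int(math.log(n ** c, alpha)) + 1  # largest class index needed
    M, matched = set(), set()               # matched = vertices covered by M
    for i in range(top, -1, -1):
        # greedy maximal matching M_i of E_i induced by pi
        Mi, used = [], set()
        for (u, v) in pi:
            if w[(u, v)] >= alpha ** i and u not in used and v not in used:
                Mi.append((u, v))
                used.update((u, v))
        # add edges of M_i that are not adjacent to the current matching M
        for (u, v) in Mi:
            if u not in matched and v not in matched:
                M.add((u, v))
                matched.update((u, v))
    return M
```

On a path with two heavy outer edges and one light middle edge, the heavy class is matched first, so the light edge is always excluded regardless of the random ordering.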
Hence for $\alpha=2$, the weighted sensitivity is $O(2^i\log n)$, and for $\alpha>2$, the weighted sensitivity is $O(2^i)$. Similarly, for the normalized weighted sensitivity, we rescale by $\frac{1}{2^i}$, so that the normalized weighted sensitivity is $O(\log n)$ for $\alpha=2$ and $O(1)$ for $\alpha>2$. We now formalize this intuition. \begin{theorem}\label{thm:weighted:sensitivity} Suppose $1\le w(e)\le W\le n^c$ for some absolute constants $W,c>0$ for all $e\in E$. The weighted sensitivity of Algorithm~\ref{alg:weighted-matching} is $O(W\log n)$ for $\alpha=2$ and $O(W)$ for $\alpha>2$. The normalized weighted sensitivity of Algorithm~\ref{alg:weighted-matching} is $O(\log n)$ for $\alpha=2$ and $O(1)$ for $\alpha>2$. \end{theorem} \begin{proof} Let $e$ be an edge of weight $w(e)\in[2^i,2^{i+1}]$ for some integer $i\ge 0$, and suppose $e$ is removed from $G$. For $j\le i$, let $S_j$ be the set of edges in $M_j$ affected by the deletion of edge $e$, so that by Lemma~\ref{lem:deleted:change}, $\mathbb{E}[|S_i|]\le 1$. Then we have $\mathbb{E}[|S_i\cup\{e\}|]\le 2$, so that Lemma~\ref{lem:deleted:change} implies that $\mathbb{E}[|S_{i-1}|]\le 2$. Now suppose that for a fixed $j\le i$, we have $\mathbb{E}\left[\left|\{e\}\cup\bigcup_{k=j}^{i}S_{k}\right|\right]\le 2^{i-j}$. Then Lemma~\ref{lem:deleted:change} implies that $\mathbb{E}[|S_{j-1}|]\le 2^{i-j}$, so that \[\mathbb{E}\left[\left|\{e\}\cup\bigcup_{k=j-1}^{i}S_{k}\right|\right]\le 2^{i-j+1}.\] Hence by induction, we have $\mathbb{E}[|S_{j}|]\le 2^{i-j}$. Since each affected edge in $S_j$ contributes weight $O(\alpha^j)$, we have \[d_{\mathrm{EM}}^w(A(G),A(G - e))\le O\left(\sum_{j=0}^i 2^{i-j}\alpha^j\right),\] where $A(G)$ denotes the output of Algorithm~\ref{alg:weighted-matching} on $G$. Under the assumption that $1\le w(e)\le n^c$ for some absolute constant $c>0$ for all $e\in E$, we have $i=O(\log n)$.
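A short worked evaluation of the geometric sum may be helpful here. Writing $i_0$ for the largest index of a weight class affected by the deletion (an indexing convention of ours, so that $\alpha^{i_0}\le w(e)\le 2^{i+1}$), we have
\[
\sum_{j=0}^{i_0} 2^{\,i_0-j}\alpha^{j}
= \alpha^{i_0}\sum_{k=0}^{i_0}\Bigl(\frac{2}{\alpha}\Bigr)^{k}
\le \frac{\alpha}{\alpha-2}\,\alpha^{i_0}
\le \frac{\alpha}{\alpha-2}\,w(e)
= O(W) \qquad (\alpha>2),
\]
while for $\alpha=2$ every summand equals $2^{i_0}$, giving $(i_0+1)\,2^{i_0}\le O(w(e)\log n)=O(W\log n)$.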
Hence for $\alpha=2$, we have $\sum_{j=0}^i 2^{i-j}\alpha^j=O(2^i\log n)=O(W\log n)$, and for $\alpha>2$, using that only the weight classes with $\alpha^j\le w(e)\le 2^{i+1}$ contribute, we have $\sum_{j=0}^i 2^{i-j}\alpha^j=O(2^i)=O(W)$. Moreover, since $w(e)\ge 2^i$, the normalized weighted sensitivity satisfies \[\frac{d_{\mathrm{EM}}^w(A(G),A(G - e))}{w(e)}\le\frac{1}{2^{i}}\sum_{j=0}^i 2^{i-j}\alpha^j,\] which is $O(\log n)$ for $\alpha=2$ and $O(1)$ for $\alpha>2$. \end{proof} \noindent Together, Theorem~\ref{thm:weighted:approx} and Theorem~\ref{thm:weighted:sensitivity} give the full guarantees of Algorithm~\ref{alg:weighted-matching}. \begin{theorem} Let $G=(V,E)$ be a weighted graph with $w(e)\le W\le n^c$ for some constant $c>0$ and all $e\in E$. For a trade-off parameter $\alpha$, there exists an algorithm that outputs a $\frac{1}{4\alpha}$-approximation to the maximum weighted matching in $O(m\log_{\alpha} n)$ time. For $\alpha=2$, the algorithm has weighted sensitivity $O(W\log n)$ and normalized weighted sensitivity $O(\log n)$. For $\alpha>2$, the algorithm has weighted sensitivity $O(W)$ and normalized weighted sensitivity $O(1)$. \end{theorem} \noindent We again emphasize that the worst-case weighted sensitivity of \emph{any} constant-factor approximation algorithm for the maximum weighted matching problem is at least $\Omega(W)$. Recall that if an edge of weight $W=n^c$ is altered in a graph whose remaining edges have weight $1$, then any constant-factor approximation to the maximum weighted matching must include the heavy edge for sufficiently large $c>0$, which incurs cost $\Omega(W)$ in the weighted sensitivity. Thus for $\alpha>2$, Algorithm~\ref{alg:weighted-matching} performs well with respect to both weighted sensitivity and normalized weighted sensitivity. \section{Conclusion and Open Questions} In this paper, we study the worst-case sensitivity of approximation algorithms for the maximum matching problem.
We give a randomized $(1-\epsilon)$-approximation algorithm with worst-case sensitivity $O_{\epsilon}(1)$, which improves on an algorithm of Varma and Yoshida that offers the same approximation guarantee but only \emph{average} sensitivity $n^{O(1/(1+\epsilon^2))}$. We also give a deterministic $1/2$-approximation algorithm with sensitivity $\exp(O(\log^*n))$ for bounded-degree graphs. We introduced the concept of normalized weighted sensitivity for the maximum weighted matching problem and gave an algorithm with $O(1)$ normalized weighted sensitivity that outputs a $\frac{1}{4\alpha}$-approximation to the maximum weighted matching in $O(m\log_{\alpha} n)$ time, for a trade-off parameter $\alpha>2$. We believe there are many interesting open questions for future exploration. Since our work focuses on the maximum matching problem, we have not considered normalized weighted sensitivity for other graph problems. Even for maximum matching, there remain many potential directions for future research. For example, there remains a large gap in our understanding of the worst-case sensitivity of deterministic algorithms. Another line of study is constant-factor approximation algorithms for maximum weighted matching with low sensitivity, rather than low normalized weighted sensitivity. Can we achieve a $(1-\epsilon)$-approximation to the maximum weighted matching problem while still having low normalized weighted sensitivity? \appendix \section{Proof of Theorem~\ref{thm:randomized-greedy}}\label{apx:randomized-greedy} In this section, we formalize the proof of Theorem~\ref{thm:randomized-greedy}. The approach follows the same structure as that of~\cite{censor2016optimal}, who give an algorithm for maximal independent set in the dynamic distributed model.
The only difference is that we maintain a maximal matching rather than a maximal independent set, so we must track the order of the edges rather than the order of the vertices in a given permutation. We offer the proof for completeness. In what follows, we fix a graph $G=(V,E)$ and a vertex $v \in V$, and let $G'=G-v$. For a permutation $\pi$ over the edges of $G$, let $A_\pi$ be the deterministic algorithm that, starting with an empty matching $M$, iteratively adds edges to $M$ in the order $\pi$ if they do not intersect $M$. Then, the randomized greedy algorithm $A$ can be seen as an algorithm that chooses a random permutation $\pi$ and then runs $A_\pi$. Let $M_\pi$ and $M'_\pi$ be the maximal matchings obtained by running $A_\pi$ on $G$ and $G'$, respectively. (Here we use $\pi$ as a permutation over the edges of $G'$ by ignoring the edges incident to $v$.) Let $M$ and $M'$ be the maximal matchings obtained by running $A$ on $G$ and $G'$, respectively. For an edge $e \in E$, we call $\pi(e)$ the \emph{rank} of $e$, and let $I_\pi(e)$ be the set of edges sharing an endpoint with $e$ and having smaller rank, that is, $I_\pi(e) = \{e' \in N_G(e) \mid \pi(e') < \pi(e)\}$. Note that the matching $M_\pi$ can be described by the following invariant: \begin{quote} An edge $e$ is in $M_\pi$ if and only if none of its neighbors $e' \in N_G(e) \cap I_\pi(e)$ is in $M_\pi$. \end{quote} Our goal is to show that, in expectation over $\pi$, we need to modify at most one edge in $M_\pi$ so that the invariant is satisfied for the graph $G'$. Let $e_\pi \in E$ be the edge incident to $v$ with the smallest rank with respect to $\pi$. We define $S_\pi \subseteq E$ to be, intuitively, the set of edges in $G$ that need to be changed to maintain the invariant. Formally, we set $S_{\pi,0} = \{e_\pi\}$ if $e_\pi \in M_\pi$ and $S_{\pi,0} = \emptyset$ otherwise.
Then for $i>0$, recursively set \[ S_{\pi,i} = \{e \in M_\pi \mid S_{\pi, i-1}\cap I_\pi(e)\neq\emptyset\}\cup \left\{e \not \in M_\pi \mid I_\pi(e)\cap M_\pi \subseteq\bigcup_{j=0}^{i-1} S_{\pi, j}\right\}. \] We then define $S_\pi = \bigcup_{i\ge 0} S_{\pi, i}$ and show the following, from which Theorem~\ref{thm:randomized-greedy} immediately follows. \begin{lemma}\label{lem:expected:change} \[\mathop{\mathbf{E}}_\pi|S_\pi| \leq 1.\] \end{lemma} We define $S'_\pi \subseteq E$ to be, intuitively, the set of edges that must be changed to maintain the invariant if $e_\pi$ is moved to the beginning of $\pi$. That is, $S'_{\pi,0}=\{e_\pi\}$ and we define $S'_{\pi,i}$ using the same recursion as $S_{\pi,i}$, though the underlying permutation is now $\pi$ with $e_\pi$ moved to the beginning. We then define $S'_\pi=\bigcup_{i\ge 0} S'_{\pi,i}$. The following is a counterpart of Lemma~2 in~\cite{censor2016optimal}. \begin{lemma}\label{lem:possible:changes} If $\pi(e_\pi ) \neq \min\{\pi(e) \mid e \in S'_\pi\}$, then $S_\pi = \emptyset$. Otherwise, $S_\pi \subseteq S'_\pi$. \end{lemma} \begin{proof} First, suppose that $\pi(e_\pi )\neq \min \{\pi(e) \mid e \in S'_\pi\}$. We show that the invariant still holds after the vertex deletion, and thus $S_\pi = \emptyset$. Consider the edge $e_{\min} \in E$ for which $\pi(e_{\min}) = \min \{\pi(e) \mid e \in S'_\pi\}$. Recall that $e_\pi \in S'_\pi$ by construction. Hence if $e_{\min}\neq e_\pi $, then $\pi(e_{\min})<\pi(e_\pi )$, which implies that $e_{\min}$ is not affected by $e_\pi $ in the original permutation $\pi$, and thus $e_{\min}\notin S_\pi$. Now suppose, by way of contradiction, that $e_{\min}\notin M_\pi$. Then $e_{\min}$ has a neighboring edge $e'$ in $M_\pi$ with $\pi(e')<\pi(e_{\min})$, since $M_\pi$ is maximal and was constructed greedily.
Due to the minimality of $\pi(e_{\min})$, we must also have $e'\notin S'_\pi$, in which case $e_{\min}$ would not have been added to $S'_\pi$ at any step in the recursion, contradicting the definition of $e_{\min}$. Thus it follows that $e_{\min} \in M_\pi$. Due to the minimality of $\pi(e_{\min})$, it must be that $e_{\min} \in S'_{\pi,1}$, which implies that $e_{\min}$ intersects with $e_\pi $. But since $\pi(e_{\min})<\pi(e_\pi )$ and $e_{\min}$ intersects with $e_\pi $, then $e_\pi $ was not in $M_\pi$, and hence $S_\pi=\emptyset$. Now suppose that $\pi(e_\pi )=\min \{\pi(e) \mid e \in S'_\pi\}$. We have nothing to show when $S_{\pi,0}=\emptyset$, because then $S_\pi =\emptyset\subseteq S'_\pi$. Suppose $S_{\pi,0}=\{e_\pi \}$. Then for each edge $e\in S'_{\pi,1}$, we have $\pi(e_\pi )<\pi(e)$ and thus $e\in S_{\pi,1}$. Moreover, each edge $e\notin S'_{\pi,1}$ has some neighboring edge $e'\in M_\pi$ such that $\pi(e')<\pi(e)$, and thus $e\notin S_{\pi,1}$. Hence, $S'_{\pi,1}=S_{\pi,1}$, and by induction, we have $S'_\pi=S_\pi$. \end{proof} For a permutation $\tau$ over $E$, we define $S'(\tau)=S'(G,G',\tau,e_\pi )$ as the set corresponding to $S'$ under the order of the edges induced by $\tau$. We denote by $\Pi_F$ the set of all permutations $\tau$ for which it holds that $S'(\tau) = F$. The proofs of Claims 4 and 5 in~\cite{censor2016optimal} can be directly used to show the following claims. Again, the only difference is that we track permutations over edges rather than vertices. Nevertheless, we give the proofs for completeness. \begin{claim}\label{claim:mm:state} Let $F \subseteq E$ be a set of edges, and let $\pi$ and $\sigma$ be two permutations such that $\pi|_F = \sigma|_F$ and $\pi|_{E \setminus F} = \sigma|_{E \setminus F}$. Assume $\pi \in \Pi_F$. We have that $E \setminus F \subseteq E \setminus S'(\sigma)$ and every $e \in E \setminus F$ has the same state, i.e., whether or not $e\in M$, according to $\pi$ and $\sigma$.
\end{claim} \begin{proof} For $e\in E\setminus F$, we show that $e\in E\setminus S'(\sigma)$. Moreover, we prove by induction that the order of the edges in $E\setminus F$ induces the same state under $\pi$ and under $\sigma$. We first consider the base case, where $e\in E\setminus F$ has the smallest order according to $\pi$ and $\sigma$. Suppose, by way of contradiction, that $e$ intersects some edge $e'\in F$. Since $e'\in F$ and $\pi\in \Pi_F$, either before or after the graph update we have $e'\in M_{\pi}$. But $e\notin F$, so $e$ cannot be in $M_{\pi}$, which implies the existence of some edge in $I_{\pi}(e)\cap (E\setminus F)$ that is in $M_{\pi}$. However, this contradicts the minimality of $e$ in $E\setminus F$. Hence, all neighbors of $e$ are in $E\setminus F$. Because $\pi|_{E\setminus F}=\sigma|_{E\setminus F}$, the edge $e$ has smaller rank than all of its neighbors according to $\sigma$. Since $e$ also has smaller rank than all of its neighbors according to $\pi$, the matching $M$ has the same state on the edge $e$ in both $\pi$ and $\sigma$. Thus, $e\notin S'(\sigma)$ because $e\notin S'(\pi)$, which completes the base case. To show the inductive step, consider a fixed edge $e\in E\setminus F$ and suppose the statement holds for all edges $e'\in (E\setminus F)\cap I_{\pi}(e)$. We separate the analysis into cases, depending on whether $e$ is incident to any edges in $F$. If $e$ is not incident to any edges in $F$, then either $e\in M_{\pi}$ or $e\notin M_{\pi}$. First suppose $e\notin M_{\pi}$, so that some edge $z\in I_{\pi}(e)\cap (E\setminus F)$ is in the matching $M_{\pi}$ induced by $\pi$. Since $z\in E\setminus S'(\sigma)$ by the inductive hypothesis, $z$ is also in the matching $M_{\sigma}$ induced by $\sigma$. Because $\pi|_{E\setminus F}=\sigma|_{E\setminus F}$, we have $e\notin M_{\sigma}$ as well, so that $e$ has the same state according to $\pi$ and $\sigma$.
Similarly, if $e\in M_{\pi}$, then $w\notin M_{\pi}$ for any $w\in I_{\pi}(e)$. Moreover, for any $w\in I_{\sigma}(e)$, we have by assumption that, as a neighbor of $e$, $w\notin F$. Thus $I_{\sigma}(e)\subseteq I_{\pi}(e)$, since $\pi|_{E\setminus F}=\sigma|_{E\setminus F}$. By the inductive hypothesis, $w\in E\setminus S'(\sigma)$, so that $w\notin M_{\sigma}$. Hence, $e\in M_{\sigma}$ and $e\in E\setminus S'(\sigma)$, so that its state is the same under $\pi$ and $\sigma$, as desired. On the other hand, if $e$ is incident to some $w\in F$, then either before or after the update we have $w\in M_{\pi}$, since $\pi\in\Pi_F$. But $e\notin F$, so $e\notin M_{\pi}$, and thus there exists $z\in I_{\pi}(e)\setminus F$ with $z\in M_{\pi}$. By the inductive hypothesis, we have $z\in E\setminus S'(\sigma)$ and $z\in M_{\sigma}$. Thus $\pi|_{E\setminus F}=\sigma|_{E\setminus F}$ implies $e\notin M_{\sigma}$ and $e\in E\setminus S'(\sigma)$, which completes the induction. \end{proof} \begin{claim}\label{claim:mm:subset} Let $F \subseteq E$ be a set of edges, and let $\pi$ and $\sigma$ be two permutations such that $\pi|_F = \sigma|_F$ and $\pi|_{E \setminus F} = \sigma|_{E \setminus F}$. Assume $\pi\in \Pi_F$. We have that $F\subseteq S'(\sigma)$. \end{claim} \begin{proof} Note that $e_\pi \in F$, since $\pi\in\Pi_F$. We show by induction on the order of the edges in $F$ according to $\pi$ (with the modification that $e_\pi $ is the first edge in the permutation) that for each edge $e\in F$, we also have $e\in S'(\sigma)$. We first consider $e_\pi $ as the base case. Since $e_\pi \in F$ and $\pi\in\Pi_F$, we have $e_\pi \in S'(\pi)$ and similarly $e_\pi \in S'(\sigma)$. Now for the inductive step, let $e\in F$ be an edge such that the statement holds for all edges in $F$ with smaller rank than $e$, according to $\pi$. Because $e\in F$ and $e\neq e_\pi $, there exists $w\in I_{\pi}(e)\cap F$.
Since $\pi|_F=\sigma|_F$ and $e\in F$, by the inductive hypothesis we have that $w\in S'(\sigma)$, and in particular $I_{\sigma}(e)\cap S'(\sigma)\neq\emptyset$. Now consider any $\phi\in I_{\sigma}(e)$; this set is non-empty since $w\in I_{\pi}(e)$ and $\pi|_F=\sigma|_F$. If $\phi\in F$, then since $e\in F$, we must have from our inductive hypothesis that $\phi\in S'(\sigma)$. If $\phi\notin F$, then we must have $\phi\notin M_{\pi}$ in order for $e$ to be in $F$. By Claim~\ref{claim:mm:state}, we thus have $\phi\in E\setminus S'(\sigma)$ and $\phi\notin M_{\sigma}$. Hence, all neighbors of $e$ in $I_{\sigma}(e)$ are either in $S'(\sigma)$ or not in $M_{\sigma}$. Since $I_{\sigma}(e)\cap S'(\sigma)\neq\emptyset$, we conclude that $e\in S'(\sigma)$. \end{proof} These claims combined imply that if $\pi|_F = \sigma|_F$ and $\pi|_{E\setminus F} = \sigma|_{E\setminus F}$, then $\sigma \in \Pi_F$ if and only if $\pi \in \Pi_F$. The following proof of Lemma~\ref{lem:prob:min} is almost exactly the same as that of Lemma 3 in~\cite{censor2016optimal}, with the focus on edges in a maximal matching rather than vertices in a maximal independent set. \begin{lemma}\label{lem:prob:min} For any set of edges $F \subseteq E$, it holds that \[ \Pr\Bigl[\pi(e_\pi) = \min\{\pi(e) \mid e \in F\} \mid S' = F\Bigr] = \frac{1}{|F|}. \] \end{lemma} \begin{proof} Let $\sigma^+$ be a permutation on $F\setminus\{e_\pi\}$ and $\sigma^-$ be a permutation on $E\setminus F$. Let $p_{\sigma^+,\sigma^-}:=\Pr[\pi(e_\pi)\le\pi(e)\; \forall e\in F\mid\pi|_{F\setminus\{e_\pi\}}=\sigma^+\,\wedge\,\pi|_{E\setminus F}=\sigma^-]$ denote the probability that $e_\pi$ has smaller rank than all edges $e \in F$ under a random permutation $\pi$ that is consistent with $\sigma^+$ and $\sigma^-$.
We first claim that, for any two pairs of permutations $(\sigma^+_1,\sigma^-_1)$ and $(\sigma^+_2,\sigma^-_2)$ on $F\setminus\{e_\pi\}$ and $E\setminus F$, respectively, composing with the permutation ${(\sigma^+_1)}^{-1}\sigma^+_2$ on $F\setminus\{e_\pi\}$ and the permutation ${(\sigma^-_1)}^{-1}\sigma^-_2$ on $E\setminus F$ leaves the property $\pi(e_\pi)\le\pi(e)$ for all $e \in F$ invariant. Thus, $p_{\sigma^+_1,\sigma^-_1}=p_{\sigma^+_2,\sigma^-_2}$. Since $e_\pi\in F$, we have $\Pr[\pi(e_\pi)\le\pi(e)\,\forall e \in F]=\frac{1}{|F|}$ and hence, \begin{align*} \frac{1}{|F|}&=\Pr[\pi(e_\pi)\le\pi(e)\,\forall e\in F]=\sum_{\tau^+,\tau^-}p_{\tau^+,\tau^-}\Pr\left[\pi|_{F\setminus\{e_\pi\}}=\tau^+\,\wedge\pi|_{E\setminus F}=\tau^-\right]\\ &=\sum_{\tau^+,\tau^-}p_{\sigma^+,\sigma^-}\Pr\left[\pi|_{F\setminus\{e_\pi\}}=\tau^+\,\wedge\pi|_{E\setminus F}=\tau^-\right]=p_{\sigma^+,\sigma^-}. \end{align*} By Claim~\ref{claim:mm:state} and Claim~\ref{claim:mm:subset}, for every set $F\subseteq E$ there exists a set of $t$ pairs of permutations $\{(\sigma^+_1,\sigma^-_1),\ldots,(\sigma^+_t,\sigma^-_t)\}$ on $F\setminus\{e_\pi\}$ and $E\setminus F$, respectively, such that $\Pi_F=\{\pi\mid\exists i,\ \pi|_{F\setminus\{e_\pi\}}=\sigma^+_i\wedge\pi|_{E\setminus F}=\sigma^-_i\}$. Thus, \begin{align*} \Pr[\pi(e_\pi)&\le\pi(e)\,\forall e\in F\mid\pi\in\Pi_F]=\sum_{i=1}^t p_{\sigma^+_i,\sigma^-_i}\Pr\left[\pi|_{F\setminus\{e_\pi\}}=\sigma^+_i\wedge \pi|_{E\setminus F}=\sigma^-_i \mid \pi\in\Pi_F\right]\\ &= \frac{1}{|F|}\sum_{i=1}^t\Pr\left[\pi|_{F\setminus\{e_\pi\}}=\sigma^+_i\wedge \pi|_{E\setminus F}=\sigma^-_i \mid \pi\in\Pi_F\right]=\frac{1}{|F|}. \end{align*} In other words, $\Pr[\pi(e_\pi) = \min\{\pi(e) \mid e \in F\} \mid S' = F] =\frac{1}{|F|}$, since $\Pi_F$ is the set of all permutations $\tau$ for which it holds that $S'(\tau) = F$. \end{proof} The proof of Lemma~\ref{lem:expected:change} follows from Lemma~\ref{lem:possible:changes} and Lemma~\ref{lem:prob:min}. \end{document}
\begin{document} \title{Multivariate Nonparametric Estimation of the Pickands Dependence Function using Bernstein Polynomials} \author{ G. Marcon, S. A. Padoan, P. Naveau, P. Muliere and J. Segers \footnote{Marcon is a post-doc at the University of Pavia, Italy. E-mail: [email protected]. Muliere and Padoan work at the Department of Decision Sciences, Bocconi University of Milan, via Roentgen 1, 20136 Milano, Italy. E-mail: [email protected], [email protected]. Naveau is a CNRS researcher at the Laboratoire des Sciences du Climat et l'Environnement, Gif-sur-Yvette, France. E-mail: [email protected]. Segers is a Professor at the Universit\'{e} catholique de Louvain, Institut de statistique, biostatistique et sciences actuarielles, Voie du Roman Pays 20, B-1348 Louvain-la-Neuve, Belgium. E-mail: [email protected].} } \maketitle \begin{abstract} Many applications in risk analysis, especially in environmental sciences, require the estimation of the dependence among multivariate maxima. A way to do this is by inferring the Pickands dependence function of the underlying extreme-value copula. A nonparametric estimator is constructed as the sample equivalent of a multivariate extension of the madogram. Shape constraints on the family of Pickands dependence functions are taken into account by means of a representation in terms of a specific type of Bernstein polynomials. The large-sample theory of the estimator is developed and its finite-sample performance is evaluated with a simulation study. The approach is illustrated by analyzing clusters consisting of seven weather stations that have recorded weekly maxima of hourly rainfall in France from 1993 to 2011. \\ \noindent Keywords: Bernstein polynomials, Extremal dependence, Extreme-value copula, Heavy rainfall, Nonparametric estimation, Multivariate max-stable distribution, Pickands dependence function.
\end{abstract} \section{Introduction and background}\label{sec:intro} In recent years, inference methods for assessing the extremal dependence have been in increasing demand. This is especially due to the growing need for multivariate analyses of extreme values in the fields of environmental and economic sciences. The dimension of the random vector under study is often greater than two. For example, Figure \ref{fig:map} displays a map of clusters, each containing seven weather stations in France; see \citeN{bernard13} for details on the construction of the clusters. \begin{figure} \caption{ Analysis of French weekly precipitation maxima in the period 1993--2011. Clusters of 49 weather stations and their estimated extremal coefficients in dimension $d=7$ obtained with the projected version of the madogram estimator, see Section \ref{sec:data} for details. } \label{fig:map} \end{figure} The data consist of weekly maxima of hourly rainfall recorded at each station\footnote{Data provided by M\'et\'eo--France and published within the \textsf{R} package \textsf{ClusterMax}, freely available from the homepage of Philippe Naveau, \url{http://www.lsce.ipsl.fr/Pisp/philippe.naveau/}.}. It would be of interest to hydrologists to infer the dependence within each of the seven-dimensional vectors of component-wise maxima and to compare the dependence structures among clusters. Such an endeavor represents the main motivation of this work. Let ${\boldsymbol{X}}=(X_1,\ldots,X_d)$ be a $d$-dimensional random vector of maxima that follows a multivariate max-stable distribution $G$; for more background on univariate and multivariate extreme-value theory, see for instance \shortciteN{beirlant+g+s+t04}, \citeN{dehaan+f06}, or \citeN{falk+h+r10}. The margins of $G$, denoted by $F_i(x)={\mathbb P}\{X_i\leq x\}$, for all $x\in {\mathbb R}$ and $i=1,\ldots,d$, are univariate max-stable distributions.
The joint distribution takes the form \begin{equation} \label{eq:G(x)} G({\boldsymbol{x}}) = C \bigl( F_1(x_1), \ldots, F_d(x_d) \bigr), \qquad {\boldsymbol{x}} \in {\mathbb R}^d, \end{equation} where $C$ is an extreme-value copula: \begin{equation} \label{eq:ev_copula} C(u_1, \ldots, u_d) = \exp \bigl( - \ell( - \log u_1, \ldots, - \log u_d) \bigr), \qquad {\boldsymbol{u}} \in (0, 1]^d, \end{equation} with $\ell : [0, \infty)^d \to [0, \infty)$ the so-called stable tail dependence function. The latter function is homogeneous of order one and is therefore determined by its restriction on the unit simplex, the restriction itself being called the Pickands dependence function, denoted here by $A$. Formally, we have \begin{equation} \label{eq:ellA} \ell( {\boldsymbol{z}} ) = (z_1 + \cdots + z_d) \, A( {\boldsymbol{w}} ), \qquad {\boldsymbol{z}} \in [0, \infty)^d, \end{equation} where $w_i = z_i / (z_1 + \cdots + z_d)$ for $i = 1, \ldots, d-1$ and $w_d = 1 - w_1 - \cdots - w_{d-1}$. We view $A$ as a function defined on the $(d-1)$-dimensional unit simplex \begin{equation} \label{eq:simplex} \mathcal{S}_{{d-1}} := \left\{ (w_1,\ldots, w_{ d-1}) \in [0,1]^{ d-1}: \sum_{i=1}^{ d-1} w_i \leq 1 \right\}. \end{equation} \iffalse \begin{equation} \label{eq:G(x)} G({\boldsymbol{x}}) = \exp\{-V({\boldsymbol{x}})\},\qquad V({\boldsymbol{x}}) = \left( \frac{1}{x_1}+ \ldots + \frac{1}{x_d} \right) A(\textbf{w}), \end{equation} where ${\boldsymbol{x}}=(x_1,\ldots,x_d)$ and ${\boldsymbol{w}}=(w_1,\ldots,w_{d-1})$ with $w_i = x_i/(x_1 + \cdots + x_d)$ for $i=1,\dots,d$. Specifically, the function $V:(0,\infty)^d\rightarrow [0,\infty)$, named the exponent (dependence) function is continuous, convex, homogeneous of order $-1$, i.e. 
$V(a{\boldsymbol{x}})=a^{-1}V({\boldsymbol{x}})$ for any $a>0$, bounded by $\max(x_1,\ldots,x_d)\leq V({\boldsymbol{x}}) \leq x_1+\cdots +x_d$, and for all positive $x$ it must satisfy $V(x,0,\ldots,0)=\cdots=V(0,\ldots,0,x)=x$ (for more details see \citeANP{falk+h+r10}, \citeyearNP{falk+h+r10}, Ch. 4; \citeANP{dehaan+f06}, \citeyearNP{dehaan+f06}, Ch. 6). Distribution \eqref{eq:G(x)} with uniform margins can be written as $G({\boldsymbol{x}})=C\{F_1(x_1),\ldots,F_d(x_d)\}$ where $C$ is the extreme-value copula \begin{equation}\label{eq:ev_copula} C({\boldsymbol{u}})=\exp[-V\{-(\log u_1)^{-1},\ldots,-(\log u_d)^{-1}\}],\qquad {\boldsymbol{u}}\in [0,1)^d. \end{equation} The homogeneity property of $V$ means that it can be rewritten through the Pickands dependence function $A$ \cite{pickands81}. Simplifying, the Pickands dependence function can be seen as a function defined on the space \begin{equation} \label{eq:simplex} \mathcal{S}_{{d-1}} := \left\{ (w_1,\ldots, w_{ d-1}) \in [0,1]^{ d-1}: \sum_{i=1}^{ d-1} w_i \leq 1 \right\}, \end{equation} as we can always define a component through the others, e.g. $w_d=1-w_1-\ldots-w_{d-1}$. The Pickands dependence function inherits the properties of the exponent function. 
\fi Let $\mathcal{A}$ be the family of functions $A: \mathcal{S}_{{d-1}} \rightarrow [1/d,1]$ that satisfy the following conditions: \begin{enumerate} \item[(C1)] $A({\boldsymbol{w}})$ is convex, i.e., $A(a{\boldsymbol{w}}_1+(1-a){\boldsymbol{w}}_2)\leq aA({\boldsymbol{w}}_1)+(1-a)A({\boldsymbol{w}}_2)$, for $a\in[0,1]$ and ${\boldsymbol{w}}_1,{\boldsymbol{w}}_2\in \mathcal{S}_{{d-1}}$; \item[(C2)] $A({\boldsymbol{w}})$ has lower and upper bounds $$ 1/d\leq \max\left(w_1,\ldots,w_{ d-1},w_d \right) \leq A({\boldsymbol{w}}) \leq 1, $$ for any ${\boldsymbol{w}} = (w_1, \ldots, w_{ d-1}) \in \mathcal{S}_{{d-1}}$ with $w_d=1-w_1-\ldots-w_{d-1}$. \end{enumerate} Any Pickands dependence function belongs to the class $\mathcal{A}$ (\citeANP{falk+h+r10}, \citeyearNP{falk+h+r10}, Ch.\ 4). The converse is not true, however; see \shortciteANP{beirlant+g+s+t04} (\citeyearNP{beirlant+g+s+t04}, p.\ 257) for a counterexample. A characterization of the class of stable tail dependence functions has been given in \citeN{ressel2013}. In condition (C2), the lower and upper bounds represent the cases of complete dependence and independence, respectively. Many parametric models have been introduced for modelling the extremal dependence for a variety of applications, with summaries to be found in \citeN{kotz2000} and \citeN{padoan13}. However, such finite-dimensional parametric models can never cover the full class of Pickands dependence functions. For this reason, several nonparametric estimators of the Pickands dependence function have been proposed: see for instance \citeN{pickands81}, \shortciteN{cap+f+g97}, \citeN{hall+t00}, \shortciteN{zhang+w+p08}, \citeN{genest2009rank}, \shortciteN{bucher2011}, \citeANP{gudend+s11} (\citeyearNP{gudend+s11}, \citeyearNP{gudend+s12}), and \shortciteN{berghaus2013}. All of these estimators require further adjustments to ensure they are genuine Pickands dependence functions.
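As a concrete illustration (the example is ours; the symmetric logistic model is a standard parametric family appearing in the surveys cited above), the function
\[
A_\theta({\boldsymbol{w}}) = \Bigl(\sum_{i=1}^{d} w_i^{1/\theta}\Bigr)^{\theta},
\qquad \theta\in(0,1],\quad w_d=1-w_1-\cdots-w_{d-1},
\]
satisfies (C1)--(C2) for every $\theta$: the case $\theta=1$ gives $A\equiv 1$ (independence), while the limit $\theta\to 0$ gives $A({\boldsymbol{w}})=\max(w_1,\ldots,w_d)$ (complete dependence).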
Given an independent random sample from a multivariate distribution with continuous margins and whose copula is an extreme-value copula, we propose a nonparametric estimator for its Pickands dependence function. In the bivariate case, a fast-to-compute and easy-to-interpret estimator based on a type of madogram was introduced by \shortciteN{naveau+g+c+d}. It has two drawbacks, however: it was only defined for the bivariate case and it is not necessarily a Pickands dependence function itself. Our first contribution is to propose a new type of madogram in the multivariate setting, see also \shortciteN{Fonseca13}. A second contribution is to regularize the estimator by projecting it onto the space $\mathcal{A}$, imposing the necessary constraints (C1)--(C2). To do so, we make use of Bernstein polynomials. We admit that the resulting estimator still need not be a Pickands dependence function. Still, simulation results show that imposing (C1)--(C2) already greatly improves the estimation accuracy. Many regularization strategies have already been considered in the literature. In the bivariate case, \citeN{pickands81} suggested the use of the greatest convex minorant. \shortciteN{smith+t+y90} proposed to modify a pilot estimator using kernel methods, while \citeN{hall+t00} advocated constrained smoothing splines. However, as discussed in \shortciteN{fil+g+s08}, the impact of these adjustments on the asymptotic properties of the estimator changes from one case to another, while a general result is unknown. The projection estimator approach developed in \shortciteN{fil+g+s08} and \citeN{gudend+s12} provides a general framework based on projections of a pilot estimate onto an increasing sequence of finite-dimensional subsets $\mathcal{A}_k\subseteq \mathcal{A}$. The approximation space they proposed consists of piecewise linear functions, yielding computational challenges in higher dimensions.
To bypass these computational hurdles, our strategy is to replace piecewise linear functions by Bernstein polynomials (\citeANP{lorentz53}, \citeyearNP{lorentz53}; \citeANP{sauer91}, \citeyearNP{sauer91}). By virtue of their optimal shape restriction properties (\citeANP{carnicer+p93}, \citeyearNP{carnicer+p93}), Bernstein polynomials are suitable for nonparametric curve estimation (e.g. \citeNP{petrone99}; \shortciteNP{chang+h+w+y05}) and shape-preserving regression \cite{wang+g12}. We provide the asymptotic theory for our estimator and we demonstrate its practical use in dimension seven, which seems to be higher than what has been possible hitherto with nonparametric methods. The estimation uncertainty can be assessed through a resampling procedure. Throughout the paper we use the following notation. Given $n\in {\mathbb N}$ and $\mathcal{X}\subset {\mathbb R}^n$, let $\ell^{\infty}(\mathcal{X})$ denote the space of bounded real-valued functions on $\mathcal{X}$. For $f:\mathcal{X} \rightarrow {\mathbb R}$, let $\|f\|_{\infty}=\sup_{{\boldsymbol{x}} \in \mathcal{X}} |f({\boldsymbol{x}})|$. The arrows ``$\stackrel{\mathrm{a.s.}}{\longrightarrow}$'', ``$\Rightarrow$'', and ``$\leadsto$'' denote almost sure convergence, convergence in distribution of random vectors (see \citeNP{vaart98}, Ch.\ 2) and weak convergence of functions in $\ell^{\infty}(\mathcal{X})$ (see \citeNP{vaart98}, Ch.\ 18--19), respectively. Let $L^2(\mathcal{X})$ denote the Hilbert space of square-integrable functions $f : \mathcal{X} \to {\mathbb R}$, with $\mathcal{X}$ equipped with $n$-dimensional Lebesgue measure; the $L^2$-norm is denoted by $\|f\|_2=(\int_{\mathcal{X}} f^2({\boldsymbol{x}}) \, \mathrm{d}{\boldsymbol{x}})^{1/2}$. For analytical reasons, we view the unit simplex $\mathcal{S}_{{d-1}}$ as a subset of $\mathbb{R}^{d-1}$, see \eqref{eq:simplex}, although geometrically, it is perhaps more natural to consider it as a subset of $\mathbb{R}^d$.
A similar convention applies to our use of the multi-index ${\boldsymbol\alpha}$ in Section~\ref{sec:bern}. The paper is organised as follows. In Section~\ref{sec:nonparest}, we introduce our multivariate nonparametric madogram estimator and we discuss its properties. In Section~\ref{sec:bern}, we describe the projection method based on the Bernstein polynomials. In Section~\ref{sec:num}, we investigate the finite-sample performance of our estimation method by means of Monte Carlo simulations. Finally, we apply our approach to French weekly maxima of hourly rainfall in Section~\ref{sec:data}. All proofs are deferred to the appendices. \section{Madogram estimator} \label{sec:nonparest} Let ${\boldsymbol{X}}$ be a random vector with continuous marginal distribution functions $F_1, \ldots, F_d$ and whose copula $C$ is an extreme-value copula with stable tail dependence function $\ell$ and Pickands dependence function $A$; see above. \iffalse Let ${\boldsymbol{X}}_m$, $m=1,\ldots,n$, be independent and identically distributed (i.i.d.) replicates of ${\boldsymbol{X}}$. For comparison purposes, we briefly discuss some well known estimators; see \citeN{gudend+s11} for a review. Assume for the moment that the marginal distributions $F_1, \ldots, F_d$ are known; later on, we will estimate them by the empirical distribution functions. The random variables $Y_{m,i}=-\log F_i(X_{m,i})$ are unit exponentially distributed, for $m=1,\ldots,n$ and $i=1,\ldots,d$. Define for each $m=1,\ldots,n$, \begin{equation}\label{eq:stat} \tilde{Y}_m({\boldsymbol{w}})=\bigwedge_{i=1, \dots,d}\left(\frac{Y_{m,i}}{w_i}\right), \qquad {\boldsymbol{w}}\in\mathcal{S}_{{d-1}}, \end{equation} The multivariate Pickands statistic (\citeNP{pickands81}) is defined by \begin{equation}\label{eq:P} A_n^{P}({\boldsymbol{w}})=n \Big/ \sum_{m=1}^n\tilde{Y}_m({\boldsymbol{w}}). 
\end{equation} Closely related to \eqref{eq:P} is the multivariate version of the estimator described in \citeN{hall+t00}: \begin{equation}\label{eq:HT} A_n^{HT}({\boldsymbol{w}})=A_n^{P}({\boldsymbol{w}})/A_n^{P}({\boldsymbol{e}}_i), \end{equation} where ${\boldsymbol{e}}_i=(0,\ldots,0,1,0,\ldots,0)$ for $i=1,\dots,d-1$. In the bivariate case the function $A_n^{HT}$ satisfies condition $\text{C2}$, but not $\text{C1}$. In the multivariate case also $\text{C2}$ has not been shown. Lastly, the multivariate version of the statistic proposed by \citeN{cap+f+g97}, referred to as CFG for brevity, is \begin{equation}\label{eq:CFG} A_n^{CFG}({\boldsymbol{w}})=\exp \left( -\frac 1 n \sum_{m=1}^n\log \tilde{Y}_m({\boldsymbol{w}})-\gamma \right), \end{equation} where $\gamma=0.5772\ldots$ is the Euler--Mascheroni constant. Also the function $A_n^{CFG}$ does not satisfy the conditions (C1)--(C2). Asymptotic properties of these statistics have been derived by \citeN{gudend+s11}. Replacing in \eqref{eq:stat} the marginal distributions $F_1,\ldots,F_d$ by the empirical distribution functions \begin{equation}\label{eq:empirical} F_{n,i}(x)=\frac{1}{n}\sum_{m=1}^n \mathds{1}(X_{m,i}\leq x),\qquad i =1,\ldots,d, \end{equation} where $\mathds{1}(E)$ is the indicator function of the event $E$, then \eqref{eq:P}--\eqref{eq:CFG} provide the definitions of the P, HT and CFG estimators that we denote by $\widehat{A}_n^{P}$, $\widehat{A}_n^{HT}$ and $\widehat{A}_n^{CFG}$ respectively. For these estimators, the weak convergence result has been shown by \citeN{gudend+s12}. \fi Our estimator is based on the sample version of the multivariate madogram, extending \shortciteN{naveau+g+c+d}, see also \shortciteN{Fonseca13}. 
\begin{defi}\label{def:multimado} For ${\boldsymbol{w}} \in \mathcal{S}_{{d-1}}$, the multivariate ${\boldsymbol{w}}$-madogram, denoted by $\nu({\boldsymbol{w}})$, is defined as the expected distance between the componentwise maximum and the componentwise mean of the variables $F^{1/w_1}_{1}(X_{1}), \dots, F^{1/w_d}_{d}(X_{d})$, that is, \begin{equation}\label{eq:multimd} \nu({\boldsymbol{w}}) = {\mathbb E} \left[ \bigvee_{i=1}^d\left \lbrace F^{1/w_i}_{i}\left(X_{i}\right) \right\rbrace - \frac{1}{d}\sum_{i=1}^dF^{1/w_i}_{i}\left(X_{i}\right) \right]. \end{equation} For $w_i = 0$ and $0 < u < 1$, we put $u^{1/w_i} = 0$ by convention. \end{defi} \begin{prop}\label{prop:multimado} If the random vector ${\boldsymbol{X}}$ has continuous margins and extreme-value copula with Pickands dependence function $A$, then, for all ${\boldsymbol{w}} \in \mathcal{S}_{{d-1}}$, \begin{eqnarray} \nonumber \nu({\boldsymbol{w}}) &=& \frac{A({\boldsymbol{w}})}{1 + A({\boldsymbol{w}})} - c({\boldsymbol{w}}), \\ \label{eq:Amd} A({\boldsymbol{w}}) &=& \frac{\nu({\boldsymbol{w}}) + c({\boldsymbol{w}})}{1 - \nu({\boldsymbol{w}}) - c({\boldsymbol{w}})}, \end{eqnarray} where $c({\boldsymbol{w}}) = d^{-1} \sum_{i=1}^d w_i / (1 + w_i)$. \end{prop} The madogram can be interpreted as the $L_1$ distance between the maximum and the average of the random variables $F_1^{1/w_1}(X_1), \ldots, F_d^{1/w_d}(X_d)$. If $w_1 = \ldots = w_d = 1/d$, then the $L_1$ distance is zero if and only if all components $F_i(X_i)$ are equal with probability one, that is, in case of complete dependence. \iffalse \begin{rem}\label{rem:l1dist} One advantage of the madogram as defined in \eqref{eq:multimd} is that it can be interpreted as an L1-distance. Specifically, let $\rho(u,v)={\mathbb E} |u-v|$, where $$ u=\bigvee_{i=1}^d\left \lbrace F^{1/w_i}_{i}\left(X_{i}\right) \right\rbrace,\qquad v=\frac 1 d \sum_{i=1,\dots,d}F^{1/w_i}_{i}\left(X_{i}\right). 
$$ Then, $\rho(u,v)\geq0$ measures the distance between the couple $(u,v)$, that is the maximum and the mean of the elements $\{F^{1/w_1}_{1}(X_{1}), \dots, F^{1/w_d}_{d}(X_{d})\}$. As a consequence, if all components of ${\boldsymbol{X}}$ are equal (in probability), then the distance is null and the converse is also true. In other words, \eqref{eq:multimd} tells us how far away ${\boldsymbol{X}}$ is from the complete dependence case. \end{rem} \fi In the bivariate case, Definition~\ref{def:multimado} is slightly different from the one proposed by \shortciteN{naveau+g+c+d}. Here, we use the vector $\big(F^{1/w_1}_1\left(X_1\right),F^{1/w_2}_2\left(X_2\right)\big)$ instead of $\big(F^{w_1}_1(X_1),F^{w_2}_2(X_2)\big)$. This new version has the advantage that the sample equivalent of \eqref{eq:Amd} will automatically satisfy condition~(C2). Assume first that the marginal distributions $F_1, \ldots, F_d$ are known; below, we will estimate them by the empirical distribution functions. Equation~(\ref{eq:multimd}) suggests the statistic \begin{equation}\label{eq:empmado} \nu_n({\boldsymbol{w}})=\frac{1}{n}\sum_{m=1}^n \left( \bigvee_{i=1}^d\left\lbrace F^{1/w_i}_{i}\left(X_{m,i}\right)\right\rbrace - \frac{1}{d}\sum_{i=1}^d F^{1/w_i}_{i}\left(X_{m,i}\right) \right). \end{equation} The Pickands dependence function can then be estimated through \begin{equation}\label{eq:mado} A_n^{\mathrm{MD}}({\boldsymbol{w}})=\frac{\nu_n({\boldsymbol{w}})+c({\boldsymbol{w}})}{1-\nu_n({\boldsymbol{w}})-c({\boldsymbol{w}})}, \qquad {\boldsymbol{w}}\in \mathcal{S}_{{d-1}}. \end{equation} Next, we estimate the unknown marginal distributions $F_1,\ldots,F_d$ by the empirical distribution functions \begin{equation}\label{eq:empirical} F_{n,i}(x)=\frac{1}{n}\sum_{m=1}^n \mathds{1}(X_{m,i}\leq x),\qquad i =1,\ldots,d, \end{equation} where $\mathds{1}(E)$ is the indicator function of the event $E$. 
Replacing $F_i$ by $F_{n,i}$ in Equation~\eqref{eq:empmado} yields our nonparametric estimators $\widehat{\nu}_n$ and $\widehat{A}_n^{\mathrm{MD}}$ of the multivariate madogram and of the Pickands dependence function, respectively: \begin{align*} \widehat{\nu}_n({\boldsymbol{w}}) &= \frac{1}{n}\sum_{m=1}^n \left( \bigvee_{i=1}^d \left\lbrace F_{n,i}^{1/w_i}(X_{m,i}) \right\rbrace - \frac{1}{d}\sum_{i=1}^d F_{n,i}^{1/w_i}(X_{m,i}) \right), \\ \widehat{A}_n^{\mathrm{MD}}({\boldsymbol{w}}) &= \frac{\widehat{\nu}_n({\boldsymbol{w}}) + c({\boldsymbol{w}})}{1 - \widehat{\nu}_n({\boldsymbol{w}}) - c({\boldsymbol{w}})}. \end{align*} Other estimators of the margins could be inserted as well. However, the use of the empirical distribution functions requires minimal assumptions and yields an estimator for $A$ which is invariant under monotone transformations. The next theorem summarizes the asymptotic properties related to $A_n^{\mathrm{MD}}$ and $\widehat{A}_n^{\mathrm{MD}}$. The asymptotic normality requires a smoothness condition on the extreme-value copula $C$, see Example~5.3 in \citeN{seger12}. \begin{cond}\label{cond:smooth} For every $i\in\{1,\ldots,d\}$, the partial derivative of $C$ with respect to $u_i$ exists and is continuous on the set $\{{\boldsymbol{u}}\in[0,1]^d: 0< u_i<1\}$. \end{cond} Let $\mathbb{D}$ be a $C$-Brownian bridge, that is, a zero-mean Gaussian process on $[0,1]^d$ with continuous sample paths and with covariance function given by \begin{equation}\label{eq:covariance} \operatorname{Cov}(\mathbb{D}({\boldsymbol{u}}),\mathbb{D}({\boldsymbol{v}}))=C({\boldsymbol{u}}\wedge{\boldsymbol{v}})-C({\boldsymbol{u}}) \, C({\boldsymbol{v}}),\qquad {\boldsymbol{u}},{\boldsymbol{v}}\in[0,1]^d, \end{equation} where the minimum is considered componentwise. 
Further, provided Condition~\ref{cond:smooth} is satisfied, define the Gaussian process $\widehat{\mathbb{D}}$ on $[0, 1]^d$ by \begin{equation}\label{eq:cop_proc} \widehat{\mathbb{D}}({\boldsymbol{u}})=\mathbb{D}({\boldsymbol{u}})-\sum_{i=1}^d \frac{\partial C}{\partial u_i}({\boldsymbol{u}}) \, \mathbb{D}(1,\ldots,1,u_i,1,\ldots,1),\quad {\boldsymbol{u}}\in[0,1]^d. \end{equation} \begin{theo}\label{prop:prop_multimado} Let ${\boldsymbol{X}}_1, \ldots, {\boldsymbol{X}}_n$ be independent and identically distributed random vectors whose common distribution has continuous margins and extreme-value copula $C$ with Pickands dependence function $A$. Then: \begin{itemize} \item[a)] $ \norm{A_n^{\mathrm{MD}} - A }_\infty \stackrel{\mathrm{a.s.}}{\longrightarrow} 0$ as $n \to \infty $ and in $\ell^{\infty}(\mathcal{S}_{{d-1}})$, as $n \to \infty$, \begin{multline*} \sqrt{n}(A_n^{\mathrm{MD}}-A)\leadsto \\ \left((1+A({\boldsymbol{w}}))^2\frac{1}{d}\sum_{i=1}^d \int_0^1 \bigl( \mathbb{D}(1,\ldots,1,x^{w_i},1,\ldots,1)-\mathbb{D}(x^{w_1},\ldots,x^{w_d}) \bigr) \, \mathrm{d} x\right)_{{\boldsymbol{w}} \in \mathcal{S}_{{d-1}}}; \end{multline*} \item[b)] $ \norm{\widehat{A}_n^{\mathrm{MD}} - A }_\infty \stackrel{\mathrm{a.s.}}{\longrightarrow} 0$ as $n \to \infty. $ Moreover, if Condition~\ref{cond:smooth} is satisfied, then, in $\ell^{\infty}(\mathcal{S}_{{d-1}})$, as $n \to \infty$, \begin{equation*}\label{eq:wc_pick} \sqrt{n}(\widehat{A}_n^{\mathrm{MD}}-A)\leadsto \left(-(1+A({\boldsymbol{w}}))^2\int_0^1 \widehat{\mathbb{D}}(x^{w_1},\ldots,x^{w_d}) \, \mathrm{d} x\right)_{{\boldsymbol{w}} \in \mathcal{S}_{{d-1}}}. \end{equation*} \end{itemize} \end{theo} The two conditions (C1)--(C2) are not necessarily satisfied by $\widehat{A}_n^{\mathrm{MD}}$. To ensure both conditions, we propose a projection method based on Bernstein polynomials. 
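Before turning to the projection step, we note that the raw estimator $\widehat{A}_n^{\mathrm{MD}}$ is simple to compute from ranks alone. The following Python sketch (function and variable names are ours and not part of the formal development) implements \eqref{eq:Amd} with the empirical margins \eqref{eq:empirical}:

```python
import numpy as np

def madogram_estimator(X, w):
    """Rank-based sketch of the madogram estimator of the Pickands dependence
    function A at a point w of the unit simplex (w >= 0, sum(w) == 1).
    X is an (n, d) data matrix, one row per observation."""
    n, d = X.shape
    w = np.asarray(w, dtype=float)
    assert w.size == d and abs(w.sum() - 1.0) < 1e-9
    # empirical margins: F_{n,i}(X_{m,i}) = rank/n, with ranks in {1, ..., n}
    F = (np.argsort(np.argsort(X, axis=0), axis=0) + 1) / n
    # F^{1/w_i}, with the convention u^{1/w_i} = 0 when w_i = 0
    U = np.zeros_like(F)
    pos = w > 0
    U[:, pos] = F[:, pos] ** (1.0 / w[pos])
    # sample madogram nu_hat and the correction term c(w)
    nu = np.mean(U.max(axis=1) - U.mean(axis=1))
    c = np.mean(w / (1.0 + w))
    return (nu + c) / (1.0 - nu - c)
```

Being rank-based, the value is invariant under strictly increasing transformations of the margins; for completely dependent columns it returns exactly $1/d$ at the barycentre, that is, the complete-dependence value $\max_i w_i$.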
\section{Estimation based on Bernstein polynomials} \label{sec:bern} \subsection{Bernstein polynomials on the simplex} \label{subsec:bern} Multivariate Bernstein polynomials, defined on a cube or on a simplex, have been widely discussed in mathematics and statistics, see for example \citeN{Ditzian86} and \citeN{petrone04}. Here our focus is on approximating a bounded function $f$ on the simplex $\mathcal{S}_{{d-1}}$. In the univariate case, the shape features of the original function are preserved by its Bernstein approximation. For higher dimensions, shape properties like convexity may no longer be retained. The Bernstein--B\'{e}zier polynomials (\citeNP{sauer91}) solve this issue and preserve various shape properties (\citeNP{li11}, \citeNP{lai93}). Fix the dimension $d \ge 2$. For positive integer $k$, let $\Gamma_k$ be the set of multi-indices ${\boldsymbol\alpha} = (\alpha_1, \ldots, \alpha_{d-1}) \in \{0, 1, \ldots, k\}^{d-1}$ such that $\alpha_1 + \cdots + \alpha_{d-1} \le k$. The cardinality of $\Gamma_k$ is equal to the number of multi-indices ${\boldsymbol\alpha} \in \{0, 1, \ldots, k\}^d$ such that $\alpha_1 + \cdots + \alpha_d = k$; just set $\alpha_d = k - \alpha_1 - \cdots - \alpha_{d-1}$. Replacing each $\alpha_j$ by $\alpha_j + 1$, we find that the number of such multi-indices is also equal to the number of compositions of the integer $k+d$ into $d$ positive integer parts. The number of such compositions is equal to \begin{equation} \label{eq:p} p_k = \binom{k+d-1}{d-1}, \end{equation} and so is the cardinality of $\Gamma_k$. Define the Bernstein basis polynomial $b_{{\boldsymbol\alpha}}(\,\cdot\,;k)$ on $\mathcal{S}_{{d-1}}$ of degree $k$ by \begin{eqnarray} \label{eq:bp} b_{{\boldsymbol\alpha}}( {\boldsymbol{w}}; k) = \binom{k}{{\boldsymbol\alpha}}{\boldsymbol{w}}^{{\boldsymbol\alpha}}, \qquad {\boldsymbol{w}}\in \mathcal{S}_{{d-1}} \end{eqnarray} where $$ \binom{k}{{\boldsymbol\alpha}} = \frac{k!}{\alpha_{1}! 
\ldots \alpha_{d}!},\qquad {\boldsymbol{w}}^{{\boldsymbol\alpha}} = w_1^{\alpha_1} \cdots w_d^{\alpha_d}. $$ The $k$-th degree Bernstein polynomial associated to $A$ is defined as \begin{equation}\label{eq:polyrap} B_A( {\boldsymbol{w}};k) = \sum_{{\boldsymbol\alpha} \in \Gamma_k} A( {\boldsymbol\alpha}/k) b_{{\boldsymbol\alpha}}( {\boldsymbol{w}}; k), \qquad {\boldsymbol{w}} \in \mathcal{S}_{{d-1}}. \end{equation} \begin{prop} \label{prop:conv_bapp} For every $A \in \mathcal{A}$ and every $k=1,2,\ldots$, $$ \sup_{{\boldsymbol{w}} \in \mathcal{S}_{{d-1}}} \abs{ B_A({\boldsymbol{w}};k) - A({\boldsymbol{w}}) } \leq \frac{d}{2\sqrt{k}}. $$ \end{prop} The family of Bernstein--B\'ezier polynomials of degree $k$ is defined as the set $$ \mathcal{B}_k = \left\{ \sum_{{\boldsymbol\alpha} \in \Gamma_k} \beta_{{\boldsymbol\alpha}} \, b_{{\boldsymbol\alpha}}( \, \cdot \, ; k ) : {\boldsymbol\beta} \in [0,1]^{p_k} \right\}. $$ For ${\boldsymbol{w}} \in \mathcal{S}_{{d-1}}$, let ${\boldsymbol{b}}_k( {\boldsymbol{w}} )$ be the row vector $( b_{{\boldsymbol\alpha}}( {\boldsymbol{w}}; k ), {\boldsymbol\alpha} \in \{0, 1, \ldots, k\}^d: \alpha_1 + \cdots + \alpha_d = k)$. In matrix notation, we have $\sum_{{\boldsymbol\alpha} \in \Gamma_k} \beta_{{\boldsymbol\alpha}} \, b_{{\boldsymbol\alpha}}( {\boldsymbol{w}} ; k ) = {\boldsymbol{b}}_k( {\boldsymbol{w}} ) \, {\boldsymbol\beta}$, where ${\boldsymbol\beta}$ is viewed as a column vector. \iffalse Let $k \in {\mathbb N}$ be the order of the Bernstein--B\'{e}zier polynomial and let $J=\{0,\ldots,k\}$ be the index set. Given a vector ${\boldsymbol{x}}$ of dimension $d$, we write $\Sigma_{{\boldsymbol{x}}}:=\sum_{i=1}^{d}x_i$. Let ${\boldsymbol\alpha}_l=\left(j_i\right)_{i=1,\dots, d-1}$, where each $j_i\in J$, be an index of $d-1$ digits. Thus, for a given index ${\boldsymbol\alpha}_l$, $\Sigma_{{\boldsymbol\alpha}_l}$ defines the sum of its digits. 
For any $k$ and $d-1$, each ${\boldsymbol\alpha}_l$ is an ordered selection with repetition of elements in $J$. Let ${\boldsymbol\alpha}=\{{\boldsymbol\alpha}_l \}_{l \in L}$ denote the set of all such vectors, with index set $L=\{1,\ldots,D_{k,d-1}\}$ of cardinality $D_{k,d-1}=(k+1)^{d-1}$. Let $L_k=\{l \in L: \Sigma_{{\boldsymbol\alpha}_l}\leq k\} \subset L$ be the subset of indices the sum of whose elements is less than or equal to $k$. For $l \in L_k$, we define the $l^{th}$ Bernstein basis polynomial of degree $k$ as the following continuous function with values in $[0,1]$, $$ b_l({\boldsymbol{w}};k):= \binom{k}{{\boldsymbol\alpha}_l} \;{\boldsymbol{w}}^{{\boldsymbol\alpha}_l} \;(1-\Sigma_{{\boldsymbol{w}}})^{k-\Sigma_{{\boldsymbol\alpha}_l}},\qquad {\boldsymbol{w}} \in \mathcal{S}_{{d-1}} $$ where $$ \binom{k}{{\boldsymbol\alpha}_l} = \frac{k!}{{\boldsymbol\alpha}_l! (k-\Sigma_{{\boldsymbol\alpha}_l})!}, \qquad {\boldsymbol\alpha}_l! = j_1! \cdots j_{d-1}\,!, \qquad {\boldsymbol{w}}^{{\boldsymbol\alpha}_l} = \prod_{i=1}^{d-1} w_i^{j_i}. $$ Therefore, the Bernstein--B\'{e}zier polynomial representation of the function $f$ is given by \begin{equation} \label{eq:bp} B_f({\boldsymbol{w}};k) = \sum_{l \in L_k} \beta_l \, b_l ({\boldsymbol{w}};k), \qquad {\boldsymbol{w}} \in \mathcal{S}_{{d-1}}, \end{equation} where $\beta_l \in {\mathbb R}$, $l\in L_k$, are $f$-dependent coefficients. The number of coefficients involved in \eqref{eq:bp} is denoted by $p$ and it is determined as follows. \begin{prop} \label{prop:p} Given $d-1\in{\mathbb N}$ and a polynomial degree $k \in {\mathbb N}$, the number of coefficients in \eqref{eq:bp} is equal to $p$, where \begin{equation}\label{eq:p} p= \binom{k+d-1}{d-1}. \end{equation} \end{prop} This can be seen as follows. 
We have $L_k=\{l \in L: \Sigma_{{\boldsymbol\alpha}_l}=0\} \cup \cdots \cup \{l \in L: \Sigma_{{\boldsymbol\alpha}_l}=k\}$ and therefore \eqref{eq:bp} becomes $$ B_f({\boldsymbol{w}};k) = \sum_{l \in L: \Sigma_{{\boldsymbol\alpha}_l}=0} \beta_l \, b_l ({\boldsymbol{w}};k)+ \cdots + \sum_{l \in L: \Sigma_{{\boldsymbol\alpha}_l}=k} \beta_l \, b_l ({\boldsymbol{w}};k), \quad {\boldsymbol{w}} \in \mathcal{S}_{{d-1}}. $$ For $j \in J$, the expression of the $j^{th}$ addend is equivalent to the multinomial formula. By the multinomial theorem (\citeNP{feller50}), the number of coefficients is $$ \binom{j+d-2}{d-2} $$ and hence the sum of these $k$ terms is equal to \eqref{eq:p}. Although $p$ depends on $k$ and $d-1$, we will not write such dependence explicitly. The polynomial \eqref{eq:bp} can be expressed in matrix form, $B_f({\boldsymbol{w}};k)={\boldsymbol{b}}_k({\boldsymbol{w}}){\boldsymbol\beta}_k$, for any $k=1,2,\ldots$, where ${\boldsymbol{b}}_k({\boldsymbol{w}})$ and ${\boldsymbol\beta}_k$ are the $p$-dimensional row and column vectors of polynomial bases and coefficients, respectively. \begin{rem} If $d=2$, then $\alpha_l=\left(j\right)$, $j \in J$ and $\Sigma_{\alpha_l}=j$. Therefore the $l^{th}$ Bernstein basis polynomial of degree $k$ simplifies to $$ b_l(w;k) = \binom{k}{j} w^j (1-w)^{k-j}, \qquad w\in [0,1] $$ and the polynomial representation of $f$ becomes simply $$ B_f(w;k) = \sum_{j=0}^k \beta_j b_j (w;k), \qquad w\in [0,1] $$ \end{rem} Additionally, if $f({\boldsymbol{w}})\in \mathcal{C}(\mathcal{S}_{{d-1}})$, then $B_f({\boldsymbol{w}},k)$ converges uniformly to $f$ as $k$ goes to infinity and $\|B_f({\boldsymbol{w}},k)-f({\boldsymbol{w}})\|_\infty=O(k^{-1})$ \cite{li11}. \fi \subsection{Shape-preserving estimator}\label{subsec:est} In this section, we describe how to use Bernstein--B\'{e}zier polynomials to obtain a projection estimator (\shortciteNP{fil+g+s08}) that satisfies (C1)--(C2). 
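As a computational check of the objects defined in Section~\ref{subsec:bern}, the index set $\Gamma_k$ and the basis polynomials \eqref{eq:bp} can be enumerated directly. The following Python sketch (with names of our own choosing) verifies the count \eqref{eq:p} and the fact that the basis sums to one on the simplex, by the multinomial theorem:

```python
from itertools import product
from math import comb, factorial, prod

def gamma_set(k, d):
    """The index set Gamma_k: multi-indices in {0,...,k}^(d-1) with sum <= k."""
    return [a for a in product(range(k + 1), repeat=d - 1) if sum(a) <= k]

def bernstein_basis(w, k):
    """Evaluate every basis polynomial b_alpha(w; k) at a point w of the unit
    simplex, given through its first d-1 coordinates; returns a dict keyed by
    the multi-index alpha."""
    d = len(w) + 1
    bary = list(w) + [1.0 - sum(w)]           # full barycentric coordinates
    out = {}
    for a in gamma_set(k, d):
        a_full = list(a) + [k - sum(a)]       # alpha_d = k - alpha_1 - ... - alpha_{d-1}
        coef = factorial(k) // prod(factorial(ai) for ai in a_full)
        val = float(coef)
        for wi, ai in zip(bary, a_full):
            val *= wi ** ai
        out[a] = val
    return out
```

The number of returned values equals $p_k$ in \eqref{eq:p}, and a candidate ${\boldsymbol{w}} \mapsto {\boldsymbol{b}}_k({\boldsymbol{w}}){\boldsymbol\beta}$ is obtained by weighting these basis values with a coefficient vector ${\boldsymbol\beta}$.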
Given a pilot estimator, say $\widehat{A}_n$, the idea is to seek approximate solutions to the constrained optimization problem
$$
\widetilde{A}_n=\operatornamewithlimits{\arg\min}_{A\in\mathcal{A}} \norm{ \widehat{A}_n-A }_2.
$$
There is no closed-form solution to this problem, and so we explore an approximation based on the method of sieves. Consider a sequence $\mathcal{A}_k\subseteq \mathcal{A}$ of constrained multivariate Bernstein--B\'{e}zier polynomial families on $\mathcal{S}_{{d-1}}$ given by
\begin{equation}
\label{eq:sievespace}
\mathcal{A}_k = \left\{ {\boldsymbol{w}} \mapsto B({\boldsymbol{w}};k) = {\boldsymbol{b}}_k({\boldsymbol{w}}){\boldsymbol\beta}_k: {\boldsymbol\beta}_k \in [0,1]^{p_k} \text{ such that } {\boldsymbol{R}}_k{\boldsymbol\beta}_k \geq {\boldsymbol{r}}_k \right\}.
\end{equation}
Here, ${\boldsymbol{R}}_k=[{\boldsymbol{R}}^{(1)}_k,{\boldsymbol{R}}^{(2)}_k,{\boldsymbol{R}}^{(3)}_k]^\top$ and ${\boldsymbol{r}}_k=[{\boldsymbol{r}}^{(1)}_k,{\boldsymbol{r}}^{(2)}_k,{\boldsymbol{r}}^{(3)}_k]^\top$ are a $(q\times p_k)$ matrix of full row rank and a $(q\times 1)$ vector, respectively, such that the constraint ${\boldsymbol{R}}_k{\boldsymbol\beta}_k \geq {\boldsymbol{r}}_k$ on the coefficient vector ${\boldsymbol\beta}_k$ ensures that each member of $\mathcal{A}_k$ satisfies (C1)--(C2). Details for deriving the block matrices and vectors of constraints are provided below.
\begin{itemize}
\item[\text{R1})] A sufficient condition to guarantee that the function ${\boldsymbol{w}} \mapsto B({\boldsymbol{w}};k)$ on $\mathcal{S}_{{d-1}}$ is convex is that its Hessian matrix be positive semi-definite. In order to enforce the latter, we apply Theorem~1 in \citeN{lai93}.
First, for $s\neq r\in\{0,\ldots,d-1\}$ and two vectors ${\boldsymbol{v}}_r$ and ${\boldsymbol{v}}_s$, where ${\boldsymbol{v}}_r={\bf 0}$ if $r=0$ and ${\boldsymbol{v}}_r={\boldsymbol{e}}_r$ if $r>0$ with ${\boldsymbol{e}}_r$ the canonical unit vector (same for ${\boldsymbol{v}}_s$), the directional derivative of $B$ with respect to the direction $\overrightarrow{{\boldsymbol{v}}_r {\boldsymbol{v}}_s}$ is $$ D_{{\boldsymbol{v}}_s-{\boldsymbol{v}}_r} B({\boldsymbol{w}};k)=k \sum_{{\boldsymbol\alpha}\in\Gamma_{k-1}}\Delta_{s,r}\beta_{{\boldsymbol\alpha}}b_{{\boldsymbol\alpha}} ({\boldsymbol{w}};k-1), \quad {\boldsymbol{w}} \in \mathcal{S}_{{d-1}} $$ where $ \Delta_{s,r}\beta_{{\boldsymbol\alpha}}=(\beta_{{\boldsymbol\alpha}+{\boldsymbol{v}}_s}-\beta_{{\boldsymbol\alpha}+{\boldsymbol{v}}_r}). $ Second, the second directional derivative of $B$ with respect to the directions $\overrightarrow{{\boldsymbol{v}}_r {\boldsymbol{v}}_s}$ and $\overrightarrow{{\boldsymbol{v}}_r {\boldsymbol{v}}_t}$ is $$ D'_{{\boldsymbol{v}}_s-{\boldsymbol{v}}_r,{\boldsymbol{v}}_t-{\boldsymbol{v}}_r}B({\boldsymbol{w}};k)=k(k-1) \sum_{{\boldsymbol\alpha}\in\Gamma_{k-2}} \Delta_{t,r}\Delta_{s,r}\beta_{{\boldsymbol\alpha}}\,b_{{\boldsymbol\alpha}} ({\boldsymbol{w}};k-2),\quad {\boldsymbol{w}}\in\mathcal{S}_{{d-1}}. 
$$
Then, the Hessian matrix of $B({\boldsymbol{w}};k)$, ${\boldsymbol{w}}\in\mathcal{S}_{{d-1}}$, is
$
H_{B}=[ D'_{{\boldsymbol{v}}_s,{\boldsymbol{v}}_t} B({\boldsymbol{w}};k)]_{s,t \in \{1,\ldots, d-1\}}
$
(corresponding to the choice $r=0$, for which ${\boldsymbol{v}}_r={\bf 0}$), and it can be written as
$$
H_{B}=k(k-1)\sum_{{\boldsymbol\alpha}\in\Gamma_{k-2}}\Sigma_{\boldsymbol\alpha}\,b_{{\boldsymbol\alpha}} ({\boldsymbol{w}};k-2),\quad {\boldsymbol{w}}\in\mathcal{S}_{{d-1}},
$$
where, for all ${\boldsymbol\alpha}\in\Gamma_{k-2}$, $\Sigma_{\boldsymbol\alpha}$ is a symmetric $(d-1) \times (d-1)$ matrix given by
$$
\Sigma_{\boldsymbol\alpha}=
\begin{pmatrix}
\Delta^2_{1,0}\beta_{{\boldsymbol\alpha}}& \Delta_{1,0}\Delta_{2,0}\beta_{{\boldsymbol\alpha}} & \cdots& \cdots& \Delta_{1,0}\Delta_{d-1,0}\beta_{{\boldsymbol\alpha}}\\
&\Delta^2_{2,0}\beta_{{\boldsymbol\alpha}} & \Delta_{2,0}\Delta_{3,0}\beta_{{\boldsymbol\alpha}} & \cdots& \Delta_{2,0}\Delta_{d-1,0}\beta_{{\boldsymbol\alpha}}\\
&&\vdots&\vdots&\vdots\\
&&&&\Delta^2_{d-1,0}\beta_{{\boldsymbol\alpha}}
\end{pmatrix}.
$$
By the weak diagonal dominance criterion \cite{lai93}, in order to guarantee that $\Sigma_{\boldsymbol\alpha}$ is positive semi-definite it is sufficient to check, for all ${\boldsymbol\alpha}\in\Gamma_{k-2}$ and $i\in \{1,\ldots,d-1\}$, the conditions
$$
\Delta_{i,0}^2 \beta_{{\boldsymbol\alpha}} - \sum_{\substack{j\neq i}} |\Delta_{i,0} \Delta_{j,0} \beta_{{\boldsymbol\alpha}}|\geq 0.
$$
Such conditions produce constraints that are more severe than necessary. The above conditions can be synthesized in matrix form as ${\boldsymbol{R}}^{(1)}_k {\boldsymbol\beta}_k \geq {\boldsymbol{r}}^{(1)}_k$, where ${\boldsymbol{R}}^{(1)}_k$ is a $(p_{k-2}(d-1)2^{d-2} \times p_k)$ matrix and ${\boldsymbol{r}}^{(1)}_k$ is the corresponding null vector.
For example, with $d=3$ and $k=3$,
$$
{\boldsymbol{R}}^{(1)}_3=
{\scriptsize
\left(
\begin{array}{rrrrrrrrrr}
0 & 1 & 0 & 0 & -1 & -1 & 0 & 1 & 0 & 0 \\
2 & -1 & 0 & 0 & -3 & 1 & 0 & 1 & 0 & 0 \\
0 & -1 & 1 & 0 & 1 & -1 & 0 & 0 & 0 & 0 \\
2 & -3 & 1 & 0 & -1 & 1 & 0 & 0 & 0 & 0 \\
0 & 0 & 1 & 0 & 0 & -1 & -1 & 0 & 1 & 0 \\
0 & 2 & -1 & 0 & 0 & -3 & 1 & 0 & 1 & 0 \\
0 & 0 & -1 & 1 & 0 & 1 & -1 & 0 & 0 & 0 \\
0 & 2 & -3 & 1 & 0 & -1 & 1 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & 1 & 0 & -1 & -1 & 1 \\
0 & 0 & 0 & 0 & 2 & -1 & 0 & -3 & 1 & 1 \\
0 & 0 & 0 & 0 & 0 & -1 & 1 & 1 & -1 & 0 \\
0 & 0 & 0 & 0 & 2 & -3 & 1 & -1 & 1 & 0
\end{array}
\right),
}\quad
{\boldsymbol{r}}^{(1)}_3=
{\scriptsize
\begin{pmatrix}
0\\ 0\\ 0\\ 0\\ 0\\ 0\\ 0\\ 0\\ 0\\ 0\\ 0\\ 0
\end{pmatrix}.
}$$
\item[\text{R2})] $B$ satisfies the upper bound condition in (C2) if $\beta_{{\boldsymbol\alpha}}=1$ for every coefficient in the set $\left\{\beta_{{\boldsymbol\alpha}}: {{\boldsymbol\alpha}} = {\bf 0} \text{ or } {{\boldsymbol\alpha}} = k \, {\boldsymbol{e}}_i, \, i=1,\dots, d-1 \right\}$. Thus, the $(2d\times p_k)$ matrix and the $2d$-dimensional vector of restrictions are equal to
$$
{\boldsymbol{R}}^{(2)}_k=
{\scriptsize
\left(
\begin{array}{rrrrrrrr}
1& 0&\cdots&0&\cdots&0&\cdots&0\\
-1& 0&\cdots&0&\cdots&0&\cdots&0\\
0& 0&\cdots&1&\cdots&0&\cdots&0\\
0& 0&\cdots&-1&\cdots&0&\cdots&0\\
\vdots&\vdots&\vdots&\vdots&\vdots&\vdots&\vdots&\vdots\\
0& 0&\cdots&0&\cdots&1&\cdots&0\\
0& 0&\cdots&0&\cdots&-1&\cdots&0\\
\end{array}
\right),
}
\quad
{\boldsymbol{r}}^{(2)}_k=
{\scriptsize
\left(
\begin{array}{r}
1\\ -1\\ 1\\ -1\\ \vdots\\ 1\\ -1
\end{array}
\right).
}
$$
\item[\text{R3})] $B$ satisfies the lower bound condition in (C2) if the restrictions R1)--R2) hold and the following constraints are fulfilled.
Specifically, for all $(i,j)\in \{0,\ldots,d-1\}^2$, $i\neq j$, the first directional derivatives with respect to $\overrightarrow{{\boldsymbol{v}}_i {\boldsymbol{v}}_j}$, evaluated at the vertices of the simplex, are compared with the first directional derivatives of the planes $z_0=1$, $z_1=w_1$, $z_2=w_2$, $\ldots$, $z_{d}=1-w_1-w_2-\cdots-w_{d-1}$, with respect to the same directions. So, it is sufficient to check the conditions
$$
D_{{\boldsymbol{v}}_i-{\boldsymbol{v}}_j} B({\boldsymbol{v}}_j;k) \geq-1, \quad \forall\;(i,j) \in \{0,\ldots,d-1\}^2,\,i\neq j.
$$
Equivalently, it is sufficient to require $ \beta_{{\boldsymbol\alpha}} \geq 1 - 1/k $ for every coefficient in the set $ \{\beta_{{\boldsymbol\alpha}}: {\boldsymbol\alpha} = {\boldsymbol{e}}_i \text{ or } {\boldsymbol\alpha} = (k-1) {\boldsymbol{e}}_i \text{ or } {\boldsymbol\alpha} = (k-1) {\boldsymbol{e}}_i + {\boldsymbol{e}}_j, \; j\neq i, \; i=1,\dots, d-1 \}. $ This can be synthesized in matrix form as ${\boldsymbol{R}}^{(3)}_k {\boldsymbol\beta}_k \geq {\boldsymbol{r}}^{(3)}_k$, where ${\boldsymbol{R}}^{(3)}_k$ is a $(d(d-1)\times p_k)$ matrix and ${\boldsymbol{r}}^{(3)}_k$ is the corresponding vector with all entries equal to $1-1/k$. For example, when $d=3$ and $k=3$, the constraint matrix is the following:
$$
{\boldsymbol{R}}^{(3)}_3=
{\scriptsize
\left(
\begin{array}{rrrrrrrrrr}
0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 \\
\end{array}
\right),
}\quad
{\boldsymbol{r}}^{(3)}_3=
{\scriptsize
\begin{pmatrix}
1-1/k\\ 1-1/k\\ 1-1/k\\ 1-1/k\\ 1-1/k\\ 1-1/k\\
\end{pmatrix}.
}$$
\end{itemize}
The use of the third restriction is justified by the following result.
\begin{prop}\label{prop:lower}
Let $B_A$ be the polynomial \eqref{eq:polyrap}.
Assume that $B_A$ is convex on the simplex and $B_A({\boldsymbol{v}}_j;k)=1$ for all $j\in\{0,\ldots,d-1\}$. Then, for all ${\boldsymbol{w}}\in\mathcal{S}_{{d-1}}$ $$ B_A({\boldsymbol{w}};k)\geq\max(w_1,\ldots,w_d) \quad \iff \quad D_{{\boldsymbol{v}}_i-{\boldsymbol{v}}_j}B_A({\boldsymbol{v}}_j;k)\geq -1, $$ for all $(i,j)\in\{0,\ldots,d-1\}^2$, $i\neq j$. \end{prop} Recall that the approximate projection estimator of $A$ based on a pilot estimator $\widehat{A}_n$ is given by the solution to the optimization problem \begin{equation} \label{eq:approx} \widetilde{A}_{n,k}=\operatornamewithlimits{\arg\min}_{B \in\mathcal{A}_k} \norm{ \widehat{A}_n-B}_2. \end{equation} In case the pilot estimator is the madogram estimator $\widehat{A}_n^{\mathrm{MD}}$, the corresponding projection estimator is denoted by $\widetilde{A}_{n,k}^{\mathrm{MD}}$. In practice, the estimator $\widetilde{A}_{n,k}$ is evaluated on a finite set of points $\{{\boldsymbol{w}}_q : q=1,\ldots,Q\}$, with $Q\in {\mathbb N}$ and ${\boldsymbol{w}}_q\in\mathcal{S}_{{d-1}}$. The discretized version of the above solution is given by \begin{equation}\label{eq:BP-MD} \widetilde{A}_{n,k}({\boldsymbol{w}}_q)={\boldsymbol{b}}_k({\boldsymbol{w}}_q)\widehat{{\boldsymbol\beta}}_k,\quad {\boldsymbol{w}}_q\in\mathcal{S}_{{d-1}},\quad q=1,\ldots,Q, \end{equation} where $\widehat{{\boldsymbol\beta}}_k$ is the minimizer of the constrained least-squares problem $$ \widehat{{\boldsymbol\beta}}_k=\operatornamewithlimits{\arg\min}_{{\boldsymbol\beta}_k \in [0,1]^{p_k} : {\boldsymbol{R}}_k{\boldsymbol\beta}_k \geq {\boldsymbol{r}}_k} \frac{1}{Q}\sum_{q=1}^Q \bigl( {\boldsymbol{b}}_k({\boldsymbol{w}}_q){\boldsymbol\beta}_k-\widehat{A}_n({\boldsymbol{w}}_q) \bigr)^2. 
$$
This is a quadratic programming problem, whose solution is
\begin{equation}\label{eq:beta}
\widehat{{\boldsymbol\beta}}_k = {\boldsymbol\beta}'_k - ({\boldsymbol{b}}_k^\top {\boldsymbol{b}}_k)^{-1} {\boldsymbol{R}}_k^\top {\thick\gamma},
\end{equation}
where ${\thick\gamma}$ is a vector of Lagrange multipliers and ${\boldsymbol\beta}'_k = ({\boldsymbol{b}}^\top_k {\boldsymbol{b}}_k)^{-1} {\boldsymbol{b}}_k^\top \widehat{A}_n$ is the unconstrained least-squares estimator; with a slight abuse of notation, ${\boldsymbol{b}}_k$ here denotes the $(Q \times p_k)$ design matrix with rows ${\boldsymbol{b}}_k({\boldsymbol{w}}_q)$ and $\widehat{A}_n$ the vector of pilot values $\widehat{A}_n({\boldsymbol{w}}_q)$, $q=1,\ldots,Q$. The vectors $\widehat{{\boldsymbol\beta}}_k$ and ${\thick\gamma}$ can be efficiently computed with an iterative quadratic programming algorithm (e.g. \citeNP{goldfarb+i83}). A higher resolution in \eqref{eq:BP-MD} is obtained by increasing $Q$. Numerical experiments showed that a close approximation of the true Pickands dependence function is already reached with moderate values of $Q$. However, $Q$ should not be seen as an additional parameter of the projection estimator. The solution \eqref{eq:BP-MD} provides better approximations of the true Pickands dependence function for increasing sample sizes $n$ and polynomial degrees $k$. In order to state the asymptotic distribution of the projection estimator, the following result is required.
\begin{prop}\label{prop:nested}
The families $\mathcal{A}_k$, $k=1,2,\ldots$, form a nested sequence of subsets of $\mathcal{A}$. Furthermore, if $A\in\mathcal{A}$ satisfies the condition
\begin{equation}\label{eq:wddA}
\Delta_{i,0}^2A({\boldsymbol\alpha}/k)-\sum_{j\neq i} |\Delta_{i,0}\Delta_{j,0}A({\boldsymbol\alpha}/k)|\geq 0, \quad \forall\,k, {\boldsymbol\alpha} \in \Gamma_{k-2}, i\in\{1,\ldots,d-1\},
\end{equation}
then there exist polynomials $A_k \in \mathcal{A}_k$ such that $\lim_{k \to \infty} \sup_{{\boldsymbol{w}} \in \mathcal{S}_{{d-1}}} \abs{ A_k( {\boldsymbol{w}} ) - A( {\boldsymbol{w}} ) } =~0$.
\end{prop}
The asymptotic distribution of the Bernstein projection estimator based on our multivariate madogram estimator $\widehat{A}_n^{\mathrm{MD}}$ is established in the following proposition.
\begin{prop}\label{prop:prop_bernproj}
Assume that the polynomial degree $k_n$ increases with the sample size $n$ in such a way that $k_n/n\rightarrow \infty$ as $n\rightarrow \infty$. If the Pickands dependence function $A$ satisfies the condition \eqref{eq:wddA}, then, for some Gaussian process $Z$,
$$
\sqrt{n}(\widetilde{A}_{n,k_n}^{\mathrm{MD}}-A) \leadsto \operatornamewithlimits{\arg\min}_{Z'\in T_\mathcal{A}(A)}\|Z'-Z\|_2,\quad n\rightarrow \infty,
$$
in $L^2(\mathcal{S}_{{d-1}})$, where $T_\mathcal{A}(A)$ is the tangent cone of $\mathcal{A}$ at $A$, that is, the set of limits of all sequences $a_n(A_n-A)$ with $a_n\geq0$ and $A_n\in \mathcal{A}$.
\end{prop}
It remains an open problem to establish the asymptotic behaviour of the projection estimator without condition~\eqref{eq:wddA}. Moreover, if the Pickands dependence function is sufficiently smooth, we conjecture that the approximation rate in Proposition~\ref{prop:conv_bapp} can be improved, leading to a slower growth rate being required for the degree of the Bernstein polynomial in Proposition~\ref{prop:prop_bernproj}. The simulation results in Section~\ref{sec:num} confirm that polynomial degrees $k$ much lower than $n$ are already sufficient to achieve good results. Finally, note that Proposition~\ref{prop:prop_bernproj} and, in fact, everything else in this section applies to any estimator of the Pickands dependence function which satisfies a suitable functional central limit theorem. We are grateful to an anonymous Referee for having pointed this out.
\subsection{Confidence bands}\label{subsec:comp}
We construct confidence bands using a resampling method.
For ${\boldsymbol{w}}\in\mathcal{S}_{{d-1}}$ and $0<\tilde{\alpha}<1$, the bootstrap $(1-\tilde{\alpha})$ pointwise confidence band, based on the estimates $\widetilde{A}^{*(r)}_{n,k}({\boldsymbol{w}})$, $r=1,2,\ldots$, obtained from the bootstrapped sample ${\boldsymbol{X}}^{(r)}_n=({\boldsymbol{X}}^{(r)}_1,\ldots,{\boldsymbol{X}}^{(r)}_n)$, has the drawback that the lower and upper limits of the band are rarely convex and continuous. To bypass this hurdle, we follow the strategy of working with the estimated Bernstein polynomials' coefficients themselves. Specifically, letting $\widehat{{\boldsymbol\beta}}^{*(r)}_k$ be the Bernstein polynomials' coefficient estimator based on the bootstrap sample ${\boldsymbol{X}}_n^{(r)}$, $r=1,2,\ldots$, we define a bootstrap simultaneous $(1-\tilde{\alpha})$ confidence band specifying the lower $\widetilde{A}^{L}_{n,k}({\boldsymbol{w}})$ and upper $\widetilde{A}^{U}_{n,k}({\boldsymbol{w}})$ limits as \begin{equation} \label{eq:confidentIntervals} \left[\sum_{{\boldsymbol\alpha} \in \Gamma_k} \widehat{\beta}^{*\lceil r(\tilde{\alpha}/2) \rceil}_{{\boldsymbol\alpha}} b_{{\boldsymbol\alpha}} ({\boldsymbol{w}};k); \; \sum_{{\boldsymbol\alpha} \in \Gamma_k} \widehat{\beta}^{*\lceil r(1-\tilde{\alpha}/2)\rceil}_{{\boldsymbol\alpha}} b_{{\boldsymbol\alpha}} ({\boldsymbol{w}};k) \right], \quad {\boldsymbol{w}}\in\mathcal{S}_{{d-1}}, \end{equation} where $\widehat{\beta}^{*\lceil r (\tilde{\alpha}/2)\rceil}_{{\boldsymbol\alpha}}$ and $\widehat{\beta}^{*\lceil r(1-\tilde{\alpha}/2)\rceil}_{{\boldsymbol\alpha}}$, for all ${\boldsymbol\alpha}\in\Gamma_k$, correspond to the $\lceil r(\tilde{\alpha}/2)\rceil$th and $\lceil r(1-\tilde{\alpha}/2)\rceil$th order statistics, respectively, and $b_{{\boldsymbol\alpha}} ({\boldsymbol{w}};k)$ is the Bernstein basis polynomial of degree $k$, see \eqref{eq:bp}. 
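For the bivariate case ($d=2$), where $\Gamma_k$ reduces to $\{0,1,\ldots,k\}$, the coefficient-wise construction in \eqref{eq:confidentIntervals} can be sketched as follows. This is a minimal illustration only: the number of resamples, the point estimate and the coefficient draws are hypothetical stand-ins, not output of the actual bootstrap described above.

```python
import numpy as np
from math import comb

def bernstein_basis(w, k):
    """Bernstein basis polynomials b_j(w; k), j = 0, ..., k, evaluated at w in [0, 1]."""
    j = np.arange(k + 1)
    binom = np.array([comb(k, int(jj)) for jj in j], dtype=float)
    return binom * w**j * (1.0 - w)**(k - j)

def coefficient_band(beta_star, w_grid, alpha_tilde=0.05):
    """Lower/upper band limits from componentwise order statistics of the
    bootstrapped Bernstein coefficients (cf. the display above)."""
    r, kp1 = beta_star.shape
    lo = int(np.ceil(r * alpha_tilde / 2)) - 1         # 0-based order-statistic index
    hi = int(np.ceil(r * (1 - alpha_tilde / 2))) - 1
    ordered = np.sort(beta_star, axis=0)               # order statistics, coefficient by coefficient
    basis = np.array([bernstein_basis(w, kp1 - 1) for w in w_grid])
    return basis @ ordered[lo], basis @ ordered[hi]

# Illustration: k = 4 and r = 500 hypothetical bootstrap coefficient vectors.
rng = np.random.default_rng(0)
beta_hat = np.array([1.0, 0.85, 0.75, 0.85, 1.0])      # hypothetical point estimate
beta_star = beta_hat + 0.02 * rng.standard_normal((500, 5))
w_grid = np.linspace(0.0, 1.0, 101)
lower, upper = coefficient_band(beta_star, w_grid)
```

By construction, the two limits are themselves Bernstein polynomials, hence continuous; as noted above, convexity of the limits is not guaranteed.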
Although this approach does not guarantee convex confidence bands, it works very well in our simulations, where we find that convexity is violated only when dependence is weak. Another possibility, pointed out by an anonymous Referee, is to bootstrap bands for the unconstrained estimator and then apply the projection to the lower and upper limits. Our simulation results indicate that our method performs slightly better than this valuable alternative. \section{Simulations}\label{sec:num} To visually illustrate the gain in implementing our Bernstein-B\'ezier projection approach, Figure \ref{fig:comparison} compares the madogram (MD) estimator $\widehat{A}^{\mathrm{MD}}_{n}$ defined by (\ref{eq:mado}) with its Bernstein-B\'ezier-projection (BP) version defined by (\ref{eq:BP-MD}) for the special case of the symmetric logistic model (SL, \citeNP{tawn90}) with $d=3$ and $\alpha' = 0.3$. For all sample sizes ($n=20,50,100$), an improvement can be observed by comparing the estimated contour lines (dotted) in the top and bottom panels. This is particularly true for a small sample size like $n=20$, the corrected version providing smoother and more realistic contour lines. \begin{figure} \caption{ Estimates (dashed lines) of the Pickands dependence function obtained with the MD estimator (top row) and its BP version (bottom row) with polynomial degree $k=14$. The solid line is the true Pickands dependence function. Each column represents a different sample size. } \label{fig:comparison} \end{figure} To guarantee a good approximation of $A$ with $\widetilde{A}_{n,k}$, Proposition \ref{prop:prop_bernproj} suggests setting a large polynomial degree $k$ for large sample sizes, see also \shortciteN{fil+g+s08}, \citeN{gudend+s11}, \citeN{gudend+s12}. Computational cost, however, restricts the choice of $k$. Figure \ref{fig:comparisonK} explores this issue for the logistic model with $\alpha'=0.3$ and $n=100$. 
As expected from the theory, the choice of $k$ is not inconsequential. A shift in the contour lines appears for the small value $k=5$, see the left panel of Figure \ref{fig:comparisonK}. This undesirable feature disappears for a moderate value of $k$, see the right panel with $k=14$. \begin{figure} \caption{Same as Figure \ref{fig:comparison} but with $n=100$ and three different values of the polynomial degree. } \label{fig:comparisonK} \end{figure} To go beyond these visual checks, we also compute a Monte-Carlo approximation of the mean integrated squared error \begin{equation*} \label{eq:mise} \mbox{MISE}(\widehat{A}_n,A) = {\mathbb E}\left\{ \int_{\mathcal{S}_{{d-1}}} \left(\widehat{A}_n({\boldsymbol{w}})-A({\boldsymbol{w}})\right)^2 \mathrm{d}{\boldsymbol{w}}\right\}, \end{equation*} for a variety of setups. The approximate MISE is obtained by repeating a given inference method $1000$ times for three different sample sizes $n=50,100,200$. Different dependence strengths of the logistic model have been explored by setting the parameter $\alpha'$ between $0.3$ (strong dependence) and $1$ (independence). Table \ref{tab:MISE} compares four non-parametric estimators introduced in Section \ref{sec:nonparest}: the madogram estimator (MD), the \citeN{pickands81} estimator (P), the multivariate version of the \citeN{hall+t00} estimator (HT), and finally the multivariate extension of the \citeN{cap+f+g97} estimator (CFG). For comparison purposes we have also considered the weighted and endpoint-corrected versions of the P and CFG estimators as discussed in \citeN{gudend+s12}, denoted by Pw and CFGw respectively. \begin{table}[t!] 
\begin{center} {\footnotesize \begin{tabular}{ccccccc} \toprule Sample size $n$& Estimator & \multicolumn{5}{c}{Parameter $\alpha'$}\\ & & $0.3$ & $0.5$ & $0.7$ & $0.9$ &$1$\\ \midrule $50$ & P & $ 4.25\times 10^{-4} $ & $ 8.06\times 10^{-4} $ & $ 1.47\times 10^{-3} $ & $ 2.45\times 10^{-3} $ & $ 2.50\times 10^{-3} $\\ & Pw & $ 1.45\times 10^{-4} $ & $ 5.13\times 10^{-4} $ & $ 1.26\times 10^{-3} $ & $ 2.53\times 10^{-3} $ & $ 2.81\times 10^{-3} $\\ & CFG & $ 2.36\times 10^{-4} $ & $ 6.92\times 10^{-4} $ & $ 1.87\times 10^{-3} $ & $ 4.07\times 10^{-3} $ & $ 5.02\times 10^{-3} $\\ & CFGw & $ 9.17\times 10^{-5} $ & $ 4.45\times 10^{-4} $ & $ 1.24\times 10^{-3} $ & $ 2.66\times 10^{-3} $ & $ 3.07\times 10^{-3} $\\ & HT & $ 2.64\times 10^{-4} $ & $ 8.54\times 10^{-4} $ & $ 2.59\times 10^{-3} $ & $ 5.13\times 10^{-3} $ & $ 5.65\times 10^{-3} $\\ & MD & $ 1.80\times 10^{-4} $ & $ 8.66\times 10^{-4} $ & $ 1.91\times 10^{-3} $ & $ 3.02\times 10^{-3} $ & $ 2.87\times 10^{-3} $\\ \midrule $100$ & P & $ 1.53\times 10^{-4} $ & $ 3.16\times 10^{-4} $ & $ 6.98\times 10^{-4} $ & $ 1.20\times 10^{-3} $ & $ 1.39\times 10^{-3} $\\ & Pw & $ 6.36\times 10^{-5} $ & $ 2.38\times 10^{-4} $ & $ 6.51\times 10^{-4} $ & $ 1.25\times 10^{-3} $ & $ 1.51\times 10^{-3} $\\ & CFG & $9.54\times 10^{-5} $ & $3.27\times 10^{-4} $ & $8.66\times 10^{-4} $ & $1.78\times 10^{-3} $ & $2.15\times 10^{-3} $\\ & CFGw & $4.32\times 10^{-5} $ & $2.21\times 10^{-4} $ & $6.35\times 10^{-4} $ & $1.24\times 10^{-3} $ & $1.39\times 10^{-3} $\\ & HT & $ 2.61\times 10^{-4} $ & $ 7.66\times 10^{-4} $ & $ 2.16\times 10^{-3} $ & $ 4.24\times 10^{-3} $ & $ 5.27\times 10^{-3} $\\ & MD & $ 7.02\times 10^{-5} $ & $ 3.18\times 10^{-4} $ & $ 7.91\times 10^{-4} $ & $ 1.19\times 10^{-3} $ & $ 1.09\times 10^{-3} $\\ \midrule $200$ & P & $ 5.87\times 10^{-5} $ & $1.54\times 10^{-4} $ & $ 3.40\times 10^{-4} $ & $ 6.25\times 10^{-4} $ & $ 7.24\times 10^{-4} $\\ & Pw & $ 3.01\times 10^{-5} $ & $1.31\times 10^{-4} $ & $ 3.28\times 
10^{-4} $ & $ 6.60\times 10^{-4} $ & $ 7.59\times 10^{-4} $\\ & CFG & $ 3.87\times 10^{-5} $ & $ 1.58\times 10^{-4} $ & $ 4.00\times 10^{-4} $ & $ 8.31\times 10^{-4} $ & $ 8.52\times 10^{-4} $\\ & CFGw & $ 2.12\times 10^{-5} $ & $ 1.23\times 10^{-4} $ & $ 3.24\times 10^{-4} $ & $ 6.36\times 10^{-4} $ & $ 5.90\times 10^{-4} $\\ & HT & $ 2.55\times 10^{-4} $ & $ 7.31\times 10^{-4} $ & $ 2.05\times 10^{-3} $ & $ 3.82\times 10^{-3} $ & $ 5.85\times 10^{-3} $\\ & MD & $ 3.17\times 10^{-5} $ & $ 1.58\times 10^{-4} $ & $ 3.70\times 10^{-4} $ & $ 5.81\times 10^{-4} $ & $ 4.91\times 10^{-4} $\\ \bottomrule \end{tabular}} \caption{MISE of four estimators of the Pickands dependence function, and their weighted versions, based on a trivariate symmetric logistic dependence model for different parameter values and sample sizes.} \label{tab:MISE} \end{center} \end{table} We can see that the MD estimator provides the best results compared with the other classical non-parametric estimators. Taking the weighted versions into account as well, it turns out that the CFGw estimator performs best, especially for small sample sizes ($n=50$). With medium dependence ($\alpha'=0.5, 0.7$) the estimators provide similar results. With weak dependence or in the independence case ($\alpha'=0.9,1$), the MD estimator still provides the best results, especially for small and moderate sample sizes ($n=50,100$). \begin{table}[t!] 
\begin{center} {\footnotesize \begin{tabular}{ccc} \toprule & \multicolumn{2}{c}{Projection method}\\ & Bernstein-B\'ezier & Discrete spectral measure\\ \midrule & \multicolumn{2}{c}{\% Improvement}\\ \begin{tabular}{ccc} &\\ $n$ & $\alpha'$ & $k$\\ \midrule 50 & 0.3 & 23\\ & 0.5 & 20\\ & 0.7 & 16\\ & 0.9 & 6\\ & 1 & 3\\ \midrule 100 & 0.3 & 23\\ & 0.5 & 20\\ & 0.7 & 16\\ & 0.9 & 6\\ & 1 & 3\\ \midrule 200 & 0.3 & 23\\ & 0.5 & 20\\ & 0.7 & 16\\ & 0.9 & 6\\ & 1 & 3\\ \end{tabular} & \begin{tabular}{cccccc} \multicolumn{4}{c}{Estimator}\\ P & Pw& CFG & CFGw & HT & MD\\ \midrule 18.11 & 13.34 & 76.84& 18.97 & 51.53 & 8.50\\ 8.19 & 5.44 & 13.98 & 1.46 & 12.52 & 2.22\\ 15.60 & 11.01 & 4.43& 2.10 & 9.05 & 6.48\\ 44.70 & 25.92 & 3.98 & 6.51 & 16.93 & 48.72\\ 69.95 & 34.53 & 4.92 & 9.04 & 34.68 & 93.60\\ \midrule 16.59 & 13.36 & 59.75 & 13.43 & 45.45 & 7.41\\ 5.85 & 3.83 & 7.59 & 0.63 & 9.78 & 1.23\\ 9.89 & 8.15 & 2.21 & 0.95 & 6.42 & 2.74\\ 34.95 & 23.98 & 3.48 & 6.50 & 8.33 & 26.72\\ 68.00 & 39.35 & 5.93 & 11.50 & 19.22 & 87.46\\ \midrule 15.16 & 10.63 & 37.73 & 5.66 & 44.72 & 5.05\\ 3.06 & 2.51 & 3.80 & 0.41 & 9.06 & 0.13\\ 5.70 & 5.22 & 0.90 & 0 & 5.60 & 0.76\\ 25.22 & 20.48 & 3.43 & 6.07 & 5.53 & 13.39\\ 69.17 & 46.32 & 8.63 & 16.06 & 10.88 & 81.99\\ \end{tabular} & \begin{tabular}{cc} \multicolumn{2}{c}{Estimator}\\ Pw & CFGw\\ \midrule 2.14 & 0.82\\ 5.51 & 1.17\\ 11.03 & 3.17\\ 22.39 & 4.37\\ 29.07 & 4.89\\ \midrule 1.27 & 0.40\\ 3.52 & 0.84\\ 7.51 & 1.37\\ 23.13 & 4.07\\ 36.10 & 8.11\\ \midrule 0.60 & 0\\ 2.10 & 0\\ 4.85 & 0.88\\ 18.52 & 3.28\\ 40.99 & 11.89\\ \end{tabular} \\ \bottomrule \end{tabular}} \caption{Percentage improvement of the MISE gained with the projection method.} \label{tab:improve} \end{center} \end{table} Table \ref{tab:improve} shows how an initial estimate of the Pickands dependence function improves using the projection method. 
The improvement is computed by $$ \frac{\mathrm{MISE}_N - \mathrm{MISE}_P}{\mathrm{MISE}_N} \times 100, $$ and is reported in columns 3--6, where $\mathrm{MISE}_N$ and $\mathrm{MISE}_P$ are the MISE obtained with a non-parametric estimator and its projection respectively. As before, the MISE is a Monte-Carlo approximation of $\mbox{MISE}(\widehat{A}_n,A)$ obtained with 1000 random samples. The true dependence structure is still the symmetric logistic model. $\alpha'$ denotes the model parameter, and $n$ and $k$ are the sample size and the polynomial degree respectively. Estimates obtained with the initial non-parametric estimator are regularized using the BP method. The polynomial degree used is an ``optimal'' value of $k$, that is, the value of $k$ chosen in such a way that the MISE does not decrease significantly for larger values of $k$. It turns out that with weak dependence a small value of $k$ is enough, whereas with strong dependence a large value of $k$ is needed. This makes sense if we view a dependence structure as an added complexity, especially with respect to the independence case, the simplest possible model. In such a framework, the polynomial degree has to be higher to capture this extra information. The improvements obtained with the classical estimators, sorted from largest to smallest, are: MD, CFG, P and HT. As expected, with Pw and CFGw the improvements are the smallest. For each estimator, the improvements, sorted from largest to smallest, are obtained with: independence ($\alpha'=1$), strong dependence ($\alpha'=0.3$), weak dependence ($\alpha'=0.9$) and medium dependence ($\alpha'=0.5,0.7$). These results are compared with those provided in \citeN{gudend+s12}, obtained with the discrete spectral measure projection method proposed by the same authors (see columns 7--8). We can conclude that overall the BP method provides a better percentage improvement. 
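The Monte-Carlo approximation of $\mbox{MISE}(\widehat{A}_n,A)$ used throughout this section can be sketched as follows for the bivariate symmetric logistic model. The noisy stand-in for $\widehat{A}_n$ is purely illustrative, chosen so that the approximation can be checked against a known value.

```python
import numpy as np

def A_logistic(w, alpha):
    """Bivariate symmetric logistic Pickands dependence function."""
    return (w ** (1.0 / alpha) + (1.0 - w) ** (1.0 / alpha)) ** alpha

def approx_mise(estimates, truth):
    """Monte-Carlo MISE on a uniform grid over [0, 1]: the grid average of the
    squared error approximates the integral; the outer mean runs over replicates."""
    return float(np.mean((estimates - truth) ** 2))

rng = np.random.default_rng(1)
w_grid = np.linspace(0.0, 1.0, 201)
truth = A_logistic(w_grid, alpha=0.3)
sigma = 0.01                                   # noise level of the stand-in estimator
estimates = truth + sigma * rng.standard_normal((1000, w_grid.size))
mise = approx_mise(estimates, truth)           # close to sigma**2 = 1e-4
```

With independent noise of variance $\sigma^2$, the approximate MISE concentrates around $\sigma^2$, which provides a quick sanity check of the Monte-Carlo routine.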
To assess the validity of our procedure for deriving bootstrap pointwise and simultaneous $(1-\tilde{\alpha})$ confidence bands described in Section \ref{subsec:comp}, Table \ref{tab:covprob} displays $95\%$ coverage probabilities from 1000 independent samples and $r=500$ bootstrap resamples. The parametric setup is identical to the one used in Table \ref{tab:improve} but with fixed sample size equal to $n=100$. Overall, excluding the independence case, the simultaneous method \eqref{eq:confidentIntervals} outperforms the pointwise method, since the coverage probabilities are always larger. \begin{table}[t!] \begin{center} {\footnotesize \begin{tabular}{llccccc} \toprule Estimator & Confidence band type & \multicolumn{5}{c}{Parameter $\alpha'$}\\ & & $0.3$ & $0.5$ & $0.7$ & $0.9$ &$1$\\ \midrule BP-P & Pointwise & 41.53 & 35.92 & 50.66 & 72.23 & 83.05 \\ & Simultaneous & 73.34 & 69.13 & 68.79 & 75.11 & 84.82 \\ \midrule BP-CFG & Pointwise & 26.89 & 42.65 & 42.60 & 57.68 & 57.30 \\ & Simultaneous & 62.24 & 61.92 & 60.67 & 66.54 & 57.32\\ \midrule BP-HT & Pointwise & 29.33 & 28.92 & 49.38 & 65.20 & 10.42 \\ & Simultaneous & 51.26 & 54.22 & 60.91 & 81.33 & 10.68 \\ \midrule BP-MD & Pointwise & 54.63 & 70.15 & 66.13 & 73.43 & 94.63 \\ & Simultaneous & 76.40 & 80.48 & 80.26 & 81.36 & 94.65 \\ \bottomrule \end{tabular}} \caption{$95\%$ coverage probabilities of the BP method with four non-parametric estimators for the symmetric logistic model.} \label{tab:covprob} \end{center} \end{table} \begin{figure} \caption{{ Estimates of the Pickands dependence function for $d=3$ (light grey shade) and bootstrap variability bands (dark grey shade) for the SL, AL, HR, EST (left-right) models with strong, mild and weak dependence (top-bottom)}} \label{fig:triv} \end{figure} To close this small simulation study\footnote{The case $d=2$ has also been considered. The results have been omitted for brevity, since they arrive at the same conclusion. 
Tables analogous to Tables \ref{tab:MISE}, \ref{tab:improve} and \ref{tab:covprob} are available upon request for the HR and EST families and convey the same overall message.}, we extend the class of parametric families to the asymmetric logistic (AL, \citeNP{tawn90}) with $\theta=0.6$, $\phi=0.3$, $\psi=0$, the H\"{u}sler--Reiss model (HR, \citeNP{husler+r89}) with three cases ($\gamma_1=0.8,\gamma_2=0.3,\gamma_3=0.7$), ($\gamma_1=0.49, \gamma_2=0.51, \gamma_3=0.03$), ($\gamma_1=0.24,\gamma_2=0.23,\gamma_3=0.11$) and the extremal skew-$t$ (EST, \citeNP{padoan11}) with three setups ($\alpha^{*}=7,-10,1$, $\nu=3$, $\omega=0.9$), ($\alpha^{*}=-2,9,-15$, $\nu=2$, $\omega=0.9$), ($\alpha^{*}=-0.5,-0.5,-0.5$, $\nu=3$, $\omega=0.9$). Figure \ref{fig:triv} shows that, for all these cases, the lower and upper limits of the variability bands are always convex functions and they always contain the true Pickands dependence function. The variability bands of weaker dependence structures are typically wider than those of stronger dependence structures. The same is true for asymmetric versus symmetric dependence structures. \section{Weekly maxima of hourly rainfall in France}\label{sec:data} Coming back to Figure \ref{fig:map} introduced in Section \ref{sec:intro}, our goal here is to measure the dependence within each cluster of size $d=7$. The clusters were obtained by running the algorithm proposed by \shortciteN{bernard13} on weekly maxima of hourly rainfall recorded in the Fall season from 1993 to 2011, i.e., $n=228$ for each station. The aim of the clustering was to identify groups of locations with homogeneous climatological characteristics within a cluster and heterogeneous characteristics between clusters. Climatologically, extreme precipitation that affects the Mediterranean coast in the fall is caused by the interaction of southerly winds with mountain winds coming from the Pyr\'{e}n\'{e}es, C\'{e}vennes and Alps regions. 
In the north of the country, heavy rainfall is often produced by mid-latitude perturbations over Brittany, the north of France and Paris. It can be checked that extremes within clusters are indeed strongly dependent. For each cluster, we compute our Bernstein projection estimator based on the madogram, fixing the polynomial degree $k$ equal to $7$. To summarize this seven-dimensional dependence structure, we take advantage of the \emph{extremal coefficient} \cite{smith90} defined by $$ \theta=d\,A(1/d,\ldots,1/d). $$ It satisfies the condition $1\leq\theta\leq d$, where the lower and upper bounds represent the cases of complete dependence and independence among the extremes, respectively. In each cluster, the extremal coefficient is estimated using the equation $\widehat{\theta}~=~7\,\widetilde{A}^{\mathrm{MD}}_{n,k}(1/7,\ldots,1/7)$, so that $\widehat{\theta}$ always belongs to the interval $[1,7]$. The range of the estimated coefficients is between $3.5$, indicating strong dependence, and $4.6$, indicating medium dependence. \begin{figure} \caption{French precipitation data. Left: pairwise extremal coefficients as a function of distance between weather stations. Right: estimates of Pickands dependence functions for four pairs of stations at decreasing distances (black: raw madogram estimator; gray: Bernstein projection madogram estimator).} \label{fig:biv-extrindBiv} \end{figure} As climatologically expected, we can detect in~Figure \ref{fig:map} a latitudinal gradient in the estimated extremal coefficients. They are smaller in the northern regions and larger in the south. This can be explained by westerly fronts above $46^\circ$ latitude that affect large regions, whereas extreme precipitation in the south is more likely to be driven by localised convective storms with weak spatial dependence structures. 
Finally, in the center of the country, away from the coasts, there is the highest degree of dependence among extremes, as they are the result of the meeting between different densities of air masses. For all possible pairs of locations we have estimated the bivariate Pickands dependence function using the madogram estimator and its Bernstein projection. The left-hand panel of Figure \ref{fig:biv-extrindBiv} shows the pairwise extremal coefficients versus the Euclidean distance between sites, computed through the estimated Pickands dependence functions. We have $\widehat{\theta}\leq 1.5$ for locations that are less than 200 km apart, meaning that the extremes are strongly or at least mildly dependent, while for sites more than 200 km apart we have $\widehat{\theta}>1.5$, meaning that the extremes are at most mildly dependent or even independent. The graph also shows the benefits of the projection method: after projection, the extremal coefficients fall within the admissible range $[1,2]$, whereas they can be larger than $2$ without the projection method. The right-hand plot of Figure~\ref{fig:biv-extrindBiv} shows four examples of estimated Pickands dependence functions obtained with pairs of sites whose distances are 979.8, 505.9, 390.1 and 158.1 km, respectively (top-left to bottom-right panels). The madogram estimator provides estimates (black lines) that are not convex functions and hence are not Pickands dependence functions themselves. By contrast, the estimates (gray lines) obtained with the projection estimator are valid Pickands dependence functions. 
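A minimal bivariate ($d=2$) sketch of the rank-based madogram estimator underlying these pairwise fits, combining the empirical version of the madogram \eqref{eq:mado} with the inversion formula \eqref{eq:Amd}, is given below. The independent sample used for the sanity check is an illustrative assumption: for the independence copula, $A\equiv1$ and the pairwise extremal coefficient equals $2$.

```python
import numpy as np

def pseudo_obs(x):
    """Empirical-CDF pseudo-observations F_n(X_j) = rank/n."""
    n = len(x)
    return (np.argsort(np.argsort(x)) + 1.0) / n

def A_madogram(u, v, w):
    """Bivariate rank-based madogram estimate of A(w) for 0 < w < 1."""
    a = u ** (1.0 / w)
    b = v ** (1.0 / (1.0 - w))
    nu = np.mean(np.maximum(a, b) - 0.5 * (a + b))       # empirical w-madogram
    c = 0.5 * (w / (1.0 + w) + (1.0 - w) / (2.0 - w))    # c(w) for d = 2
    return (nu + c) / (1.0 - nu - c)

# Sanity check on an independent sample (an extreme-value copula with A = 1).
rng = np.random.default_rng(2)
x, y = rng.random(20000), rng.random(20000)
u, v = pseudo_obs(x), pseudo_obs(y)
A_half = A_madogram(u, v, 0.5)
theta_hat = 2.0 * A_half         # pairwise extremal coefficient, close to 2 here
```

As discussed above, the raw estimate need not be convex nor stay within the admissible range; in our analysis this is remedied by the Bernstein--B\'ezier projection.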
\section{Computational Details}\label{sec:comp} Simulations and data analysis were performed using the \textsf{R} package \texttt{ExtremalDep} (\url{https://r-forge.r-project.org/R/?group\_id=1998}). \section{Proofs}\label{sec:appA} For ${\boldsymbol{w}} \in \mathcal{S}_{{d-1}}$, define the function $\nu_{{\boldsymbol{w}}}: [0,1]^d\rightarrow [0,1]$ by \begin{equation}\label{eq:mado_function} \nu_{{\boldsymbol{w}}}({\boldsymbol{u}})=\bigvee_{i=1}^d(u_i^{1/w_i})-\frac{1}{d}\sum_{i=1}^{d} u_i^{1/w_i}, \quad {\boldsymbol{u}}\in[0,1]^d, \end{equation} where, by convention, $u^{1/w} = 0$ whenever $w = 0$ and $u \in [0, 1]$. \begin{lem} \label{lem:int} For any cumulative distribution function $H$ on $[0, 1]^d$ and for any ${\boldsymbol{w}} \in \mathcal{S}_{{d-1}}$, we have \[ \int_{[0, 1]^d} \nu_{{\boldsymbol{w}}}({\boldsymbol{u}}) \, \mathrm{d} H({\boldsymbol{u}}) = \frac{1}{d} \sum_{i=1}^d \int_0^1 H(1, \ldots, 1, x^{w_i}, 1, \ldots, 1) \, \mathrm{d} x - \int_0^1 H(x^{w_1}, \ldots, x^{w_d}) \, \mathrm{d} x. \] \end{lem} \begin{proof} Fix ${\boldsymbol{w}}\in\mathcal{S}_{{d-1}}$. For every ${\boldsymbol{u}}\in [0,1]^d$ we have \begin{equation*} \begin{split} \bigvee_{i=1}^du_i^{1/w_i} &=1-\int_0^1\mathds{1}(\forall i=1,\ldots,d:u_i^{1/w_i}\leq x)\, \mathrm{d} x\\ &=1-\int_0^1\mathds{1}(\forall i=1,\ldots,d:u_i\leq x^{w_i})\, \mathrm{d} x \end{split} \end{equation*} and $$ \frac{1}{d}\sum_{i=1}^d u_i^{1/w_i}=1-\frac{1}{d}\sum_{i=1}^d\int_0^1\mathds{1}(u_i\leq x^{w_i})\, \mathrm{d} x. $$ Subtracting both expressions and integrating over $H$ yields \begin{equation*} \begin{split} \int_{[0, 1]^d} \nu_{{\boldsymbol{w}}}({\boldsymbol{u}}) \, \mathrm{d} H({\boldsymbol{u}}) &= \frac{1}{d} \sum_{i=1}^d \int_{[0, 1]^d} \int_0^1 \mathds{1}(u_i \le x^{w_i}) \, \mathrm{d} x \, \mathrm{d} H({\boldsymbol{u}}) \\ &\qquad \mbox{} - \int_{[0, 1]^d} \int_0^1 \mathds{1}(\forall i = 1, \ldots, d: u_i \le x^{w_i}) \, \mathrm{d} x \, \mathrm{d} H({\boldsymbol{u}}). 
\end{split} \end{equation*} Applying Fubini's theorem to both double integrals yields the stated formula. \end{proof} \begin{proof}[Proof of Proposition~\ref{prop:multimado}] Since the marginal distribution functions are continuous, the copula $C$ is the joint distribution function of the random vector $(F_1(X_1), \ldots, F_d(X_d))$. For ${\boldsymbol{w}} \in \mathcal{S}_{{d-1}}$, the multivariate ${\boldsymbol{w}}$-madogram can thus be written as \[ \nu({\boldsymbol{w}}) = \int_{[0, 1]^d} \nu_{{\boldsymbol{w}}}({\boldsymbol{u}}) \, \mathrm{d} C({\boldsymbol{u}}). \] Next, apply Lemma~\ref{lem:int}. Since $C$ is an extreme-value copula with Pickands dependence function $A$, we find, after some elementary calculations using \eqref{eq:ev_copula} and \eqref{eq:ellA}, \[ C(x^{w_1}, \ldots, x^{w_d}) = x^{A({\boldsymbol{w}})} \] for all $x \in (0, 1)$. We obtain \begin{eqnarray} \label{eq:C2nu} \nu({\boldsymbol{w}}) &=& \frac{1}{d} \sum_{i=1}^d \int_0^1 C(1, \ldots, 1, x^{w_i}, 1, \ldots, 1) \, \mathrm{d} x - \int_0^1 C(x^{w_1}, \ldots, x^{w_d}) \, \mathrm{d} x \\ \nonumber &=& \frac{1}{d} \sum_{i=1}^d \int_0^1 x^{w_i} \, \mathrm{d} x - \int_0^1 x^{A({\boldsymbol{w}})} \, \mathrm{d} x, \end{eqnarray} yielding the first formula stated in the proposition. Solve for $A({\boldsymbol{w}})$ to obtain \eqref{eq:Amd}. Since $\nu({\boldsymbol{w}}) + c({\boldsymbol{w}}) = A({\boldsymbol{w}}) / (1 + A({\boldsymbol{w}}))$, necessarily $\nu({\boldsymbol{w}}) + c({\boldsymbol{w}}) < 1$, so that the right-hand side of \eqref{eq:Amd} is well-defined. \end{proof} \begin{proof}[Proof of Theorem~\ref{prop:prop_multimado}] The proof proceeds by expressing the ${\boldsymbol{w}}$-madogram statistics $\nu_n( {\boldsymbol{w}} )$ and $\widehat{\nu}_n( {\boldsymbol{w}} )$ in terms of the empirical distribution function and the empirical copula, and by exploiting known results on the latter. 
For $i = 1, \ldots, d$ and $j = 1, \ldots, n$, let \begin{equation*} \begin{split} {\boldsymbol{U}}_j&= (U_{j,1},\ldots,U_{j,d}),\quad U_{j,i}=F_{i}(X_{j,i}),\\ \widehat{{\boldsymbol{U}}}_j &= (\widehat{U}_{j,1},\ldots,\widehat{U}_{j,d}), \quad \widehat{U}_{j,i}= F_{n,i}(X_{j,i}) = \frac{1}{n}\sum_{m=1}^n\mathds{1}(X_{m,i}\leq X_{j,i}). \end{split} \end{equation*} Recall $\nu_{{\boldsymbol{w}}}$ in \eqref{eq:mado_function}. The ${\boldsymbol{w}}$-madogram statistics are equal to $$ \nu_n( {\boldsymbol{w}} ) = \frac{1}{n} \sum_{m=1}^n \nu_{{\boldsymbol{w}}}({\boldsymbol{U}}_m) = \int_{[0, 1]^d} \nu_{{\boldsymbol{w}}}( {\boldsymbol{u}} ) \, \mathrm{d} C_n({\boldsymbol{u}}),\quad \widehat{\nu}_n( {\boldsymbol{w}} ) = \int_{[0, 1]^d} \nu_{{\boldsymbol{w}}}( {\boldsymbol{u}} ) \, \mathrm{d} \hat{C}_n({\boldsymbol{u}}), $$ respectively, where $C_n$ and $\widehat{C}_n$ are the empirical distribution function and the empirical copula: $$ C_n({\boldsymbol{u}}) = \frac{1}{n} \sum_{m=1}^n \mathds{1}({\boldsymbol{U}}_m \le {\boldsymbol{u}} ), \quad \widehat{C}_n({\boldsymbol{u}}) = \frac{1}{n} \sum_{m=1}^n \mathds{1}( \widehat{{\boldsymbol{U}}}_m \le {\boldsymbol{u}} ), \qquad {\boldsymbol{u}} \in [0, 1]^d, $$ (component-wise inequalities). By Lemma~\ref{lem:int} we obtain \begin{equation}\label{eq:Cn2nun} \nu_n({\boldsymbol{w}}) = \frac{1}{d} \sum_{i=1}^d \int_0^1 C_n(1, \ldots, 1, x^{w_i}, 1, \ldots, 1) \, \mathrm{d} x - \int_0^1 C_n(x^{w_1}, \ldots, x^{w_d}) \, \mathrm{d} x, \end{equation} and a similar expression is obtained for $\widehat{\nu}_n({\boldsymbol{w}})$ but with $C_n$ replaced by $\widehat{C}_n$. Comparing the latter equation with \eqref{eq:C2nu} yields $$ \norm{\nu_n - \nu }_\infty \le 2 \norm{C_n - C }_\infty, $$ and a similar inequality holds for $\widehat{\nu}_n$ in terms of $\widehat{C}_n$. Standard empirical process arguments yield uniform strong consistency of $C_n$ and of the empirical copula $\widehat{C}_n$ (\citeNP{deheuvels91}). Uniform strong consistency of $A_n$ and $\widehat{A}_n$ follows. 
Next, consider the empirical processes $$ \mathbb{D}_n=\sqrt{n}(C_n-C), \qquad \widehat{\mathbb{D}}_n=\sqrt{n}(\widehat{C}_n-C). $$ Combining Equations~\eqref{eq:C2nu} and \eqref{eq:Cn2nun} we obtain $$ \sqrt{n} \bigl(\nu_n({\boldsymbol{w}}) - \nu({\boldsymbol{w}}) \bigr) =\frac{1}{d}\sum_{i=1}^d \int_0^1\mathbb{D}_n(1,\ldots,1,x^{w_i},1,\ldots,1) \, \mathrm{d} x -\int_0^1\mathbb{D}_n(x^{w_1},\ldots,x^{w_d})\, \mathrm{d} x $$ and clearly a similar expression is obtained for $\sqrt{n} \bigl( \widehat{\nu}_n({\boldsymbol{w}}) - \nu({\boldsymbol{w}}) \bigr)$ but replacing $\mathbb{D}_n$ with $\widehat{\mathbb{D}}_n$. We now invoke two known results: in the space $\ell^{\infty}([0,1]^d)$ equipped with the supremum norm, $\mathbb{D}_n\leadsto\mathbb{D}$, as $n\rightarrow\infty$, where $\mathbb{D}$ is a $C$-Brownian bridge, and if Condition \ref{cond:smooth} holds, then $\widehat{\mathbb{D}}_n\leadsto\widehat{\mathbb{D}}$, as $n\rightarrow\infty$, where $\widehat{\mathbb{D}}$ is the Gaussian process defined in \eqref{eq:cop_proc}. The map \begin{equation*} \phi : \ell^{\infty}([0,1]^d) \to \ell^{\infty}(\mathcal{S}_{{d-1}}) : f \mapsto \phi(f) \end{equation*} defined by $$ (\phi(f))({\boldsymbol{w}}) = \frac{1}{d}\sum_{i=1}^d \int_0^1 f(1,\ldots,1,x^{w_i},1,\ldots,1) \, \mathrm{d} x - \int_0^1 f(x^{w_1},\ldots,x^{w_d}) \, \mathrm{d} x $$ is linear and bounded, and therefore continuous. The continuous mapping theorem then implies $$ \sqrt{n}(\nu_n-\nu)=\phi(\mathbb{D}_n)\leadsto\phi(\mathbb{D}), \quad\sqrt{n}(\widehat{\nu}_n-\nu)=\phi(\widehat{\mathbb{D}}_n)\leadsto\phi(\widehat{\mathbb{D}}),\quad n\rightarrow\infty, $$ in $\ell^{\infty}(\mathcal{S}_{{d-1}})$. The Gaussian process $\widehat{\mathbb{D}}$ satisfies $$ {\mathbb P}\{\forall\, i=1,\ldots,d :\forall\, u \in[0,1]: \widehat{\mathbb{D}}(1,\ldots,1,u,1,\dots,1)=0\}=1. $$ This property follows from the continuity of its sample paths and from the form of the covariance function \eqref{eq:covariance}. 
We find, for ${\boldsymbol{w}} \in \mathcal{S}_{{d-1}}$, $$ (\phi(\widehat{\mathbb{D}}))({\boldsymbol{w}}) = - \int_0^1 \widehat{\mathbb{D}}(x^{w_1}, \ldots, x^{w_d}) \, \mathrm{d} x. $$ Finally, apply the functional delta method (\citeNP{vaart98}, Ch.\ 20) to arrive at the conclusion. \end{proof} \begin{proof}[Proof of Proposition~\ref{prop:conv_bapp}] We have $|B_A({\boldsymbol{w}};k)-A({\boldsymbol{w}})|\leq{\mathbb E} |A({\bf Y}_k/k)-A({\boldsymbol{w}})|$, where ${\bf Y}_k=(Y_{k,i};i=1,\ldots,d)$ is a multinomial random vector with $k$ trials, $d$ possible outcomes, and success probabilities $w_1,\ldots,w_d$. Any function $A \in \mathcal{A}$ is Lipschitz-1, so that $|B_A({\boldsymbol{w}};k)-A({\boldsymbol{w}})|\leq\sum_{i=1}^d{\mathbb E}|Y_{k,i}/k-w_i|$. By the Cauchy--Schwarz inequality and the fact that the random variables $Y_{k,i}$ are binomially distributed, it follows that $|B_A({\boldsymbol{w}};k)-A({\boldsymbol{w}})|\leq\sum_{i=1}^d({\mathbb E}(Y_{k,i}/k-w_i)^2)^{1/2} \le d/(2\sqrt{k})$. \end{proof} \begin{proof}[Proof of Proposition~\ref{prop:lower}] On the one hand, we show that if $B_A({\boldsymbol{w}};k) \geq \max(w_1, \ldots, w_d)$, then $D_{{\boldsymbol{v}}_i-{\boldsymbol{v}}_j} B_A({\boldsymbol{v}}_j;k) \geq -1$. Indeed, $\max(w_1, \ldots, w_d)$ is the pointwise maximum of the hyperplanes $z_{0} = 1-w_1-w_2-\cdots-w_{d-1}$, $z_1=w_1$, $\ldots$, $z_{d-1}=w_{d-1}$, so that, by the assumption, $$ B_A({\boldsymbol{v}}_j;k) \geq z_j, \qquad j=0,1,\ldots,d-1. 
$$ The directional derivatives of $B_A$, calculated at ${\boldsymbol{v}}_j$, $j=0,1,\ldots,d-1$, are equal to \begin{equation} \label{eq:directderiv} D_{{\boldsymbol{v}}_i-{\boldsymbol{v}}_j} B({\boldsymbol{v}}_j;k)= \begin{cases} D_{{\boldsymbol{v}}_i-{\boldsymbol{v}}_0} B({\boldsymbol{v}}_0;k) & \mbox{ if } i \neq 0 = j\\ - D_{{\boldsymbol{v}}_j-{\boldsymbol{v}}_0} B({\boldsymbol{v}}_j;k) & \mbox{ if } i = 0 \neq j\\ D_{{\boldsymbol{v}}_i-{\boldsymbol{v}}_0} B({\boldsymbol{v}}_j;k) - D_{{\boldsymbol{v}}_j-{\boldsymbol{v}}_0} B({\boldsymbol{v}}_j;k) & \mbox{ if } i\neq 0 \neq j, i\neq j\\ \end{cases} \end{equation} Taking directional derivatives on both sides of the above inequality, we obtain $$ D_{{\boldsymbol{v}}_i-{\boldsymbol{v}}_j} B_A({\boldsymbol{v}}_j;k) \geq -1,\quad \forall\;i,j = 0,1,\ldots,d-1, \, i\neq j, $$ and hence the result. On the other hand, if $D_{{\boldsymbol{v}}_i-{\boldsymbol{v}}_j}B_A({\boldsymbol{v}}_j;k)\geq -1$, $j=0,\ldots,d-1$, then $B_A({\boldsymbol{w}};k)\geq\max(w_1,\ldots,w_d)$. Indeed, by the convexity assumption $B_A$ lies above its tangent planes, \begin{equation} \label{eq: tangentplane} B_A({\boldsymbol{w}};k) \geq B_A({\boldsymbol{w}}';k) + ({\boldsymbol{w}}-{\boldsymbol{w}}')^\top\nabla B_A({\boldsymbol{w}}';k), \qquad \forall \, {\boldsymbol{w}},{\boldsymbol{w}}' \in \mathcal{S}_{{d-1}}. \end{equation} Evaluating this inequality at ${\boldsymbol{w}}'={\boldsymbol{v}}_j$ for $j \in \{ 0,1,\ldots,d-1\}$ we obtain the desired result $B_A({\boldsymbol{w}};k) \geq w_j$ for all ${\boldsymbol{w}} \in \mathcal{S}_{{d-1}}$. 
Indeed, considering \eqref{eq: tangentplane} at ${\boldsymbol{w}}'={\boldsymbol{v}}_0$ we find, for ${\boldsymbol{w}} \in \mathcal{S}_{{d-1}}$, $$ B_A({\boldsymbol{w}};k) \geq 1 + {\boldsymbol{w}}^\top \nabla B_A({\boldsymbol{v}}_0;k) = 1 + \sum_{i=1}^{d-1} w_i \, D_{{\boldsymbol{v}}_i-{\boldsymbol{v}}_0} B({\boldsymbol{v}}_0;k) \geq 1 + \sum_{i=1}^{d-1} w_i \, (-1) = w_d $$ where $w_d = 1-w_1-\cdots-w_{d-1}$, as required. Furthermore, considering \eqref{eq: tangentplane} at ${\boldsymbol{w}}'={\boldsymbol{v}}_j$ for $j\in \{1,\ldots,d-1\}$ we find for ${\boldsymbol{w}} \in \mathcal{S}_{{d-1}}$, \begin{eqnarray} \nonumber B_A({\boldsymbol{w}};k) &\geq& 1 + ({\boldsymbol{w}}-{\boldsymbol{v}}_j)^\top \nabla B_A({\boldsymbol{v}}_j;k) \\ \nonumber &=& 1 + (w_j-1)D_{{\boldsymbol{v}}_j-{\boldsymbol{v}}_0} B({\boldsymbol{v}}_j;k) + \sum_{\substack{i=1\\i\neq j}}^{d-1} w_i \, D_{{\boldsymbol{v}}_i-{\boldsymbol{v}}_0} B({\boldsymbol{v}}_j;k) \\ \nonumber &\geq& 1 + (w_j-1)D_{{\boldsymbol{v}}_j-{\boldsymbol{v}}_0} B({\boldsymbol{v}}_j;k) + \sum_{\substack{i=1\\i\neq j}}^{d-1} w_i \, \bigg( D_{{\boldsymbol{v}}_j-{\boldsymbol{v}}_0} B({\boldsymbol{v}}_j;k) -1 \bigg) \\ \nonumber &=& 1 + (w_j-1)D_{{\boldsymbol{v}}_j-{\boldsymbol{v}}_0} B({\boldsymbol{v}}_j;k) + (1-w_j-w_d) \, \bigg( D_{{\boldsymbol{v}}_j-{\boldsymbol{v}}_0} B({\boldsymbol{v}}_j;k) -1 \bigg) \\ \nonumber &=& w_j + w_d \, \bigg(1-D_{{\boldsymbol{v}}_j-{\boldsymbol{v}}_0} B({\boldsymbol{v}}_j;k) \bigg)\geq w_j \nonumber \end{eqnarray} given that $D_{{\boldsymbol{v}}_i-{\boldsymbol{v}}_0} B({\boldsymbol{v}}_j;k) \geq D_{{\boldsymbol{v}}_j-{\boldsymbol{v}}_0} B({\boldsymbol{v}}_j;k) -1$ and $1-D_{{\boldsymbol{v}}_j-{\boldsymbol{v}}_0} B({\boldsymbol{v}}_j;k) \geq 0$. \end{proof} \begin{proof}[Proof of Proposition~\ref{prop:nested}] Firstly, the polynomials in $\mathcal{A}_k$ are nested (e.g., \citeANP{wang+g12}, \citeyearNP{wang+g12}; \citeANP{farin86}, \citeyearNP{farin86}). 
By the degree-raising property we have $$ B({\boldsymbol{w}};k)=\sum_{{\boldsymbol\alpha} \in \Gamma_k} \beta_{{\boldsymbol\alpha}} b_{{\boldsymbol\alpha}}({\boldsymbol{w}};k)=\sum_{{\boldsymbol\alpha} \in \Gamma_{k+1}} \tilde{\beta}_{{\boldsymbol\alpha}} b_{{\boldsymbol\alpha}}({\boldsymbol{w}};k+1)=\tilde{B}({\boldsymbol{w}};k+1) $$ where \begin{equation} \label{eq:beta_tilde} \tilde{\beta}_{{\boldsymbol\alpha}} = \sum_{h=1}^d \frac{\alpha_{h}}{k+1}\beta_{{\boldsymbol\alpha} - {\boldsymbol{v}}_{h-1}}. \end{equation} We need to show that the coefficients $\tilde{\beta}_{{\boldsymbol\alpha}}$ satisfy the constraints R1)-R2)-R3). For the case R1) we need to check that $$ \Delta_{i,0}^2 \tilde{\beta}_{{\boldsymbol\alpha}} - \sum_{\substack{j\neq i}} |\Delta_{i,0} \Delta_{j,0} \tilde{\beta}_{{\boldsymbol\alpha}}|\geq 0,\quad \forall {\boldsymbol\alpha} \in \Gamma_{(k+1)-2},\;i=1,\ldots,d-1. $$ This can be rewritten as $$ \Delta_{i,0}^2 \tilde{\beta}_{{\boldsymbol\alpha}} - \sum_{\substack{j\neq i}} (-1)^{I_{s,t}}\Delta_{i,0} \Delta_{j,0} \tilde{\beta}_{{\boldsymbol\alpha}}\geq 0, $$ where the exponents $I_{s,t}$ range over all possible sequences of $d-2$ terms chosen with repetition from the set $\{1,2\}$, with $s=1,\ldots,d-2$ and $t=1,\ldots,2^{d-2}$. Using the relation in \eqref{eq:beta_tilde} we have $$ \tilde{\beta}_{{\boldsymbol\alpha}} = \sum_{h=1}^d \frac{\alpha_{h}}{k+1} \beta_{{\boldsymbol\alpha}-{\boldsymbol{v}}_{h-1}} = \sum_{h=1}^{d-1} \frac{\alpha_{h}}{k+1} \beta_{{\boldsymbol\alpha}-{\boldsymbol{e}}_h} + \frac{\alpha_{d}}{k+1} \beta_{{\boldsymbol\alpha}}. 
$$ Then, we obtain \begin{eqnarray} \nonumber \Delta_{i,0}^2 \tilde{\beta}_{{\boldsymbol\alpha}} - \sum_{\substack{j\neq i}} (-1)^{I_{s,t}}\Delta_{i,0} \Delta_{j,0} \tilde{\beta}_{{\boldsymbol\alpha}} \nonumber &=& \Delta_{i,0}^2 \left\{ \sum_{h=1}^{d-1} \frac{\alpha_{h}}{k+1} \beta_{{\boldsymbol\alpha}-{\boldsymbol{e}}_h} + \frac{\alpha_{d}}{k+1} \beta_{{\boldsymbol\alpha}} \right\}\\ \nonumber &-& \sum_{\substack{j\neq i}} (-1)^{I_{s,t}} \Delta_{i,0} \Delta_{j,0} \left\{ \sum_{h=1}^{d-1} \frac{\alpha_{h}}{k+1} \beta_{{\boldsymbol\alpha}-{\boldsymbol{e}}_h} + \frac{\alpha_{d}}{k+1} \beta_{{\boldsymbol\alpha}} \right\}\\ \nonumber &=& \sum_{h=1}^{d-1} \frac{\alpha_{h}}{k+1} \left\{ \Delta_{i,0}^2 \beta_{{\boldsymbol\alpha}-{\boldsymbol{e}}_h} - \sum_{\substack{j\neq i}} (-1)^{I_{s,t}} \Delta_{i,0} \Delta_{j,0} \beta_{{\boldsymbol\alpha}-{\boldsymbol{e}}_h} \right\} \\ \nonumber &+& \sum_{h=1}^{d-1} \frac{\alpha_{d} -1}{k+1} \left\{ \Delta_{i,0}^2 \beta_{{\boldsymbol\alpha}} - \sum_{\substack{j\neq i}} (-1)^{I_{s,t}} \Delta_{i,0} \Delta_{j,0} \beta_{{\boldsymbol\alpha}} \right\} \geq 0,\\ \nonumber \end{eqnarray} and hence the result. For the case R2), using \eqref{eq:beta_tilde}, it is immediate to verify for the set $ \{ \tilde{\beta}_{{\boldsymbol\alpha}}, {\boldsymbol\alpha} \in \Gamma_{k+1} : {\boldsymbol\alpha} = {\bf 0} \text{ or } {\boldsymbol\alpha} = (k+1) \, {\boldsymbol{e}}_i, \, \forall i=1,\dots, d-1 \} $ that $\tilde{\beta}_{{\boldsymbol\alpha}}=\beta_{{\boldsymbol\alpha}}=1$. Finally, for the case R3) we need to check that $ 1 - 1/(k+1) < \tilde{\beta}_{{\boldsymbol\alpha}}, $ where $ \{ \tilde{\beta}_{{\boldsymbol\alpha}}, {\boldsymbol\alpha} \in \Gamma_{k+1} : {\boldsymbol\alpha} = {\boldsymbol{e}}_i \text{ or } {\boldsymbol\alpha} = k {\boldsymbol{e}}_i \text{ or } {\boldsymbol\alpha} = k {\boldsymbol{e}}_i + {\boldsymbol{e}}_j, \; \forall j\neq i=1,\dots, d-1 \}. 
$ By definition we have $$ \tilde{\beta}_{{\boldsymbol{e}}_i} = \frac{k}{k+1}\beta_{{\boldsymbol{e}}_i} + \frac{1}{k+1}, \; \tilde{\beta}_{k{\boldsymbol{e}}_i} = \frac{k}{k+1}\beta_{(k-1){\boldsymbol{e}}_i} + \frac{1}{k+1},\; \tilde{\beta}_{k{\boldsymbol{e}}_i+{\boldsymbol{e}}_j} = \frac{k}{k+1}\beta_{(k-1){\boldsymbol{e}}_i+{\boldsymbol{e}}_j} + \frac{1}{k+1}. $$ Substituting $\tilde{\beta}_{{\boldsymbol\alpha}}$, with ${\boldsymbol\alpha} = {\boldsymbol{e}}_i, {\boldsymbol\alpha} = k {\boldsymbol{e}}_i, {\boldsymbol\alpha} = k {\boldsymbol{e}}_i+{\boldsymbol{e}}_j$, in the previous inequality we obtain \begin{eqnarray} \nonumber \tilde{\beta}_{{\boldsymbol\alpha}} &\geq& 1-\frac{1}{k+1} \\ \nonumber \frac{1}{k+1} \left\{ 1+k\beta_{{\boldsymbol\alpha}-{\boldsymbol{v}}_{i-1}} \right\} &\geq & \frac{1}{k+1} \left\{ 1+k\left(1-\frac{1}{k}\right) \right\} \\ \nonumber \beta_{{\boldsymbol\alpha}-{\boldsymbol{v}}_{i-1}}&\geq & 1 - \frac{1}{k} \end{eqnarray} for $i=1,\ldots,d-1$, and hence the result. Thus the first statement is proven.\par Secondly, let $A$ be a Pickands dependence function and consider the Bernstein polynomial \[ A_k( {\boldsymbol{w}} ) = \sum_{{\boldsymbol\alpha} \in \Gamma_k} A( {\boldsymbol\alpha} / k ) \, b_{{\boldsymbol\alpha}}( {\boldsymbol{w}} ; k ), \] that is, $A_k = B_A( \,\cdot\,; k )$ as in \eqref{eq:polyrap}. Constraint R1) holds by assumption~\eqref{eq:wddA}. Since $\max(w_1, \ldots, w_d) \le A( {\boldsymbol{w}} ) \le 1$ for all ${\boldsymbol{w}} \in \mathcal{S}_{{d-1}}$, the constraints in R2) and R3) are satisfied too. Finally, we have uniform convergence $A_k \to A$ by Proposition~\ref{prop:conv_bapp}. \iffalse $\cup_{k=1}^\infty \mathcal{A}_k$ is dense in $\mathcal{A}$, where the latter is the set of functions satisfying conditions (C1)-(C2). 
Now, if $\mathcal{A}=\{{\boldsymbol{w}} \mapsto A({\boldsymbol{w}}):2A(({\boldsymbol{w}}+{\boldsymbol{w}}')/2)\leq A({\boldsymbol{w}})+A({\boldsymbol{w}}'),\,\forall\, {\boldsymbol{w}},{\boldsymbol{w}}'\in\mathcal{S}_{{d-1}}\}$, that is $A$ is convex in $\mathcal{S}_{{d-1}}$, and \eqref{eq:wddA} also holds, then taking $\beta_{{\boldsymbol\alpha}}=A({\boldsymbol\alpha}/k)$ for all $k$ and ${\boldsymbol\alpha}\in\Gamma_{k-2}$ provides the condition $\Delta_{i,0}^2 \beta_{{\boldsymbol\alpha}} - \sum_{\substack{j\neq i}} |\Delta_{i,0} \Delta_{j,0} \beta_{{\boldsymbol\alpha}}| \geq 0$ which corresponds to the constraint R1). Next, since $\mathcal{A}=\{{\boldsymbol{w}} \mapsto A({\boldsymbol{w}}):A({\bf 0})=A({\boldsymbol{e}}_j)=1 , \,\forall\, j=1,\ldots,d\}$, for the subset $\{ {\boldsymbol\alpha} = {\bf 0} \text{ and } {\boldsymbol\alpha} = k \, {\boldsymbol{e}}_i, \, \forall i=1,\dots, d-1 \}$ we have that $\beta_{{\boldsymbol\alpha}}=A \left( {\boldsymbol\alpha}/k \right)=1$ holds. Finally, since $\mathcal{A}=\{{\boldsymbol{w}} \mapsto A({\boldsymbol{w}}):A({\boldsymbol{w}})\geq \max\left(w_1,\ldots,w_d \right),\forall\, {\boldsymbol{w}}\in\mathcal{S}_{{d-1}}\}$, then $A({\boldsymbol\alpha}/k)\geq \max\left( {\boldsymbol\alpha}/k \right)$. In particular, $$ A\left( \frac{{\boldsymbol\alpha}}{k} \right) \geq \max\left\{ 0,\ldots,0,\frac{1}{k},1-\frac{1}{k} \right\} = 1- \frac{1}{k} $$ when ${\boldsymbol\alpha} = {\boldsymbol{e}}_i$ and ${\boldsymbol\alpha} = (k-1) \,{\boldsymbol{e}}_i$ and ${\boldsymbol\alpha} = (k-1)\, {\boldsymbol{e}}_i + {\boldsymbol{e}}_j$, for all $j\neq i=1,\dots, d-1$, and for $k \geq 2$. Notice that these are the same conditions on the polynomial's coefficients, i.e. $\beta_{{\boldsymbol\alpha}}\geq1-1/k$, for the same subset of ${\boldsymbol\alpha}$. 
\\ Therefore, $B({\boldsymbol{w}};k) = \sum_{{\boldsymbol\alpha} \in \Gamma_k} A\left({\boldsymbol\alpha}/k\right) b_{{\boldsymbol\alpha}}( {\boldsymbol{w}}; k) = \sum_{{\boldsymbol\alpha} \in \Gamma_k} \hat{\beta}_{{\boldsymbol\alpha}} b_{{\boldsymbol\alpha}}( {\boldsymbol{w}}; k) \in \mathcal{A}_k \subset \bigcup_{k=1}^\infty \mathcal{A}_k$, so by the Stone-Weierstrass approximation theorem the result follows. \fi \end{proof} \begin{proof}[Proof of Proposition~\ref{prop:prop_bernproj}] Consider the projection of the madogram estimator on the full space $\mathcal{A}$ (rather than on the subspace $\mathcal{A}_k$): \[ \widetilde{A}_n^{\mathrm{MD}} = \operatornamewithlimits{\arg\min}_{B \in\mathcal{A}} \norm{ \widehat{A}_n^{\mathrm{MD}}-B}_2. \] From Theorem \ref{prop:prop_multimado} it follows that $\sqrt{n}(\widehat{A}_n^{\mathrm{MD}}-A)\leadsto Z$ in $L^2(\mathcal{S}_{{d-1}})$ as $n\rightarrow\infty$ where $Z$ is a Gaussian process. Theorem~1 in \shortciteN{fil+g+s08} then implies that \[ \sqrt{n} ( \widetilde{A}_{n}^{\mathrm{MD}} - A ) \leadsto \operatornamewithlimits{\arg\min}_{Z'\in T_\mathcal{A}(A)}\|Z'-Z\|_2,\quad n\rightarrow \infty. \] It remains to show that we can replace $\widetilde{A}_n^{\mathrm{MD}}$ by $\widetilde{A}_{n,k_n}^{\mathrm{MD}}$. It suffices to show that \[ \norm{ \widetilde{A}_{n,k_n}^{\mathrm{MD}} - \widetilde{A}_{n}^{\mathrm{MD}} }_2 = o_p(n^{-1/2}), \qquad n \to \infty. \] By the first inequality in Lemma~1 in \shortciteN{fil+g+s08} with, in their notation, $\mathcal{F} = \mathcal{A}$ and $\mathcal{G} = \mathcal{A}_k$, we find that \[ \norm{ \widetilde{A}_{n,k_n}^{\mathrm{MD}} - \widetilde{A}_{n}^{\mathrm{MD}} }_2 \le [ \delta_{k_n} ( 2 \norm{ \widehat{A}_{n}^{\mathrm{MD}} - \widetilde{A}_{n}^{\mathrm{MD}} }_2 + \delta_{k_n} ) ]^{1/2}, \] where $\delta_{k_n}$ is bounded by the $L_2$ Hausdorff distance between $\mathcal{A}$ and $\mathcal{A}_{k_n}$. 
Proposition~\ref{prop:conv_bapp} yields $\delta_{k_n} = O(k_n^{-1/2})$, which is $o(n^{-1/2})$ by the assumption on $k_n$. Furthermore, since $A \in \mathcal{A}$, we find, by definition of the projection estimator, \[ \norm{ \widehat{A}_{n}^{\mathrm{MD}} - \widetilde{A}_{n}^{\mathrm{MD}} }_2 \le \norm{ \widehat{A}_{n}^{\mathrm{MD}} - A }_2 = O_p(n^{-1/2}), \qquad n \to \infty. \] Combining these relations gives \[ \norm{ \widetilde{A}_{n,k_n}^{\mathrm{MD}} - \widetilde{A}_{n}^{\mathrm{MD}} }_2 \le \big[ \delta_{k_n} \big( 2 \norm{ \widehat{A}_{n}^{\mathrm{MD}} - \widetilde{A}_{n}^{\mathrm{MD}} }_2 + \delta_{k_n} \big) \big]^{1/2} = \big[ o(n^{-1/2}) \, O_p(n^{-1/2}) \big]^{1/2} = o_p(n^{-1/2}), \qquad n \to \infty, \] which completes the proof. \end{proof} \end{document}
AnkPlex: algorithmic structure for refinement of near-native ankyrin-protein docking

Tanchanok Wisitponchai1,2, Watshara Shoombuatong3, Vannajan Sanghiran Lee4,5, Kuntida Kitidee2,6 & Chatchai Tayapiwatana1,2

Computational analysis of protein-protein interaction provides crucial information for increasing binding affinity without a change in the basic conformation. Several docking programs have been used to predict the near-native poses of protein-protein complexes within the 10 top-rankings. Universal criteria for discriminating the near-native pose are not available, since there are several classes of recognition protein. To date, explicit criteria for identifying the near-native pose of ankyrin-protein complexes (APKs) have not been reported. In this study, we established an ensemble computational model for discriminating the near-native docking pose of APKs, named "AnkPlex". A dataset of APKs was generated from seven X-ray APKs, each containing 3 internal domains, using the reliable docking tool ZDOCK. The dataset was composed of 669 near-native and 44,334 non-near-native poses, and it was used to generate eleven informative features. Subsequently, a re-scoring rank was generated by AnkPlex using a combination of a decision tree algorithm and logistic regression. AnkPlex achieved superior efficiency, obtaining ≥1 near-native complex in the 10 top-rankings for nine X-ray complexes, compared to ZDOCK, which succeeded for only six X-ray complexes. In addition, feature analysis demonstrated that the van der Waals feature was dominant in discriminating near-native poses among the potential ankyrin-protein docking poses. The AnkPlex model succeeded at predicting near-native docking poses and led to the discovery of informative characteristics that could further improve our understanding of the ankyrin-protein complex. 
Our computational study could be useful for predicting the near-native poses of binding proteins and desired targets, especially for ankyrin-protein complexes. The AnkPlex web server is freely accessible at http://ankplex.ams.cmu.ac.th.

Generally, antibodies have several applications in therapies and diagnostics because they can be designed to have high affinity for a targeted protein [1, 2]. Due to the complications involved in generating specific antibodies and their large size, alternative scaffolds have been developed to overcome these limitations. One such novel scaffold is comprised of Designed Ankyrin Repeat Proteins (DARPins). These ankyrin-proteins have been used increasingly in medical applications [3–5] because of their stability and high affinity for protein targets [6–8]. Moreover, modification of the residues at the variable part of ankyrin allows for increased binding affinity towards the target protein without changes in the basic protein conformation [9]. High affinity of ankyrin-proteins can be achieved through random modification of variable residues in vitro [9] or through in silico prediction of the residues based on the structure of 3-dimensional (3D) complexes [4, 10]. The 3D protein complexes could be determined by X-ray crystallography or NMR spectroscopy, yet few 3D structures of ankyrin-protein complexes have been reported; most reported structures are monomeric or come from genomics surveys. Therefore, a computational approach, called protein–protein docking, can be used to generate protein complex structures when no experimental complex structure is available. Protein–protein docking is a well-known method for generating protein–protein complexes (poses) using computational methods. 
The challenging task of identifying the exact bound state of a pair of proteins must consider the following factors: (i) there are several potential ways that a pair of proteins can interact, (ii) the flexibility of the protein, and (iii) changes in the protein conformation after binding [11]. Currently, several software programs, such as Gramm-X, DOT, ClusPro, and ZDOCK, have been developed to provide a rational complex for a pair of proteins [12]. The ZDOCK program includes initial-stage docking (ZDOCK algorithm) and refinement methods (RDOCK algorithm). The initial-stage docking is designed to search all possible docking poses [13, 14]. In the refinement stage, the side chains of the docking poses from the ZDOCK algorithm are minimized [15]. The scoring functions (features) of the docking poses are energy terms, such as pairwise shape complementarity (PSC), desolvation (DE), electrostatics (ELEC), and van der Waals. This program has been demonstrated to be one of the most accurate prediction programs in the Critical Assessment of Predicted Interactions (CAPRI) [16]. ZDOCK has successfully predicted several near-native complexes (poses) of antibody-antigen, enzyme-inhibitor and other pairings, assessed by the CAPRI criteria in the 10 top-rankings based on the ZDock, ZRank, or E_RDock features. However, successful predictions do not occur in all cases [17–20]. Moreover, near-native predictors cannot be obtained by simply ranking those features, and manual inspections are often needed as well. Note that manual inspections include cluster, density, favourable contact, charge complementarity, buried hydrophobic residues, and overall agreement with the biological data in the literature. Importantly, not all protein–protein cases agree with the manual inspections. 
Similarly, a complex-type-dependent combinatorial scoring function has been introduced, indicating that the weights of the scoring function differ between protease-inhibitor, antibody-antigen, and enzyme-inhibitor pairings [21]. Therefore, a complicated strategy has to be adopted for obtaining a near-native complex, depending on the type of protein–protein complex. The near-native docking pose of Ankyrin-Her2 was successfully predicted using ZDOCK and an extra scoring function [10]. To date, universal criteria for obtaining the near-native complex of ankyrin-proteins have not been reported, and only a computational method for identifying the repeat number of ankyrin-proteins is available [22]. Since different types of protein-protein complexes behave differently, the ankyrin-protein complex requires an individual strategy. Therefore, we aimed to search for explicit criteria to obtain a near-native pose using a set of features generated from one program, avoiding complicated methods that combine scores from several software programs. In this study, we made a systematic attempt to develop a computational approach, named AnkPlex, for achieving near-native predictors in the 10 top-rankings of ankyrin-protein docking poses. Moreover, this method was developed for (i) analysing and characterizing ankyrin-protein complexes by using a set of informative features that have potential applications and (ii) establishing a user-friendly web server that provides the desired results without the need to follow the complicated mathematical equations generated by the research scientist. The docking poses of seven X-ray complexes of APKs, whose ankyrins have 3 internal domains, were generated using the reliable docking tool ZDOCK. Docking poses constructed with PSC alone and with the summation PSC + DE + ELEC yielded different numbers of near-native docking poses. 
The steps for AnkPlex establishment included (i) balancing the near-native and non-near-native poses; (ii) processing the dataset through machine learning with a decision tree algorithm (DT) and a logistic regression (LG) over combinations of 11 features; (iii) selecting the efficient predictive models of DT and LG; and (iv) processing the dataset with combined DT and LG models. X-ray crystal structures of ankyrin-protein complexes (APKs) were collected from the Protein Data Bank (PDB) database for 41 APKs reported up to May 2014. Analyses of the 41 APKs were performed through data pre-processing using the following steps: (i) APKs containing 3 internal domains were included; (ii) redundant APKs were excluded; (iii) APKs were filtered based on the recognition areas [7]; and (iv) alpha, beta, and alpha–beta proteins were selected using the SCOP database [23]. Nine X-ray crystal structures of APKs (called Ank9) were obtained, as summarized in Additional file 1: Table S1. Subsequently, seven of the APKs were randomly selected as training complexes (Ank-TRN), including complex 1 (C1), complex 2 (C2), complex 3 (C3), complex 4 (C4), complex 5 (C5), complex 6 (C6), and complex 7 (C7). The remaining APKs, unknown 1 (U1) and unknown 2 (U2), were designated the test group (Ank-TEST). In order to avoid distinct results arising from a particular split into training and test sets, the other 35 possible datasets were also constructed and used to generate predictive models for the identification of near-native poses. The docking poses of Ank-TRN and Ank-TEST were regenerated using the protein docking software ZDOCK [13, 14]. Two versions of the docking poses were generated, differing in the energy calculation: PSC alone versus the combination of PSC, DE, and ELEC (PSC + DE + ELEC). 
Then, all the generated docking poses were superimposed on the original X-ray crystal structures, and the root-mean-square deviation of the Cα atoms (Cα-RMSD) was calculated. Docking poses with Cα-RMSD values ≤10 Å were designated near-native poses or positive samples, whereas docking poses with Cα-RMSD values >10 Å were defined as non-near-native poses or negative samples [24]. The numbers of near-native poses for the two versions of the docking poses were compared. In addition to screening near-native poses by the Cα-RMSD value, eight binding residues of the APKs on the second domain of ankyrin (Fig. 1) were used for filtering near-native poses based on the recognition areas (regKp).

Fig. 1: The molecular architecture of ankyrin and its three recognition areas, shown in ribbon style. (a) Amino acid sequence of an internal repeat [7] in which the recognition residues are shown in three colours. (b) The ribbon style of an internal repeat of ankyrin related to the above sequence. (c) The structure of the 3 internal domains of ankyrin flanked by the N-cap and C-cap. The recognition area consists of six variable residues [7] (red and blue are positioned on the helix and turn, respectively) and two constant amino acids (green) on the second domain.

Based on observations of the generation of the features, ankyrin-protein docking poses were evaluated for the energy features using the ZDOCK protocol [13, 14] (a set of 5 features) and the RDOCK protocol [15] (a set of 6 features). Five features, including ZDock, ZRankElec, ZRankSolv, ZRank, and ZRankVdw, were obtained from the protein-docking protocol (ZDOCK) using the CHARMm force field [25]. At the same time, six features, including E_vdw1, E_elec1, E_vdw2, E_elec2, E_sol, and E_RDock, were calculated from the docking refinement protocol (RDOCK) using the CHARMm polar H force field [25]. The energy equations used in RDOCK were the same as in ZDOCK. 
However, the ankyrin-protein docking poses were minimized before calculation. The details of the 11 features (A, B, C, D, E, F, G, H, I, J, K) are described below:

ZDock (A) is the Pairwise Shape Complementarity (PSC) score, optionally augmented with the electrostatics (ELEC) and the desolvation energy (DE). In this study, the ZDock score was calculated using the following equation: $$ \mathrm{ZDock\ score} = \alpha \, PSC + DE + \beta \, ELEC $$ where α and β have the default values of 0.01 and 0.06, respectively.

ZRankElec (B) is the long-range electrostatic energy, using only the fully charged side chains, as represented in the following equation: $$ ZRankElec\left( i, j\right) = 332 \, \frac{q_i q_j}{r_{ij}^2} $$ where $q_i$ and $q_j$ are the charges on the ankyrin and protein atoms, respectively, and $r_{ij}$ is the distance between the atoms of ankyrin and the protein.

ZRankSolv (C) is the desolvation term based on the Atomic Contact Energy (ACE): $$ ZRankSolv\left( i, j\right) = a_{ij} $$ where $a_{ij}$ is the ACE score.

ZRank (D) is a linear combination of ZRankElec, ZRankSolv, and ZRankVdw: $$ \mathrm{ZRank\ score} = ZRankElec + ZRankSolv + ZRankVdw $$

ZRankVdw (E) is the van der Waals and short-range electrostatics energy for atom pairs separated by less than 5.0 Å. This calculation was based on the parameters of the CHARMm 19 polar hydrogen potential. The ZRankVdw score was calculated as follows: $$ ZRankVdw\left( i, j\right) = \varepsilon_{ij}\left[\left(\frac{\sigma_{ij}}{r_{ij}}\right)^{12} - 2\left(\frac{\sigma_{ij}}{r_{ij}}\right)^{6}\right] $$ where $\varepsilon_{ij}$ and $\sigma_{ij}$ are the depth and width coefficients, respectively, of the CHARMm 19 polar H potential.

E_vdw1 (F) and E_vdw2 (H) are the van der Waals energy, as presented in Equation (5), of the 1st and the 2nd minimized structures of the ankyrin-protein docking poses, respectively. 
E_elec1 (G) and E_elec2 (I) are the electrostatic energy, as presented in Equation (2), of the ankyrin-protein docking poses after the 1st and the 2nd minimization, respectively.

E_sol (J) is the desolvation energy, as shown in Equation (4), of the 2nd minimization of the ankyrin-protein docking poses.

E_RDock (K) is the summation of E_sol and (0.9 × E_elec2).

Construction of learning method

Several learning models were constructed, including decision tree (DT), logistic regression (LG), artificial neural network (ANN), and support vector machine (SVM), using Ank-TRN (C1-C7). As shown in Additional file 1: Table S5, SVM yielded 100% near-native poses in the internal testing sets but could not obtain any near-native pose in the external testing sets. The DT and ANN provided near-native poses from both internal and external testing sets. On the dataset of C5, the DT was superior to the ANN in achieving near-native poses in the internal testing sets. The LG provided a weighted summation that could rank the docking poses to achieve the near-native poses in the 10 top-rankings. As a consequence, the DT and the LG were selected to construct an ensemble model. To identify the near-native docking poses of APKs, a learning method named AnkPlex was established by combining a decision tree (DT) and a logistic regression (LG). The decision tree and the logistic regression methods were selected because they provide a high number of predicted positive values (true near-native poses). The logistic regression especially provided a weighted summation that was finally ranked to search for near-native poses in the 10 top-rankings. All 11 features and all datasets were used to build the DT and LG models. The Ank-TRN (7 APKs) and the Ank-TEST (2 APKs) were evaluated by AnkPlex using the following steps, as shown in Fig. 2 (the flowchart of the proposed AnkPlex system):

1. The number of near-native poses and non-near-native poses was balanced. 
ZDOCK using PSC + DE + ELEC and regKp provided 669 near-native poses and 44,334 non-near-native poses of Ank-TRN. The non-near-native poses were randomly clustered into 65 groups (≈44,334/669). Therefore, each training set was composed of the same near-native poses and a different group of non-near-native poses.

2. A predictive model using the DT and the LG models was established. All 11 features were combined and generated as feature subsets (i.e., A, B, C, D, E, F, G, H, I, J, K, AB, AC, AD, AE, AF, AG, AH, AI, AJ, AK, BC, BD, ..., ABCDEFG). The total number of feature subsets was calculated to be 4,095 by following this equation: $$ L={\displaystyle \sum_{r=1}^{11}\frac{11!}{r!\left(11- r\right)!}}. $$ The DT model was established from the 4,095 feature subsets and 65 training sets using the J48 algorithm [26, 27]. The parameters of the DT model were set with the confidence factor, the minimum number of objects, and the number of folds for reduced error pruning equal to 0.25, 2, and 3, respectively. Additionally, the LG model was constructed from the same feature subsets and training sets with a ridge estimator [28], in which the maximum number of iterations and the ridge were set to −1 and 1.0E-8, respectively. Subsequently, the learning methods were generated by implementing the DT and the LG models in the WEKA program [27].

3. An efficient predictive model of the DT and the LG models was selected. Ank-TRN, consisting of 7 APKs, was submitted to the learning method for predicting the near-native poses. A true positive rate (TPrate) greater than 50% was used as the cut-off value for an efficient learning method. The learning methods that demonstrated a TPrate greater than 50% were selected to further establish an ensemble learning model.

4. Ensemble methods were established. 
The ensemble learning method, named AnkPlex, was constructed by randomly integrating the DT-based learning models (OLMDT) and the LG-based learning models (OLMLG) from Step 3 to reduce the number of non-near-native docking poses. The main process of the proposed method, AnkPlex, for increasing the number of TPs (reducing non-near-native poses) consisted of the following steps: (i) only predicted positive samples (PPVDT) derived from OLMDT were selected, (ii) a logistic score (LGS) was calculated on PPVDT using the LG model, and (iii) PPVDT was ranked according to LGS and the 10 top-ranking poses with the highest LGS were selected. The near-native pose(s), or true positives (TP), in the 10 top-ranking poses were our targets. For each complex C_i, where i = 1, 2, ..., 7, Y_i was set to 1 if a TP was found in the 10 top-ranking poses and to 0 otherwise. Finally, the score of AnkPlex was the summation defined in the following equation: $$ \# PP={\displaystyle \sum_{i=1}^7{Y}_i} $$ where #PP is computed over Ank-TRN containing seven complexes (C_1, C_2, ..., C_7). The number #PPTRN indicated for how many complexes of Ank-TRN the LGS score placed a near-native pose among the 10 top-rankings, and #PPTEST indicated the same for Ank-TEST. The prediction performance of the AnkPlex method was evaluated using 10-fold cross-validation (10-fold CV). 
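The two-stage re-scoring in steps (i)–(iii) above can be sketched in a few lines of code. This is a minimal illustration only: `toy_dt` and `toy_lg` below are hypothetical stand-ins for the trained WEKA J48 and logistic regression models, not the published implementation.

```python
import math

def ankplex_rescore(poses, dt_predict, lg_score, top_n=10):
    """Keep only DT-positive poses (PPV_DT), rank them by logistic
    score (LGS), and return the top_n poses with the highest LGS."""
    ppv_dt = [pose for pose in poses if dt_predict(pose["features"])]
    ppv_dt.sort(key=lambda pose: lg_score(pose["features"]), reverse=True)
    return ppv_dt[:top_n]

# Hypothetical stand-ins: a "decision tree" that accepts poses with
# negative E_RDock, and a logistic score over two features with made-up
# weights (illustrative values, not fitted coefficients).
def toy_dt(f):
    return f["E_RDock"] < 0.0

def toy_lg(f):
    z = -0.1 * f["ZRank"] - 0.2 * f["E_RDock"]
    return 1.0 / (1.0 + math.exp(-z))   # logistic function

poses = [
    {"id": 1, "features": {"ZRank": -54.0, "E_RDock": -12.0}},
    {"id": 2, "features": {"ZRank": -21.8, "E_RDock": 1.7}},   # rejected by DT
    {"id": 3, "features": {"ZRank": -30.0, "E_RDock": -5.0}},
]

top = ankplex_rescore(poses, toy_dt, toy_lg, top_n=10)
print([p["id"] for p in top])  # → [1, 3]: pose 2 is filtered out, pose 1 ranks first
```

The key design point is that the DT acts only as a binary filter, while the ranking that decides the final 10 top poses comes entirely from the logistic score.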
The method validation parameters, including accuracy (ACC), sensitivity (SEN), and precision (PRES), were calculated using the following equations: $$ \mathrm{Accuracy} = \frac{TP+ TN}{TP+ TN+ FP+ FN}\times 100 $$ $$ \mathrm{Sensitivity} = \frac{TP}{TP+ FN}\times 100 $$ $$ \mathrm{Precision} = \frac{TP}{TP+ FP}\times 100 $$ where TP, TN, FP, and FN are the numbers of true positive, true negative, false positive, and false negative results, respectively.

Analysis of ankyrin-protein docking dataset

There were few ankyrin-protein complexes (APKs) reported in the PDB database. Forty-one ankyrin complexes, with the number of internal domains ranging from 2–7, had been reported (up to May, 2014). The largest group, 19 complexes, of ankyrin-proteins contained 3 internal domains. Furthermore, the APKs that reacted with the target using recognition areas were selected. Focusing on target proteins, only proteins with common folding structures, i.e., alpha, beta and alpha-beta structures, were considered. Therefore, nine complexes, which included 1SVX, 4ATZ, 3Q9N, 1AWC, 2BKK, 2Y1L, 4DRX, 2P2C, and 4HNA, were used in this study. These nine complexes were randomly divided into two groups, i.e., 7 complexes as Ank-TRN and 2 complexes as Ank-TEST. To optimize the ZDOCK calculation, X-ray crystal structures of Ank-TRN, including seven APKs, were calculated with different feature calculations: PSC alone and PSC + DE + ELEC. The total number of docking poses, including near-native and non-near-native poses, was 54,000 (54Kp). Subsequently, the numbers of near-native poses calculated by PSC and by PSC + DE + ELEC were compared. As shown in Table 1, the average number of near-native poses calculated by PSC + DE + ELEC (116.57 ± 51.05) was twice as high as the number calculated using PSC (63.29 ± 41.43). 
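The validation metrics ACC, SEN, and PRES defined above translate directly into code from the confusion counts. The counts used in the example below are illustrative only, not values from the study.

```python
def accuracy(tp, tn, fp, fn):
    # (TP + TN) / (TP + TN + FP + FN) * 100
    return 100.0 * (tp + tn) / (tp + tn + fp + fn)

def sensitivity(tp, fn):
    # TP / (TP + FN) * 100
    return 100.0 * tp / (tp + fn)

def precision(tp, fp):
    # TP / (TP + FP) * 100
    return 100.0 * tp / (tp + fp)

# Illustrative confusion counts (not from the paper):
tp, tn, fp, fn = 8, 80, 2, 10
print(accuracy(tp, tn, fp, fn))          # 88.0
print(round(sensitivity(tp, fn), 2))     # 44.44
print(precision(tp, fp))                 # 80.0
```

Note that a low precision with a reasonable sensitivity is exactly the situation reported later for the single DT and LG models: many false positives survive the filter, motivating the ranked top-10 selection.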
To increase the predictive accuracy, binding sites on the second domain of Ank-TRN, defined by Binz et al. [7], were used for filtering near-native poses based on the recognition areas (regKp). The number of near-native poses under regKp calculated by PSC + DE + ELEC was only slightly reduced (95.57 ± 52.58). Following the ZDOCK program suggestion, near-native poses were identified in the top 2,000 poses (2Kp) ranked by the ZRank feature [13, 14]. The 2Kp were selected from the total docking poses of regKp and compared to the near-native poses from regKp. The number of near-native poses in 2Kp (47.14 ± 31.49) decreased roughly two-fold compared to regKp. Thus, it can be concluded that 2Kp ranked by the ZRank feature was not suitable for screening near-native poses because it excluded some near-native poses. Interestingly, screening by regKp retained a high number of near-native poses while drastically reducing the number of non-near-native poses. The results suggested that the ZDOCK calculation using PSC + DE + ELEC together with screening based on the recognition areas (regKp) was the optimal procedure because it retained near-native poses while eliminating non-near-native poses. However, the number of non-near-native poses remaining after regKp was still high, which indicated that an additional learning method was necessary for ruling out non-near-native poses.

Table 1: Number of docking poses classified as near-native and non-near-native in Ank-TRN (C1-C7) and Ank-TEST (U1 and U2)

Establishing learning methods

According to the ZDOCK calculations on Ank-TRN, 11 features of near-native poses and non-near-native poses were generated. Univariate statistical approaches were employed to perform exploratory data analysis, using averages and standard deviations to summarize important patterns. 
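A per-class univariate summary of this kind (mean ± standard deviation of each feature, separately for near-native and non-near-native poses) can be computed with a short routine using only the standard library. The rows below are illustrative toy values, not the paper's data.

```python
from statistics import mean, stdev

def summarize_by_class(rows, feature):
    """Return {class label: (mean, std)} of one feature, split by pose class."""
    out = {}
    for label in ("near-native", "non-near-native"):
        vals = [r[feature] for r in rows if r["label"] == label]
        out[label] = (mean(vals), stdev(vals))
    return out

# Illustrative toy rows (E_RDock tends to be lower for near-native poses):
rows = [
    {"label": "near-native", "E_RDock": -12.0},
    {"label": "near-native", "E_RDock": -10.0},
    {"label": "non-near-native", "E_RDock": 1.0},
    {"label": "non-near-native", "E_RDock": 3.0},
]
print(summarize_by_class(rows, "E_RDock"))
```

Comparing the two class summaries per feature is what Table 2 reports, with a significance test on top of the raw mean/std contrast.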
As shown in Table 2, five features generated by the ZDOCK protocol demonstrated significant differences between the near-native poses and the non-near-native poses with a p-value <0.001. As presented in Table 2, the five top-ranked features were E_RDock (−11.96 ± 9.40/1.72 ± 12.10), ZRankElec (8.22 ± 16.56/29.17 ± 22.41), ZRank (−54.03 ± 25.84/−21.82 ± 31.33), E_elec2 (−18.21 ± 8.85/−8.18 ± 10.25), and E_sol (4.43 ± 6.26/9.07 ± 8.87). Almost all the features calculated using the RDOCK protocol were significantly different, except E_elec1 (p = 0.178) and E_vdw2 (p = 0.010). Subsequently, the 11 features from the ZDOCK calculation were applied to establish the learning methods.

Table 2: Summary of statistical analysis of near-native and non-near-native poses of ankyrin-target complexes

Eleven features of each of the near-native poses (669) and non-near-native poses (44,334) calculated from Ank-TRN based on the recognition areas (regKp) were used to establish the learning methods. Given the unbalanced number of docking poses, training sets were generated by clustering the non-near-native poses into 65 sets (44,334/669), each paired with the full set of near-native poses. Eleven features were calculated from each training set and combined to generate 4,095 feature sets. The DT-based learning models (OLMDT) and the LG-based learning models (OLMLG) were established using the 4,095 feature sets. The learning methods with an average true positive rate greater than 50% (TPrate ≥ 50%) comprised 4,762 OLMDT and 2,688 OLMLG. The learning models with TPrate ≥ 50% and the 10 top-ranking values of %ACC are shown in Table 3 (10 top-rankings of OLMDT) and Table 4 (10 top-rankings of OLMLG). As a result, ABDEHIJK_g14 of OLMDT exhibited the highest %ACC with %TPrate ≥ 50%. 
This learning method consisted of a combined feature set that included ZDock (A), ZRankElec (B), ZRankSolv (D), ZRankVdw (E), E_vdw2 (H), E_elec2 (I), E_sol (J), and E_RDock (K), calculated from non-near-native dataset number 13. In addition, CDFGJ_g10 of OLMLG demonstrated the highest %ACC with %TPrate ≥ 50% among the LG-based models. The percentage of precision (%PRES) for all 10 top-ranked OLMDT and OLMLG was low, which indicated a high number of false positive results (FP). To diminish the number of FP, only the 10 top-ranking poses based on the ZRank score were selected to represent the true positive poses (TP). If a TP was found among the 10 top-ranking poses of an Ank-TRN complex, its #PP was set to 1; summed over the seven Ank-TRN complexes, #PPTRN therefore ranged from 0 to 7. As shown in Tables 3 and 4, the maximum #PPTRN of OLMDT and of OLMLG was only 6. Therefore, no individual OLMDT or OLMLG learning method was capable of reaching the maximum #PPTRN.
Table 3 Comparison of performances of 10 top-ranking OLMDT among various types of features and datasets in terms of 10-fold cross-validation
Table 4 Comparison of performances of 10 top-ranking OLMLG among various types of features and datasets in terms of 10-fold cross-validation
Ensemble learning method to generate AnkPlex
To enhance the prediction efficacy of the generated learning methods, the 4,762 DT-based learning models (OLMDT) and 2,688 LG-based learning models (OLMLG) were randomly combined to generate ensemble models. Interestingly, the ensemble model combining ABEHIJ_g56 of OLMDT and CDFGHJ_g30 of OLMLG demonstrated superior prediction efficiency: this ensemble model (ABEHIJ_g56-CDFGHJ_g30) achieved the maximum values for #PPTRN and #PPTEST of 7 and 2, respectively.
Therefore, the ensemble model ABEHIJ_g56-CDFGHJ_g30 was designated as the ensemble computational model for predicting the near-native docking pose of APKs, or "AnkPlex" (Fig. 3). To compare the prediction efficiency of the ensemble model AnkPlex with that of the single learning models, the total number of TP and the first TP of each Ank-TRN were used for the evaluation. As shown in Table 5, the single learning models OLMDT (ABEHIJ_g56) and OLMLG (CDFGHJ_g30) each provided a #PPTRN value of 6. The first TP of C5 predicted by ABEHIJ_g56 and of C6 predicted by CDFGHJ_g30 were found at pose numbers 14 and 19, respectively. This result indicated that a single learning model could not recover all the true positive poses. In the case of the Ank-TEST, OLMDT could not provide a #PPTEST value, whereas the #PPTEST of OLMLG was comparable to that of AnkPlex. Consequently, the ensemble model AnkPlex achieved a #PPTRN of 7 and a #PPTEST of 2, which suggested that the prediction efficacy of AnkPlex was superior to that of the single learning models. In addition, the predictive models generated from the other 35 possible datasets demonstrated average #PPTRN and #PPTEST values of 6.78 ± 0.42 and 2.00 ± 0.00, respectively. This indicated that different selections of training and test sets had no effect on the generation of the learning models for predicting the near-native poses.
Characteristics of the optimal AnkPlex
Table 5 Comparison of performances of AnkPlex with single learning method and ZDOCK program
According to the ZDOCK program recommendations, near-native docking poses could be found in 2Kp, as indicated by a high ZDock score, low E_RDock, or low ZRank [15, 17–20]. In particular, the ZRank score provided a #PPTRN value of 6, which was higher than the values for the other features (Table 6). Thus, 2Kp ranked by the ZRank score was selected to identify #PP.
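The #PP bookkeeping used above for comparing models can be sketched as follows. The pose records and field names (`zrank`, `is_tp`) are illustrative assumptions, not the paper's data structures.

```python
# Sketch of the #PP bookkeeping: a complex contributes 1 to #PP when a true
# positive appears among its 10 top-ranking poses by ZRank (lower is better).
# Field names ("zrank", "is_tp") are assumptions for illustration.
def pp_for_complex(poses, top_n=10):
    """1 if any true-positive pose ranks in the top_n by ZRank, else 0."""
    ranked = sorted(poses, key=lambda p: p["zrank"])
    return int(any(p["is_tp"] for p in ranked[:top_n]))

def pp_total(complexes):
    """e.g. #PPTRN summed over the seven Ank-TRN complexes (range 0-7)."""
    return sum(pp_for_complex(poses) for poses in complexes)
```

Under this scheme #PPTRN = 7 means every training complex had at least one true-positive pose inside its top 10, which is the criterion the ensemble model satisfies but the single models do not.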
As shown in Table 5, the #PPTRN and the #PPTEST of 2Kp could not reach the maximum values. In addition, the first TP of 2Kp appeared at an earlier pose number than with AnkPlex. These results indicated that ZRank could identify the most accurate near-native poses; nevertheless, it did not succeed in all cases, and the combined-feature model AnkPlex compensates for this limitation.
Table 6 Number of near-native poses in 10 top-ranking poses obtained from ZDOCK program with 2Kp
To apply AnkPlex to the investigation of an ankyrin-protein complex, AnkGAG1D4 was used as a test case for this learning model. AnkGAG1D4 is an artificial ankyrin that contains three internal domains and was designed as an antiretroviral agent; it binds to the N-terminal domain of the capsid protein (CANTD) of HIV-1 [3]. The X-ray structure of AnkGAG1D4 has recently been solved, but the structure of the AnkGAG1D4-CANTD complex has not been determined [4]. Thus, we generated docking poses of AnkGAG1D4-CANTD and performed re-scoring with AnkPlex. The results revealed three near-native structures of AnkGAG1D4-CANTD among the 10 top-rankings. The recognition residues of the AnkGAG1D4-CANTD interaction were further investigated by observing interacting distances ≤5 Å. One docking pose showed that residue R18 was located on the recognition area of CANTD, and two docking poses demonstrated that residues R132 and R143 played key roles in the interaction with AnkGAG1D4 (data not shown). This result correlated with previous ELISA results: point mutations R18A on helix 1 and R132A and R143A on helix 7 of CANTD abolished binding to AnkGAG1D4, establishing R18, R132 and R143 as the key residues of CANTD binding to AnkGAG1D4 [4].
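The ≤5 Å criterion used above to identify recognition residues can be sketched as an all-atom distance check between two residues; the coordinate layout is an assumption for illustration.

```python
# Sketch of the <=5 A recognition-residue criterion: two residues are in
# contact if any atom pair lies within the cutoff. Coordinates are toy
# (x, y, z) tuples, not taken from any real structure.
def in_contact(res_a_atoms, res_b_atoms, cutoff=5.0):
    """True if any atom of residue A is within `cutoff` of any atom of B."""
    c2 = cutoff * cutoff
    for xa, ya, za in res_a_atoms:
        for xb, yb, zb in res_b_atoms:
            if (xa - xb) ** 2 + (ya - yb) ** 2 + (za - zb) ** 2 <= c2:
                return True
    return False

print(in_contact([(0.0, 0.0, 0.0)], [(3.0, 4.0, 0.0)]))  # True (distance 5.0)
```

Comparing squared distances avoids a square root per atom pair, which matters when every interface residue pair of every pose is checked.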
According to the computational analysis of AnkGAG1D4 using this learning model, AnkPlex could not only discriminate the near-native docking poses of the AnkGAG1D4-CANTD complex but also identify the correct orientation of the recognition area on CANTD.
Feature importance analysis
Identification of informative features among the 11 features was critical for designing a powerful learning model and for gaining insight into ankyrin-protein docking poses. Based on the six features (CDFGHJ) used in the calculation of the LGS in AnkPlex, Pearson correlation coefficients (R values) were used to quantify the correlation between the LGS and the weighted contributions of the six features for the near-native poses. As shown in Fig. 4 and Additional file 1: Table S3, the three top-ranked R values among the six features were ZRank (R = 0.60), ZRankSolv (R = −0.56), and E_sol (R = 0.54), which indicated that these three features played an important role in the AnkPlex model for distinguishing near-native poses.
Fig. 4 The correlation coefficients between the dot products of the features and their weights of the best-ranking LGS for the near-native poses in the nine ankyrin-protein complexes
The ZRank score was ranked as the first informative feature according to its highest R value (0.60). The characteristics of the ZRank score differed significantly between the near-native and the non-near-native poses, with p < 0.001, as shown in Table 2. To confirm the important role of the ZRank score in AnkPlex, the ensemble learning method based on AnkPlex was reconstructed without ZRank (C). As demonstrated in Additional file 1: Table S2, the AnkPlex lacking ZRank (OLMDT(ABEHIJ_g56)–OLMLG(DFGHJ_g30)) obtained only #PPTRN = 1 and #PPTEST = 0, whereas the full AnkPlex (OLMDT(ABEHIJ_g56)–OLMLG(CDFGHJ_g30)) achieved #PPTRN = 7 and #PPTEST = 2.
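The correlation step above, relating each feature's weighted contribution (weight × value) to the overall logistic score, uses the standard Pearson formula. A minimal self-contained sketch with made-up numbers, not the paper's values:

```python
# Sketch of the feature-importance step: correlate each feature's weighted
# contribution (weight * value) with the overall logistic score (LGS).
# The example numbers are invented for illustration only.
def pearson_r(xs, ys):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

lgs = [0.9, 0.7, 0.8, 0.4]             # logistic scores of example poses
zrank_contrib = [1.8, 1.3, 1.6, 0.7]   # weight * ZRank value per pose
print(round(pearson_r(zrank_contrib, lgs), 3))
```

A feature whose weighted contribution tracks the LGS closely (|R| near 1) is the one driving the score, which is how ZRank, ZRankSolv, and E_sol were singled out.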
Therefore, ZRank was concluded to be an important feature of AnkPlex because it enhanced the predictive performance for near-native poses. Since ZRank is a linear combination of van der Waals (ZRankVdw), electrostatics (ZRankElec), and desolvation (ZRankSolv) energies, we sought to identify which of these terms was the most important. From Additional file 1: Table S3, it is evident that ZRankVdw (the van der Waals interaction) was more dominant than ZRankElec and ZRankSolv. Recently, ZRank was further developed by correcting the weights of the energies and incorporating a pairwise interface potential, in which the weight of the van der Waals term was higher than in the original ZRank [29]. This supports the view that van der Waals interactions are an important property of near-native docking poses of ankyrin-protein pairings. ZRankSolv and E_sol are desolvation energies estimated by summation of the Atomic Contact Energy (ACE); the two features differ in their force-field calculation and side-chain orientation. ZRankSolv and E_sol were the second and third informative features, with R values of −0.56 and 0.54, respectively. Moreover, these two features showed significant differences between near-native and non-near-native poses, with p < 0.001, as shown in Table 2. Our experimental results (see Additional file 1: Table S2) demonstrated that AnkPlex lacking ZRankSolv (D), i.e., OLMDT(ABEHIJ_g56)–OLMLG(CFGHJ_g30), provided #PPTRN = 6 and #PPTEST = 1. Similarly, AnkPlex lacking E_sol, i.e., OLMDT(ABEHIJ_g56)–OLMLG(CDFGH_g30), yielded #PPTRN = 6 and #PPTEST = 1. This result indicated that the absence of ZRankSolv or E_sol slightly reduced the predictive performance of AnkPlex; both features were therefore required for predicting near-native poses. ZRankSolv is itself a component of ZRank. This outcome emphasized that desolvation is important for obtaining near-native poses in the ankyrin-protein interaction.
Additionally, it was also required for accuracy in other protein–protein complexes [30–32]. The LGS was a combination of the energies determined over the interaction area between ankyrin and its target protein at ≤5 Å. In AnkPlex, the interaction area on ankyrin was located on variable and conserved residues of the L-shaped repeat belonging to the internal repeats, the N-terminal repeat and the C-terminal repeat [7]. The functional variable residues on ankyrin were required for the recognition of the target protein through the available solvent-accessible surface [7, 33, 34]. To observe the variable area used for calculating the energy, the first TP near-native pose of each of the nine ankyrin-protein complexes (Ank9) was analyzed to count the variable and conserved residues in the interaction area. The result, presented in Additional file 1: Table S4, showed that 43.75 ± 12.90% of the interaction area on ankyrin belonged to variable residues, and 58.50 ± 11.36% of this area comprised hydrophobic residues. This result indicated that the interaction energy was calculated over both the variable and the conserved residues. Therefore, computing the energy term at the interface of the variable residues could provide a score to distinguish between the near-native and the non-near-native docking poses. As a consequence, a score evaluated over the desired area could be incorporated into the docking algorithm. According to the hydrophobicity of the interface in AnkPlex (see Additional file 1: Table S4), the interactions between ankyrin and proteins comprised 18.05 ± 6.32% hydrophobic–hydrophobic, 43.25 ± 13.70% hydrophobic–hydrophilic, and 38.70 ± 4.48% hydrophilic–hydrophilic interactions. Moreover, the percentage of hydrophobic–hydrophobic interactions in the non-near-native poses was observed to be reduced by 12.69 ± 7.31%, as shown in Additional file 1: Table S4.
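The interface-pair tally by hydrophobicity described above can be sketched as below. The hydrophobic residue set used here is a common convention, not taken from the paper.

```python
# Sketch of the interface-pair tally: classify each contacting residue pair as
# hydrophobic-hydrophobic (HH), hydrophobic-hydrophilic (HP), or
# hydrophilic-hydrophilic (PP), and report percentages. The hydrophobic set
# is a common convention, an assumption rather than the paper's definition.
HYDROPHOBIC = {"ALA", "VAL", "LEU", "ILE", "MET", "PHE", "TRP", "PRO", "GLY"}

def classify_pairs(contact_pairs):
    """Percentage of HH / HP / PP pairs among interface contacts."""
    counts = {"HH": 0, "HP": 0, "PP": 0}
    for a, b in contact_pairs:
        n_hydrophobic = (a in HYDROPHOBIC) + (b in HYDROPHOBIC)
        counts[{2: "HH", 1: "HP", 0: "PP"}[n_hydrophobic]] += 1
    total = sum(counts.values())
    return {k: 100.0 * v / total for k, v in counts.items()}

pairs = [("LEU", "VAL"), ("LEU", "ASP"), ("ASP", "GLU"), ("PHE", "LYS")]
print(classify_pairs(pairs))  # {'HH': 25.0, 'HP': 50.0, 'PP': 25.0}
```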
However, the percentage of hydrophobic–hydrophilic interactions in the near-native poses increased to 50.32 ± 8.89%. This outcome indicated that the recognition site on ankyrin for the target protein accommodates both hydrophobic and hydrophilic interactions, which promotes the solvent-accessible property [35]. Because the LGS is modified from an atom-based potential without considering the hydrophobicity scale, a high LGS for a non-near-native docking pose could arise from hydrophobic–hydrophilic interactions instead of hydrophobic–hydrophobic interactions.
Conclusions
An ensemble method, named AnkPlex, was constructed for fast prediction of near-native states of ankyrin-protein complexes. The AnkPlex model was constructed from a combination of features generated by the ZDOCK program without manual inspection. AnkPlex successfully recovered the near-native poses of nine ankyrin-protein complexes within the 10 top-ranking poses. ZRank, which is a combination of electrostatic, desolvation, and van der Waals energies, was the most important feature in AnkPlex, and van der Waals was the dominant term for obtaining near-native docking poses. To make the method for predicting near-native poses of protein complexes easily accessible to the scientific community, we have implemented the best models on a web server. AnkPlex (http://ankplex.ams.cmu.ac.th) is freely available online.
Abbreviations
ACE: Atomic Contact Energy; APKs: ankyrin-protein complexes; Cα-RMSD: carbon alpha root-mean-square deviation; DARPins: Designed Ankyrin Repeat Proteins; DE: desolvation; ELEC: electrostatics; LG: logistic model; LGS: logistic score; PPV: predicted positive value; PSC: shape complementarity
References
Binyamin L, Borghaei H, Weiner LM. Cancer therapy with engineered monoclonal antibodies. Update Cancer Ther. 2006;1(2):147–57.
Trikha M, Yan L, Nakada MT. Monoclonal antibodies as therapeutics in oncology. Curr Opin Biotechnol. 2002;13(6):609–14.
Nangola S, Urvoas A, Valerio-Lepiniec M, Khamaikawin W, Sakkhachornphop S, Hong SS, Boulanger P, Minard P, Tayapiwatana C. Antiviral activity of recombinant ankyrin targeted to the capsid domain of HIV-1 Gag polyprotein. Retrovirology. 2012;9:17.
Praditwongwan W, Chuankhayan P, Saoin S, Wisitponchai T, Lee VS, Nangola S, Hong SS, Minard P, Boulanger P, Chen CJ, et al. Crystal structure of an antiviral ankyrin targeting the HIV-1 capsid and molecular modeling of the ankyrin-capsid complex. J Comput Aided Mol Des. 2014;28(8):869–84.
Schweizer A, Rusert P, Berlinger L, Ruprecht CR, Mann A, Corthesy S, Turville SG, Aravantinou M, Fischer M, Robbiani M, et al. CD4-specific designed ankyrin repeat proteins are novel potent HIV entry inhibitors with unique characteristics. PLoS Pathog. 2008;4(7):e1000109.
Binz HK, Amstutz P, Kohl A, Stumpp MT, Briand C, Forrer P, Grutter MG, Pluckthun A. High-affinity binders selected from designed ankyrin repeat protein libraries. Nat Biotechnol. 2004;22(5):575–82.
Binz HK, Stumpp MT, Forrer P, Amstutz P, Pluckthun A. Designing repeat proteins: well-expressed, soluble and stable proteins from combinatorial libraries of consensus ankyrin repeat proteins. J Mol Biol. 2003;332(2):489–503.
Stumpp MT, Binz HK, Amstutz P. DARPins: a new generation of protein therapeutics. Drug Discov Today. 2008;13(15-16):695–701.
Kohl A, Binz HK, Forrer P, Stumpp MT, Pluckthun A, Grutter MG. Designed to be stable: crystal structure of a consensus ankyrin repeat protein. Proc Natl Acad Sci U S A. 2003;100(4):1700–5.
Epa VC, Dolezal O, Doughty L, Xiao X, Jost C, Plückthun A, Adams TE. Structural model for the interaction of a designed Ankyrin Repeat Protein with the human epidermal growth factor receptor 2. PLoS One. 2013;8(3):e59163.
Dobbins SE, Lesk VI, Sternberg MJE. Insights into protein flexibility: The relationship between normal modes and conformational change upon protein–protein docking. Proc Natl Acad Sci U S A. 2008;105(30):10390–5.
Tuncbag N, Kar G, Keskin O, Gursoy A, Nussinov R. A survey of available tools and web servers for analysis of protein-protein interactions and interfaces. Brief Bioinform. 2009;10(3):217–32.
Chen R, Li L, Weng Z. ZDOCK: an initial-stage protein-docking algorithm. Proteins. 2003;52(1):80–7.
Pierce B, Weng Z. ZRANK: reranking protein docking predictions with an optimized energy function. Proteins. 2007;67(4):1078–86.
Li L, Chen R, Weng Z. RDOCK: refinement of rigid-body protein docking predictions. Proteins. 2003;53(3):693–707.
Janin J, Henrick K, Moult J, Eyck LT, Sternberg MJ, Vajda S, Vakser I, Wodak SJ. CAPRI: a Critical Assessment of PRedicted Interactions. Proteins. 2003;52(1):2–9.
Hwang H, Vreven T, Pierce BG, Hung JH, Weng Z. Performance of ZDOCK and ZRANK in CAPRI rounds 13-19. Proteins. 2010;78(15):3104–10.
Vreven T, Pierce BG, Hwang H, Weng Z. Performance of ZDOCK in CAPRI rounds 20-26. Proteins. 2013;81(12):2175–82.
Wiehe K, Pierce B, Mintseris J, Tong WW, Anderson R, Chen R, Weng Z. ZDOCK and RDOCK performance in CAPRI rounds 3, 4, and 5. Proteins. 2005;60(2):207–13.
Wiehe K, Pierce B, Tong WW, Hwang H, Mintseris J, Weng Z. The performance of ZDOCK and ZRANK in rounds 6-11 of CAPRI. Proteins. 2007;69(4):719–25.
Li CH, Ma XH, Shen LZ, Chang S, Zu Chen W, Wang CX. Complex-type-dependent scoring functions in protein–protein docking. Biophys Chem. 2007;129(1):1–10.
Chakrabarty B, Parekh N. Identifying tandem Ankyrin repeats in protein structures. BMC Bioinformatics. 2014;15(1):6599.
Gough J, Karplus K, Hughey R, Chothia C. Assignment of homology to genome sequences using a library of hidden Markov models that represent all proteins of known structure. J Mol Biol. 2001;313(4):903–19.
Mendez R, Leplae R, De Maria L, Wodak SJ. Assessment of blind predictions of protein–protein interactions: current status of docking methods. Proteins. 2003;52(1):51–67.
Momany FA, Rone R. Validation of the general purpose QUANTA® 3.2/CHARMm® force field. J Comput Chem.
1992;13(7):888–900.
Quinlan JR. C4.5: programs for machine learning. 1993.
Mark H, Eibe F, Geoffrey H, Bernhard P. The WEKA Data Mining Software: An Update. SIGKDD Explor. 2009;11(1):10–8.
Cessie LS, van Houwelingen JC. Ridge Estimators in Logistic Regression. Appl Statist. 1992;41(1):191–201.
Pierce B, Weng Z. A combination of rescoring and refinement significantly improves protein docking performance. Proteins. 2008;72(1):270–9.
Camacho CJ, Kimura S, DeLisi C, Vajda S. Kinetics of desolvation-mediated protein–protein binding. Biophys J. 2000;78(3):1094–105.
Camacho CJ, Weng Z, Vajda S, DeLisi C. Free energy landscapes of encounter complexes in protein-protein association. Biophys J. 1999;76(3):1166–78.
Comeau SR, Gatchell DW, Vajda S, Camacho CJ. ClusPro: an automated docking and discrimination method for the prediction of protein complexes. Bioinformatics. 2004;20(1):45–50.
Magliery TJ, Regan L. Sequence variation in ligand binding sites in proteins. BMC Bioinformatics. 2005;6(1):240.
Sedgwick SG, Smerdon SJ. The ankyrin repeat: a diversity of interactions on a common structural framework. Trends Biochem Sci. 1999;24(8):311–6.
Lins L, Thomas A, Brasseur R. Analysis of accessible surface of residues in proteins. Protein Sci. 2003;12(7):1406–17.
Acknowledgements
We would like to thank the editor and all anonymous reviewers for their valuable suggestions and constructive comments. We acknowledge the Centre of Research in Computational Sciences and Informatics for Biology, Bioindustry, Environment, Agriculture and Healthcare (CRYSTAL) at the University of Malaya Research Centre for supporting our ZDOCK calculations. We would also like to thank Chiang Mai University Press for their editorial suggestions and Springer Nature Author Services for improving the English language of this work.
Funding
This work was supported by the Cluster and Program Management Office (CPMO), the National Science and Technology Development Agency (NSTDA), the Thailand Research Fund (TRF), the National Research Council of Thailand (NRCT), the Health Systems Research Institute (HSRI), the National Research University project under Thailand's Office of the Commission on Higher Education (NRU), the Centre of Biomolecular Therapy and Diagnostics (CBTD), and the Mahidol University Talent Management Program.
Availability of data and materials
All datasets used in this study are provided in Additional file 2. The established learning model, AnkPlex, is available at: http://ankplex.ams.cmu.ac.th.
Authors' contributions
CT, KK and WS conceived the study. WS and TW participated in the design of the algorithms and experiments. TW, VSL and KK prepared the manuscript. All authors participated in manuscript preparation. All authors read and approved the final manuscript.
Author information
Division of Clinical Immunology, Department of Medical Technology, Faculty of Associated Medical Sciences, Chiang Mai University, Chiang Mai, 50200, Thailand
Tanchanok Wisitponchai & Chatchai Tayapiwatana
Center of Biomolecular Therapy and Diagnostic, Faculty of Associated Medical Sciences, Chiang Mai University, Chiang Mai, 50200, Thailand
Tanchanok Wisitponchai, Kuntida Kitidee & Chatchai Tayapiwatana
Center of Data Mining and Biomedical Informatics, Faculty of Medical Technology, Mahidol University, Bangkok, 10700, Thailand
Watshara Shoombuatong
Thailand Center of Excellence in Physics, Commission on Higher Education, Bangkok, 10400, Thailand
Vannajan Sanghiran Lee
Department of Chemistry, Faculty of Science, University of Malaya, Kuala Lumpur, 50603, Malaysia
Center for Research and Innovation, Faculty of Medical Technology, Mahidol University, Bangkok, 10700, Thailand
Kuntida Kitidee
Correspondence to Kuntida Kitidee or Chatchai Tayapiwatana.
Additional file 1: Table S1. Informative characteristics of nine ankyrin-protein complexes. Table S2. Comparison of performances of 10 top-ranking among ensemble learning models. Table S3. Feature values of the best-ranking LGS for the near-native poses in the nine ankyrin-protein complexes. Table S4. Types of interaction pairs and hydrophobic residues on the interface (≤5 Å) of the nine ankyrin-protein complexes. Table S5. The percentage of predictable true near-native poses of the internal and external testing sets on learning methods of decision tree (DT), logistic regression (LG), artificial neural network (ANN), and support vector machine (SVM). (DOC 125 kb)
Additional file 2: Supplementary datasets used in this study. (XLS 13646 kb)
Wisitponchai, T., Shoombuatong, W., Lee, V.S. et al. AnkPlex: algorithmic structure for refinement of near-native ankyrin-protein docking. BMC Bioinformatics 18, 220 (2017). https://doi.org/10.1186/s12859-017-1628-6
Keywords: Near-native docking pose; Machine learning methods; AnkPlex
The transient component is not visible in simultaneous observations at 1500 MHz using the Lovell telescope, implying a chromatic effect. A variation in the dispersion measure of the source is detected in the same timespan. Precession of the pulsar and changes in the magnetosphere are investigated to explain the profile evolution. However, the listed properties favour a model based on turbulence in the interstellar medium (ISM). This interpretation is confirmed by a strong correlation between the intensity of the transient component and main peak in single pulses. Since PSR B2217+47 is the fourth brightest pulsar visible to LOFAR, we speculate that ISM-induced pulse profile evolution might be relatively common but subtle and that SKA-Low will detect many similar examples. In this scenario, similar studies of pulse profile evolution could be used in parallel with scintillation arcs to characterize the properties of the ISM. U.S. Grower Views on Problematic Weeds and Changes in Weed Pressure in Glyphosate-Resistant Corn, Cotton, and Soybean Cropping Systems Greg R. Kruger, William G. Johnson, Stephen C. Weller, Micheal D. K. Owen, David R. Shaw, John W. Wilcut, David L. Jordan, Robert G. Wilson, Mark L. Bernards, Bryan G. Young Journal: Weed Technology / Volume 23 / Issue 1 / March 2009 Corn and soybean growers in Illinois, Indiana, Iowa, Mississippi, Nebraska, and North Carolina, as well as cotton growers in Mississippi and North Carolina, were surveyed about their views on changes in problematic weeds and weed pressure in cropping systems based on a glyphosate-resistant (GR) crop. No growers using a GR cropping system for more than 5 yr reported heavy weed pressure. 
Over all cropping systems investigated (continuous GR soybean, continuous GR cotton, GR corn/GR soybean, GR soybean/non-GR crop, and GR corn/non-GR crop), 0 to 7% of survey respondents reported greater weed pressure after implementing rotations using GR crops, whereas 31 to 57% felt weed pressure was similar and 36 to 70% indicated that weed pressure was less. Pigweed, morningglory, johnsongrass, ragweed, foxtail, and velvetleaf were mentioned as their most problematic weeds, depending on the state and cropping system. Systems using GR crops improved weed management compared with the technologies used before the adoption of GR crops. However, the long-term success of managing problematic weeds in GR cropping systems will require the development of multifaceted integrated weed management programs that include glyphosate as well as other weed management tactics. Efficacy of Several Herbicides on Yellow Archangel (Lamiastrum galeobdolon) Timothy W. Miller, Alison D. Halpern, Frances Lucero, Sasha H. Shaw Journal: Invasive Plant Science and Management / Volume 7 / Issue 2 / June 2014 Yellow archangel is a twining perennial species that produces a dense evergreen canopy and may negatively affect forest floor vegetation. Because it is spreading rapidly in the Pacific Northwest (PNW), greenhouse and field trials were conducted on yellow archangel to determine its relative sensitivity to several herbicides. Products that slowed or prevented yellow archangel regrowth at 9 mo after treatment (MAT) in one or both iterations of the greenhouse trial were aminopyralid, dichlobenil, glufosinate, imazapyr, isoxaben, metsulfuron, sulfometuron, triclopyr amine, and triclopyr ester + 2,4-D ester. In the field trial at 10 MAT, triclopyr and imazapyr were controlling 81 and 78% of treated yellow archangel, respectively, similar to aminopyralid, glyphosate, and metsulfuron (61 to 65%). Two applications of 20% acetic acid or 20% clove oil were controlling 53% at the same timing.
At 13 MAT, only imazapyr and glyphosate were still providing good control of yellow archangel (81 and 80%, respectively), while all other products were controlling the weed at 53% or less. By 7 or 8 MAT after a second application, only imazapyr and glyphosate provided effective control of yellow archangel (86 to 94%). Spectral reflectance curves to distinguish soybean from common cocklebur (Xanthium strumarium) and sicklepod (Cassia obtusifolia) grown with varying soil moisture W. Brien Henry, David R. Shaw, Kambham R. Reddy, Lori M. Bruce, Hrishikesh D. Tamhankar Journal: Weed Science / Volume 52 / Issue 5 / October 2004 Experiments were conducted to examine the use of spectral reflectance curves for discriminating between plant species across moisture levels. Weed species and soybean were grown at three moisture levels, and spectral reflectance data and leaf water potential were collected every other day after the imposition of moisture stress at 8 wk after planting. Moisture stress did not reduce the ability to discriminate between species. As moisture stress increased, it became easier to distinguish between species, regardless of analysis technique. Signature amplitudes of the top five bands, discrete wavelet transforms, and multiple indices were promising analysis techniques. Discriminant models created from data set of 1 yr and validated on additional data sets provided, on average, approximately 80% accurate classification among weeds and crop. This suggests that these models are relatively robust and could potentially be used across environmental conditions in field scenarios. Remote Sensing to Detect Herbicide Drift on Crops Journal: Weed Technology / Volume 18 / Issue 2 / June 2004 Glyphosate and paraquat herbicide drift injury to crops may substantially reduce growth or yield. Determining the type and degree of injury is of importance to a producer. 
This research was conducted to determine whether remote sensing could be used to identify and quantify herbicide injury to crops. Soybean and corn plants were grown in 3.8-L pots to the five- to seven-leaf stage, at which time, applications of nonselective herbicides were made. Visual injury estimates were made, and hyperspectral reflectance data were recorded 1, 4, and 7 d after application (DAA). Several analysis techniques including multiple indices, signature amplitude (SA) with spectral bands as features, and wavelet analysis were used to distinguish between herbicide-treated and nontreated plants. Classification accuracy using SA analysis of paraquat injury on soybean was better than 75% for both 1/2- and 1/8× rates at 1, 4, and 7 DAA. Classification accuracy of paraquat injury on corn was better than 72% for the 1/2× rate at 1, 4, and 7 DAA. These data suggest that hyperspectral reflectance may be used to distinguish between healthy plants and injured plants to which herbicides have been applied; however, the classification accuracies remained at 75% or higher only when the higher rates of herbicide were applied. Applications of a 1/2× rate of glyphosate produced 55 to 81% soybean injury and 20 to 50% corn injury 4 and 7 DAA, respectively. However, using SA analysis, the moderately injured plants were indistinguishable from the uninjured controls, as represented by the low classification accuracies at the 1/8-, 1/32-, and 1/64× rates. The most promising technique for identifying drift injury was wavelet analysis, which successfully distinguished between corn plants treated with either the 1/8- or the 1/2× rates of paraquat compared with the nontreated corn plants better than 92% 1, 4, and 7 DAA. These analysis techniques, once tested and validated on field scale data, may help determine the extent and the degree of herbicide drift for making appropriate and, more importantly, timely management decisions. 
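The classification workflow running through these remote-sensing abstracts — extract spectral features from reflectance data, train a discriminant model on one data set, then validate on an independent one — can be sketched with a minimal nearest-centroid classifier. This is a simplified stand-in for the discriminant and wavelet analyses actually used in the studies, run here on purely synthetic data (all values illustrative):

```python
import numpy as np

def train_centroids(X, y):
    """Mean spectral signature per class; rows of X are band reflectances."""
    return {c: X[y == c].mean(axis=0) for c in np.unique(y)}

def classify(X, centroids):
    """Assign each spectrum to the class with the nearest mean signature."""
    labels = list(centroids)
    dists = np.stack([np.linalg.norm(X - centroids[c], axis=1) for c in labels])
    return np.array([labels[i] for i in dists.argmin(axis=0)])

# Synthetic "year 1" training data: 5 bands, two classes (crop vs. weed).
rng = np.random.default_rng(0)
crop = rng.normal(0.50, 0.02, (50, 5))
weed = rng.normal(0.35, 0.02, (50, 5))
X_train = np.vstack([crop, weed])
y_train = np.array([0] * 50 + [1] * 50)

centroids = train_centroids(X_train, y_train)

# Independent "year 2" draw plays the role of the validation data set.
X_val = np.vstack([rng.normal(0.50, 0.02, (20, 5)),
                   rng.normal(0.35, 0.02, (20, 5))])
y_val = np.array([0] * 20 + [1] * 20)
accuracy = (classify(X_val, centroids) == y_val).mean()
```

With well-separated synthetic signatures the cross-data-set accuracy is near perfect; the point of the sketch is only the train-on-one-set, validate-on-another pattern the abstracts describe.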
Remote Sensing to Distinguish Soybean from Weeds After Herbicide Application Journal: Weed Technology / Volume 18 / Issue 3 / September 2004 Two experiments, one focusing on preemergence (PRE) herbicides and the other on postemergence (POST) herbicides, were conducted and repeated in time to examine the utility of hyperspectral remote sensing data for discriminating common cocklebur, hemp sesbania, pitted morningglory, sicklepod, and soybean after PRE and POST herbicide application. Discriminant models were created from combinations of multiple indices. The model created from the second experimental run's data set and validated on the first experimental run's data provided an average of 97% correct classification of soybean and an overall average classification accuracy of 65% for all species. These data suggest that these models are relatively robust and could potentially be used across a wide range of herbicide applications in field scenarios. From the data set pooled across time and experiment types, a single discriminant model was created with multiple indices that discriminated soybean from weeds 88%, on average, regardless of herbicide, rate, or species. Signature amplitudes, an additional classification technique, produced variable results with respect to discriminating soybean from weeds after herbicide application and discriminating between controls and plants to which herbicides were applied; thus, this was not an adequate classification technique. U.S. Farmer Awareness of Glyphosate-Resistant Weeds and Resistance Management Strategies William G. Johnson, Micheal D. K. Owen, Greg R. Kruger, Bryan G. Young, David R. Shaw, Robert G. Wilson, John W. Wilcut, David L. Jordan, Stephen C. Weller A survey of farmers from six U.S. 
states (Indiana, Illinois, Iowa, Nebraska, Mississippi, and North Carolina) was conducted to assess the farmers' views on glyphosate-resistant (GR) weeds and tactics used to prevent or manage GR weed populations in genetically engineered (GE) GR crops. Only 30% of farmers thought GR weeds were a serious issue. Few farmers thought field tillage and/or using a non-GR crop in rotation with GR crops would be an effective strategy. Most farmers did not recognize the role that the recurrent use of an herbicide plays in evolution of resistance. A substantial number of farmers underestimated the potential for GR weed populations to evolve in an agroecosystem dominated by glyphosate as the weed control tactic. These results indicate there are major challenges that the agriculture and weed science communities must face to implement long-term sustainable GE GR-based cropping systems within the agroecosystem. Heritable temperament pathways to early callous–unemotional behaviour Rebecca Waller, Christopher J. Trentacosta, Daniel S. Shaw, Jenae M. Neiderhiser, Jody M. Ganiban, David Reiss, Leslie D. Leve, Luke W. Hyde Journal: The British Journal of Psychiatry / Volume 209 / Issue 6 / December 2016 Early callous–unemotional behaviours identify children at risk for antisocial behaviour. Recent work suggests that the high heritability of callous–unemotional behaviours is qualified by interactions with positive parenting. To examine whether heritable temperament dimensions of fearlessness and low affiliative behaviour are associated with early callous–unemotional behaviours and whether parenting moderates these associations. Using an adoption sample (n=561), we examined pathways from biological mother self-reported fearlessness and affiliative behaviour to child callous–unemotional behaviours via observed child fearlessness and affiliative behaviour, and whether adoptive parent observed positive parenting moderated pathways. 
Biological mother fearlessness predicted child callous–unemotional behaviours via earlier child fearlessness. Biological mother low affiliative behaviour predicted child callous–unemotional behaviours, although not via child affiliative behaviours. Adoptive mother positive parenting moderated the fearlessness to callous–unemotional behaviour pathway. Heritable fearlessness and low interpersonal affiliation traits contribute to the development of callous–unemotional behaviours. Positive parenting can buffer these risky pathways. The Computation of Aerodynamic Support W. R. D. Shaw Journal: The Aeronautical Journal / Volume 22 / Issue 88 / April 1918 Published online by Cambridge University Press: 14 September 2016, p. 109 The function of an aerofoil of circular arc camber is to impart motion in a circular path to a mass of air whose volume is determined by the product of span, chord, and mean sweep, the latter being a linear measure of the depth of the strata of air affected above and below the surface. Defining the neuroanatomic basis of motor coordination in children and its relationship with symptoms of attention-deficit/hyperactivity disorder P. Shaw, D. Weingart, T. Bonner, B. Watson, M. T. M. Park, W. Sharp, J. P. Lerch, M. M. Chakravarty Journal: Psychological Medicine / Volume 46 / Issue 11 / August 2016 Published online by Cambridge University Press: 10 June 2016, pp. 2363-2373 When children have marked problems with motor coordination, they often have problems with attention and impulse control. Here, we map the neuroanatomic substrate of motor coordination in childhood and ask whether this substrate differs in the presence of concurrent symptoms of attention-deficit/hyperactivity disorder (ADHD). Participants were 226 children. 
All completed Diagnostic and Statistical Manual of Mental Disorders, fifth edition (DSM-5)-based assessment of ADHD symptoms and standardized tests of motor coordination skills assessing aiming/catching, manual dexterity and balance. Symptoms of developmental coordination disorder (DCD) were determined using parental questionnaires. Using 3 Tesla magnetic resonance data, four latent neuroanatomic variables (for the cerebral cortex, cerebellum, basal ganglia and thalamus) were extracted and mapped onto each motor coordination skill using partial least squares pathway modeling. The motor coordination skill of aiming/catching was significantly linked to latent variables for both the cerebral cortex (t = 4.31, p < 0.0001) and the cerebellum (t = 2.31, p = 0.02). This effect was driven by the premotor/motor cortical regions and the superior cerebellar lobules. These links were not moderated by the severity of symptoms of inattention, hyperactivity and impulsivity. In categorical analyses, the DCD group showed atypical reduction in the volumes of these regions. However, the group with DCD alone did not differ significantly from those with DCD and co-morbid ADHD. The superior cerebellar lobules and the premotor/motor cortex emerged as pivotal neural substrates of motor coordination in children. The dimensions of these motor coordination regions did not differ significantly between those who had DCD, with or without co-morbid ADHD. Notes on contributors By Stuart Allen, Simon Bainbridge, Andrew Bennett, Toby R. Benis, John Bugg, Sally Bushell, James Chandler, Daniel Cook, Richard Cronin, David Fairer, Michael Ferber, Frances Ferguson, Kurt Fosso, Paul H. Fry, Stephen Gill, Kevis Goodman, Scott Hess, David Higgins, Noel Jackson, Robin Jarvis, Susan M. Levin, Maureen N. Mclane, Samantha Matthews, Tim Milnes, Michael O'Neill, Judith W. 
Page, Alexander Regier, Jonathan Roberts, Daniel Robinson, Ann Wierda Rowland, Philip Shaw, Peter Simonsen, Christopher Stokes, Sophie Thomas, Anne D. Wallace, Joshua Wilner Edited by Andrew Bennett, University of Bristol Book: William Wordsworth in Context Print publication: 12 February 2015, pp ix-xvi Preterm birth affects GABAA receptor subunit mRNA levels during the foetal-to-neonatal transition in guinea pigs J. C. Shaw, H. K. Palliser, D. W. Walker, J. J. Hirst Journal: Journal of Developmental Origins of Health and Disease / Volume 6 / Issue 3 / June 2015 Published online by Cambridge University Press: 09 February 2015, pp. 250-260 Modulation of gamma-aminobutyric acid A (GABAA) receptor signalling by the neurosteroid allopregnanolone has a major role in late gestation neurodevelopment. The objective of this study was to characterize the mRNA levels of GABAA receptor subunits (α4, α5, α6 and δ) that are key to neurosteroid binding in the brain, following preterm birth. Myelination, measured by the myelin basic protein immunostaining, was used to assess maturity of the preterm brains. Foetal guinea pig brains were obtained at 62 days' gestational age (GA, preterm) or at term (69 days). Neonates were delivered by caesarean section, at 62 days GA and term, and maintained until tissue collection at 24 h of age. Subunit mRNA levels were quantified by RT-PCR in the hippocampus and cerebellum of foetal and neonatal brains. Levels of the α6 and δ subunits were markedly lower in the cerebellum of preterm guinea pigs compared with term animals. Importantly, there was an increase in mRNA levels of these subunits during the foetal-to-neonatal transition at term, which was not seen following preterm birth. Myelination was lower in preterm neonatal brains, consistent with marked immaturity. Salivary cortisol concentrations, measured by EIA, were also higher for the preterm neonates, suggesting greater stress. 
We conclude that there is an adaptive increase in the levels of mRNA of the key GABAA receptor subunits involved in neurosteroid action after term birth, which may compensate for declining allopregnanolone levels. The lower levels of these subunits in preterm neonates may heighten the adverse effect of the premature decline in neurosteroid exposure. SOLVENCY II TECHNICAL PROVISIONS FOR GENERAL INSURERS: By the Institute and Faculty of Actuaries General Insurance Reserving Oversight Committee's working party on Solvency II technical provisions: S. Dreksler, C. Allen, A. Akoh-Arrey, J. A. Courchene, B. Junaid, J. Kirk, W. Lowe, S. O'Dea, J. Piper, M. Shah, G. Shaw, D. Storman, S. Thaper, L. Thomas, M. Wheatley, M. Wilson Journal: British Actuarial Journal / Volume 20 / Issue 1 / March 2015 Published online by Cambridge University Press: 06 February 2015, pp. 7-129 This paper brings together the work of the GI Solvency II Technical Provisions working party. The working party was formed in 2009 for the primary purpose of raising awareness of Solvency II and the impact it would have on the work that reserving actuaries do. Over the years, the working party's focus has shifted to exploring and promoting discussion of the many practical issues raised by the requirements and to promoting best practice. To this end, we have developed, presented and discussed many of the ideas contained in this paper at events and forums. However, the size of the subject means that at no one event have we managed to cover all of the areas that the reserving actuary needs to be aware of. This paper brings together our thinking in one place for the first time. We hope experienced practitioners will find it thought provoking, and a useful reference tool. For new practitioners, we hope it helps to get you up-to-speed quickly. Good luck! 9 - Zooplankton Identification Manual for North European Seas (ZIMNES) from Part II - The products of descriptive taxonomy By L. C. Hastie, J. Rasmussen, M. V. 
Angel, G. A. Boxshall, S. J. Chambers, D. V. P. Conway, S. Fielding, A. Ingvarsdottir, A. W. G. John, S. J. Hay, S. P. Milligan, A. L. Mulford, G. J. Pierce, M. Shaw, M. Wootton Edited by Mark F. Watson, Royal Botanic Garden Edinburgh, Chris H. C. Lyal, Natural History Museum, London, Colin A. Pendry, Royal Botanic Garden Edinburgh Book: Descriptive Taxonomy Print publication: 08 January 2015, pp 107-110
Kiselev, Denis Dmitrievich
Total publications: 20 (20); in MathSciNet: 15 (15); in zbMATH: 15 (15); in Web of Science: 14 (14); in Scopus: 20 (20). Cited articles: 15. Citations in Math-Net.Ru: 39; in Web of Science: 5; in Scopus: 20. Presentations: 9.
Doctor of physico-mathematical sciences (2018). Speciality: 01.01.06 (Mathematical logic, algebra, and number theory). Birth date: 10.03.1987.
Keywords: finite group, representation of a finite group, Schur index, embedding problem, Galois theory, chattering control.
UDC: 512.547.2, 512.622, 512.623.3, 512.623.32, 517.977.5
Galois theory, embedding problem, Schur index, generalised Fuller optimal control problem
Main publications:
D. D. Kiselev, "Optimal bounds for the Schur index and the realizability of representations" [in Russian], Matem. Sbornik, 205:4 (2014), 69-78
D. D. Kiselev, "Examples of embedding problems the only solutions of which are fields" [in Russian], UMN, 68:4 (2013), 181-182
M. I. Zelikin, D. D. Kiselev, L. V. Lokutsievskii, "Optimal control and Galois theory" [in Russian], Matem. Sbornik, 204:11 (2013), 83-98
http://www.mathnet.ru/eng/person53027
List of publications on Google Scholar
http://zbmath.org/authors/?q=ai:kiselev.denis-d
https://mathscinet.ams.org/mathscinet/MRAuthorID/1017852
http://elibrary.ru/author_items.asp?spin=6679-5708
ISTINA http://istina.msu.ru/workers/63332570
http://www.researcherid.com/rid/C-4028-2017
http://www.scopus.com/authid/detail.url?authorId=55538373900
https://www.researchgate.net/profile/Denis_Kiselev3
Full list of scientific publications:
1. D. D. Kiselev, "Minimal $p$-extensions and the embedding problem", Communications in Algebra, 46:1 (2018), 290–321 (cited: 2)
2. D. D.
Kiselev, "Ultrasoluble coverings of some nilpotent groups by a cyclic group over number fields and related questions", Izv. Math., 82:3 (2018), 512–531 3. D. D. Kiselev, A. V. Yakovlev, "Ultrasolvable and Sylow extensions with cyclic kernel", St. Petersburg Math. J., 30:1 (2019), 95–102 4. D. D. Kiselev, "Optimal Control, Everywhere Dense Torus Winding, and Wolstenholme Primes", Moscow Univ. Math. Bull., 73:4 (2018), 162–163 5. D. D. Kiselev, "Galois theory, the classification of finite simple groups and a dense winding of a torus", Sb. Math., 209:6 (2018), 840–849 6. D. D. Kiselev, "Applications of Galois Theory to Optimal Control", CEUR Workshop Proceedings, 1894 (2017), 50–56 (cited: 1) 7. D. D. Kiselev, "Metacyclic $2$-extensions with cyclic kernel and ultrasolvability questions", J. Math. Sci. (N.Y.), 240:4 (2019), 447–458 8. D. D. Kiselev, "On a dense winding of the 2-dimensional torus", Sb. Math., 207:4 (2016), 581–589 (cited: 2) 9. S. I. Bogataya, S. A. Bogatyi, D. D. Kiselev, "Powers of elements of the series substitution group $\mathcal J(\mathbb Z_2)$", Topology and its applications, 201, 2014 International Conference on Topology and its Applications, Nafpaktos, Greece (2016), 29–56 (cited: 2) 10. D. D. Kiselev, "On the Ultrasolvability of p-Extensions of an Abelian Group by a Cyclic Kernel", J. Math. Sci. (N.Y.), 232:5 (2018), 662–676 11. D. D. Kiselev, I. A. Chubarov, "On the Ultrasolvability of Some Classes of Minimal Nonsplit p-Extensions with Cyclic Kernel for p > 2", J. Math. Sci. (N.Y.), 232:5 (2018), 677–692 12. D. D. Kiselev, "Ultrasolvable embedding problems with cyclic kernel", Russian Math. Surveys, 71:6 (2016), 1149–1151 (cited: 1) 13. D. D. Kiselev, "Explicit Embeddings of Finite abelian $p$-Groups in the Group $\mathcal J(\mathbb F_p)$", Math. Notes, 97:1 (2015), 63–68 (cited: 1) 14. D. D. Kiselev, "Ultrasolvable Covering of the Group $Z_2$ by the groups $Z_8$, $Z_{16}$ and $Q_8$", J. Math. Sci. (N. Y.), 219:4 (2016), 523–538 15. D. D. 
Kiselev, "Optimal bounds for the Schur index and the realizability of representations", Sb. Math., 205:4 (2014), 522–531
16. D. D. Kiselev, "Examples of embedding problems the only solutions of which are fields", Russian Math. Surveys, 68:4 (2013), 776–778 (cited: 2)
17. M. I. Zelikin, D. D. Kiselev, L. V. Lokutsievskii, "Optimal control and Galois theory", Sb. Math., 204:11 (2013), 1624–1638 (cited: 3)
18. D. D. Kiselev, "A bound for the Schur index of irreducible representations of finite groups", Sb. Math., 204:8 (2013), 1152–1160 (cited: 1)
19. D. D. Kiselev, B. B. Lur'e, "Ultrasolvability and singularity in the embedding problem", J. Math. Sci. (N. Y.), 199:3 (2014), 306–312 (cited: 3)
20. D. D. Kiselev, "Splitting fields of finite groups", Izv. Math., 76:6 (2012), 1163–1174 (cited: 1)
Presentations in Math-Net.Ru
1. Ultrasolvable coverings and related questions of Galois theory. D. D. Kiselev. Research Seminar of the Department of Higher Algebra MSU
2. Singular extremals in the generalised Fuller problem. M. I. Zelikin, D. D. Kiselev, L. V. Lokutsievskiy. June 1, 2017 15:45
3. Application of Galois theory in optimal control. Arithmetic geometry seminar
4. On a number-theoretic congruence and its application in group theory
5. An application of Galois theory to optimal control. General Mathematics Seminar of the St. Petersburg Division of Steklov Institute of Mathematics, Russian Academy of Sciences
6. Galois groups and optimal control. VI Workshop and Conference on Lie Algebras, Algebraic Groups, and Invariant Theory
7. Optimal control, Galois theory and an everywhere dense winding of the torus
8. "On an everywhere dense winding of the 2-dimensional torus II". Contemporary Problems in Number Theory
9.
"On an everywhere dense winding of the 2-dimensional torus"
Russian Academy of Foreign Trade
Lomonosov Moscow State University, Faculty of Mechanics and Mathematics
Gaze behavior during visuomotor tracking with complex hand-cursor dynamics
James Mathew (Aix-Marseille Université, CNRS, Institut de Neurosciences de la Timone, Marseille, France; current affiliation: Institute of Neuroscience, Institute of Communication & Information Technologies, Electronics & Applied Mathematics, Université Catholique de Louvain, Louvain-la-Neuve, Belgium)
J. Randall Flanagan (Department of Psychology and Centre for Neurosciences Studies, Queen's University, Ontario, Canada)
Frederic R. Danion ([email protected])
James Mathew, J. Randall Flanagan, Frederic R. Danion; Gaze behavior during visuomotor tracking with complex hand-cursor dynamics. Journal of Vision 2019;19(14):24. doi: https://doi.org/10.1167/19.14.24.
The ability to track a moving target with the hand has been extensively studied, but few studies have characterized gaze behavior during this task. Here we investigate gaze behavior when participants learn a new mapping between hand and cursor motion, such that the cursor represented the position of a virtual mass attached to the grasped handle via a virtual spring. Depending on the experimental condition, haptic feedback consistent with mass-spring dynamics could also be provided. For comparison, a simple one-to-one hand-cursor mapping was also tested. We hypothesized that gaze would be drawn, at times, to the cursor in the mass-spring conditions, especially in the absence of haptic feedback. As expected, hand tracking performance was less accurate under the spring mapping, but gaze behavior was virtually unaffected by the spring mapping, regardless of whether haptic feedback was provided. Specifically, relative gaze position between target and cursor, rate of saccades, and gain of smooth pursuit were similar under both mappings and both haptic feedback conditions.
We conclude that even when participants are exposed to a challenging hand-cursor mapping, gaze is primarily concerned with ongoing target motion, suggesting that peripheral vision is sufficient to monitor cursor position and to update hand movement control.

The ability to track moving objects with the hand, or an object held in the hand, is important in many natural tasks and has been extensively studied (Foulkes & Miall, 2000; Miall, Weir, & Stein, 1993; Poulton, 1974; Streng, Popa, & Ebner, 2018). However, how such tracking behavior is supported by gaze has received less attention (Danion & Flanagan, 2018; Miall, Reckess, & Imamizu, 2001; Xia & Barnes, 1999). Previous studies have shown that when tracking simple (sinusoidal) or complex trajectories with a cursor controlled by the hand (Danion & Flanagan, 2018; Koken & Erkelens, 1992; Niehorster, Siu, & Li, 2015; Tramper & Gielen, 2011), gaze typically leads the hand while both gaze and hand tend to lag behind the target. However, previous work has mostly focused on simple hand-cursor mappings, and it is not clear whether this observation holds for arm movements performed under more complex mappings, such as those that arise when tracking with a hand-held object with its own dynamics (e.g., a mass attached to the hand via a spring). Previous work has explored the effects of delaying (Foulkes & Miall, 2000; Miall & Jackson, 2006; Vercher & Gauthier, 1992), inverting (Grigorova & Bock, 2006; Vercher, Quaccia, & Gauthier, 1995), and rotating visual feedback of the hand (Gouirand, Mathew, Brenner, & Danion, 2019). However, the effects of more complex perturbations, linked to object dynamics, on eye-hand coordination remain to be fully explored. When asked to track a moving target with the hand, not only do participants need to monitor the target position, they also need to keep track of the current cursor position.
Evaluating the difference between target and cursor position is essential for accurate hand tracking. When people perform full arm movements under a simple (one-to-one) hand–cursor relationship, their gaze is much closer to the target than to the cursor (Danion & Flanagan, 2018), suggesting that an estimate of cursor position is accessible through peripheral vision and/or arm (efferent/afferent) signals. We recently showed that when participants track a moving target with a joystick, their gaze is also closer to the target than to the cursor after adapting to a visuomotor rotation that rotates the cursor away from the hand but preserves a one-to-one mapping between cursor speed and hand speed (Gouirand et al., 2019). The goal of the current study was to determine whether fixating the target with the eyes is a gaze strategy that extends to full arm movements performed under more complex (nonlinear) hand–cursor mappings. We asked participants to move a cursor controlled by the hand. In the spring condition, the cursor behaved like a mass attached to the hand by means of a spring (Danion, Diamond, & Flanagan, 2012; Dingwell, Mah, & Mussa-Ivaldi, 2002; Landelle, Montagnini, Madelain, & Danion, 2016; Nagengast, Braun, & Wolpert, 2009). We examined hand and gaze behavior during both initial learning and subsequent steady-state performance. For comparison, we also examined a rigid condition in which the cursor behaved like a rigidly attached mass, moving directly with the hand. We hypothesized that, during learning, the mass-spring dynamics in the spring condition would affect gaze behavior because the location of the controlled object (cursor) cannot be easily estimated based on arm movement related signals. More specifically, we reasoned that, in comparison to the rigid condition, gaze would become more equally shared between the cursor and target. We also expected that the need to monitor cursor position with gaze would decrease as hand tracking improves during learning.
When manipulating nonrigid objects, the contribution of haptic feedback has been shown to be valuable (Danion et al., 2012; although see Hasson, Nasseroleslami, Krakauer, & Sternad, 2012; Huang, Gillespie, & Kuo, 2006). Therefore, we also included a spring haptic condition in which we applied forces to the arm that simulated a mass-spring load acting at the hand. We reasoned that the provision of haptic feedback might improve the sensory estimate of the cursor position, thereby allowing gaze to be released from monitoring the cursor. For each of these three conditions (rigid, spring, and spring haptic) participants performed 40 consecutive trials, allowing us to monitor possible changes in gaze behavior as learning progressed. In contrast to our hypotheses, and despite marked differences in hand tracking performance across cursor–target mappings and across trials, results showed only modest changes in gaze behavior, such that in all conditions gaze was predominantly directed toward the target. Eighteen self-reported right-handed participants (aged 24.2 ± 6.9 years; 10 women, 8 men) participated in this study. None of the participants had neurological or visual disorders. They were naïve as to the experimental conditions and hypotheses, and had no previous experience of ocular motor testing. All participants gave written informed consent prior to the study. The experimental protocol was approved by the General Research Ethics Board at Queen's University in compliance with the Canadian Tri-Council Policy on Ethical Conduct for Research Involving Humans. Each experimental session lasted about one hour, and participants were compensated $15 for their participation. The experimental setup is similar to the one used in our recent study (Danion & Flanagan, 2018), so we report only key information here. Our setup is illustrated in Figure 1.
Participants were comfortably seated and performed the tasks with their arm supported by, and secured to, a robotic exoskeleton (Kinarm, BKIN Technologies, Kingston, ON, Canada) that allowed the arm to move in the horizontal plane and could apply torques at the elbow and shoulder joint to simulate loads acting on the hand (Scott, 1999). Visual stimuli (i.e., target and cursor) were projected from above onto an opaque mirror positioned over the arm, and appeared in the plane of arm motion. Participants could not see their actual hand or arm. Hand movements were recorded at a sampling rate of 1000 Hz with a resolution of 0.1 mm. Top view of the experimental setup. Both arms of the participant were inserted into an exoskeleton. An opaque mirror placed above the arms blocked their view. A purple target was projected on the mirror from above and appeared at the height of the hand. The dotted line shows an example target path (not visible to the participant). A red cursor representing the right index fingertip was also displayed and the participant was instructed to move his/her right arm so as to bring the cursor as close as possible to the moving target. The cursor and the target were represented, respectively, as red and purple filled circles (0.6 cm in diameter). A built-in video based eye tracker (Eyelink 1000; SR Research Ltd., Ottawa, ON, Canada) recorded eye movements at 500 Hz. Before the experiment, gaze position in the work plane was calibrated by having participants fixate a grid of targets. When looking at the center of the region of the work plane (and the center of target motion), a 1 cm change in gaze position corresponded to a 1.6° change in gaze angle. Two types of hand-cursor visual mapping were tested. Under the RIGID mapping, the cursor position directly matched the position of the hand in the horizontal plane. No haptic feedback was implemented under the RIGID mapping; the motors of the robotic device were simply turned off. 
Under the SPRING mapping, the cursor behaved as a mass attached to the hand by means of a spring. We used the following parameters for the simulation: mass = 1 kg, stiffness = 40 N/m, damping = 1.66 N·s/m, resting length = 0 m. These values are about one third of the values used in previous studies investigating the manipulation of nonrigid objects (Danion et al., 2012; Dingwell et al., 2002; Dingwell, Mah, & Mussa-Ivaldi, 2004; Landelle et al., 2016; Nagengast et al., 2009), but a similar parameter setting was used in our recent study (Danion, Mathew, & Flanagan, 2017). The rationale for decreasing object inertia was to prevent possible fatigue effects while keeping a 1 Hz resonance frequency as in other studies; the resonance frequency (F) of a mass-spring system depends on its mass (m) and its stiffness (k) such that \begin{equation}\tag{1}F = {1 \over {2\pi }}\sqrt {{k \over m}}\end{equation} Depending on the experimental condition, haptic feedback could be provided (SPRING-HAPT) or not (SPRING). When haptic feedback was provided, the same parameters were employed to simulate the physical and visual behavior of the cursor. In the absence of haptic feedback, the motors of the robotic device were turned off. Overall, our experimental design included three conditions: RIGID, SPRING, and SPRING-HAPT. For each experimental condition participants were instructed to track a target with the cursor as accurately as possible. There was no explicit requirement in terms of gaze behavior. The motion of the target resulted from the combination of sinusoids: two along the x axis (one fundamental and a second or third harmonic) and two along the y axis (same procedure; see Figure 1 for axes).
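The mass-spring cursor described above can be sketched numerically. The following minimal Python sketch is our own illustration, not the authors' implementation: the function names and the forward-Euler integrator are assumptions, while the parameters (m = 1 kg, k = 40 N/m, damping 1.66) are taken from the text.

```python
import numpy as np

# Equation 1: resonance frequency of a mass-spring system.
def resonance_frequency(k=40.0, m=1.0):
    return np.sqrt(k / m) / (2.0 * np.pi)   # ~1 Hz with the paper's parameters

# Illustrative forward-Euler simulation of the SPRING cursor: a mass on a
# damped spring whose other end follows the hand (resting length 0 m).
def simulate_spring_cursor(hand_xy, dt=0.001, m=1.0, k=40.0, b=1.66):
    hand_xy = np.asarray(hand_xy, dtype=float)
    pos = np.empty_like(hand_xy)
    p, v = hand_xy[0].copy(), np.zeros(2)
    pos[0] = p
    for i in range(1, len(hand_xy)):
        acc = (-k * (p - hand_xy[i]) - b * v) / m   # spring + damping forces
        v += acc * dt
        p += v * dt
        pos[i] = p
    return pos
```

With a step displacement of the hand, the simulated cursor overshoots and oscillates around the new hand position at roughly the 1 Hz resonance before settling, which is the lag-and-oscillation behavior participants had to anticipate under the SPRING mappings.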
We used the following equations to construct target motion: \begin{equation}\tag{2}{x_t} = {A_{1x}}\cos \omega t + {A_{2x}}\cos \left( {{h_x}\omega t - {\varphi _x}} \right)\end{equation} \begin{equation}\tag{3}{y_t} = {A_{1y}}\sin \omega t + {A_{2y}}\sin \left( {{h_y}\omega t - {\varphi _y}} \right)\end{equation} A similar technique was used elsewhere to generate pseudo-random 2D patterns while preserving smooth changes in velocity and direction (Mrotek & Soechting, 2007; Soechting, Rao, & Juveli, 2010). A total of five different patterns were used throughout the experiment (see one example in Figure 1). All trajectories had a period of 5 s (fundamental = 0.2 Hz). The parameters (gain, frequency, phase, and harmonics) used to generate all our patterns can be found in our previous study (Danion & Flanagan, 2018). They were selected so as to maintain the same path length over one cycle (78 cm). Given that each trial was 10 s long (i.e., two cycles), the total distance covered by the target was 156 cm, resulting in a mean tangential target velocity of 15.6 cm/s. Before the experimental session each participant completed a familiarization session of five trials under the RIGID mapping. Each participant then completed one block of 40 trials in each experimental condition. The order of the three blocks was randomized across participants. Overall, a total of 120 experimental trials were collected per participant. The overall duration of the experiment averaged 60 min. To assess the participants' ability to perform our hand-tracking task, the following dependent variables were extracted from each trial. For all trials we computed the mean Euclidean distance between cursor position and target position. The temporal relationship between cursor and target movement, and between eye and target movement, was estimated by means of cross-correlations that simultaneously took into account the vertical and horizontal axes.
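The sum-of-sinusoids construction above can be written directly in code. The amplitude, harmonic, and phase values below are illustrative placeholders only; the actual parameter sets are listed in Danion & Flanagan (2018).

```python
import numpy as np

# Target motion as a sum of sinusoids (Equations 2 and 3).
# Parameter values here are placeholders, not the published patterns.
def target_position(t, f0=0.2, A1x=5.0, A2x=3.0, hx=3, phx=0.4,
                    A1y=5.0, A2y=3.0, hy=2, phy=1.1):
    w = 2.0 * np.pi * f0            # fundamental = 0.2 Hz -> 5 s period
    x = A1x * np.cos(w * t) + A2x * np.cos(hx * w * t - phx)
    y = A1y * np.sin(w * t) + A2y * np.sin(hy * w * t - phy)
    return x, y
```

Because the harmonics are integers, the pattern repeats exactly every 5 s, so a 10 s trial covers two identical cycles while still changing velocity and direction smoothly.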
To simultaneously cross-correlate horizontal (x) and vertical (y) position signals between effectors, we interleaved the x and y signals, and always time shifted these interleaved signals by a multiple of two samples (Danion & Flanagan, 2018; Flanagan, Terao, & Johansson, 2008). A positive value indicates that the cursor was lagging behind the target. Regarding gaze behavior, we first performed a sequence of analyses to separate periods of smooth pursuit, saccades, and blinks from the raw eye position signals. The identification and removal of the blinks (1% of the total trial duration on average) was performed by visual inspection. Eye signals were then low-pass filtered with a fourth-order Butterworth filter using a cutoff frequency of 25 Hz. The resultant signals were differentiated to obtain velocity traces, and then were low-pass filtered again with a cutoff frequency of 25 Hz to remove the noise from the numerical differentiation. These eye velocity signals were differentiated to provide acceleration traces that we also low-pass filtered at 25 Hz to remove noise. The identification of saccades was based on acceleration and deceleration peaks of the eye (>1,500 cm/s²). Based on these computations, periods of pursuit and of saccades were extracted. To better characterize saccadic activity we computed for each trial the mean saccade rate (average number of saccades per second). To evaluate the performance of smooth pursuit, we computed its mean tangential velocity as well as its gain (SP gain) by averaging the ratio between instantaneous gaze and target tangential velocities (only samples where target tangential velocity was greater than 10 cm/s were considered; Landelle et al., 2016).
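The interleaving trick for the 2-D cross-correlation can be sketched as follows. This is a simplified illustration with a hypothetical function name; the published analysis may differ in its windowing and normalization details.

```python
import numpy as np

# Lag between two 2-D trajectories via cross-correlation of interleaved
# x/y signals. Shifting only by multiples of two interleaved entries
# displaces both axes by the same whole number of samples.
def tracking_lag(follower, leader, fs=1000.0, max_lag_s=0.5):
    a = np.column_stack(follower).reshape(-1)   # x0, y0, x1, y1, ...
    b = np.column_stack(leader).reshape(-1)
    a = a - a.mean()
    b = b - b.mean()
    best_r, best_lag = -np.inf, 0
    for lag in range(int(max_lag_s * fs) + 1):
        s = 2 * lag                             # two interleaved entries per sample
        r = np.dot(a[s:], b[:len(b) - s]) / (len(b) - s)
        if r > best_r:
            best_r, best_lag = r, lag
    return best_lag / fs                        # seconds; positive = follower lags
```

A positive returned lag means the follower (e.g., the cursor) trails the leader (e.g., the target), matching the sign convention stated above.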
Finally, to assess the relative contribution of saccades and smooth pursuit, for each trial we computed the total distance traveled by the eye with saccades and then expressed this as a percentage of the total distance traveled by the eye using both saccades and smooth pursuit (Orban de Xivry et al., 2006; Landelle et al., 2016). To assess how gaze was shared between target and cursor, we developed the following procedure (see also Danion & Flanagan, 2018). At each point in time we projected the gaze position onto an axis connecting the target and cursor and determined the relative position along this axis, with 0 indicating that gaze projected onto the target, 1 indicating that gaze projected onto the cursor, and 0.5 indicating that gaze was equidistant between the cursor and target. We will refer to this variable as the relative projected gaze position. For all the analyses described above, the first second of each trial was discarded. Finally, as an aside, hand–cursor dynamics was investigated by computing the mean distance between the cursor and the hand position, both projected on the opaque mirror (for a similar approach, see Danion et al., 2012). Because this distance is 0 under the RIGID mapping, it is not presented. The dynamics of hand movements was also investigated by means of hand tangential velocity. Specifically, we computed the mean tangential velocity and its associated fluctuation (SD) over each trial. Two-way repeated measures ANOVAs were used to assess the effects of TRIAL (2 first vs. 2 last trials) and MAPPING (RIGID, SPRING, SPRING-HAPT). Newman-Keuls corrections were used for post hoc t tests to correct for multiple comparisons. A conventional 0.05 significance threshold was used for all analyses. Typical trials Figure 2 plots typical trials performed by the same participant in each of the three experimental conditions.
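The relative projected gaze position described above is a simple vector projection; a minimal sketch (the function name is ours):

```python
import numpy as np

# Project gaze onto the target->cursor axis: 0 = on target, 1 = on cursor,
# 0.5 = equidistant between the two. Components orthogonal to the axis
# are ignored by construction.
def relative_projected_gaze(gaze, target, cursor):
    g, t, c = (np.asarray(p, dtype=float) for p in (gaze, target, cursor))
    axis = c - t
    return float(np.dot(g - t, axis) / np.dot(axis, axis))
```

For example, with the target at (0, 0) and the cursor at (5, 0), a gaze position of (1, 0.5) projects to 0.2, i.e., one fifth of the way from target to cursor, close to the roughly 20% peak reported in the Results.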
As can be seen, cursor and gaze were always lagging behind the target; however, this lag was substantially smaller for gaze. It is also apparent that tracking performance (i.e., how well the cursor tracked the target) was better under the RIGID mapping than under the SPRING ones. In the next section we analyze these observations in more detail. Typical trials by the same participant in each of the three experimental conditions. Target, cursor, eye, and hand position signals during early exposure. Although each trial was 10 s long, for clarity only 5 s of signals are displayed. For convenience, we have selected three trials that used the same target trajectory. Saccadic eye movements are depicted by red segments. Tracking performance The accuracy with which the cursor tracked the target was greatly influenced by the hand–cursor mapping, with lower performance under both SPRING mappings. Figure 3 shows mean tracking performance as a function of trials and experimental conditions. Regarding the cursor–target distance (Figure 3A), the ANOVA showed a main effect of MAPPING, F(2, 34) = 247.27, p < 0.001; a main effect of TRIAL, F(1, 17) = 15.79, p < 0.001; and an interaction, F(2, 34) = 15.14, p < 0.001. Post hoc analyses of the MAPPING effect indicated that cursor–target distance was nearly doubled under SPRING and SPRING-HAPT compared to RIGID (4.8 vs. 2.5 cm; p < 0.001); however, there was no significant difference between SPRING and SPRING-HAPT (p = 0.21). Post hoc analysis of the interaction revealed that cursor–target distance decreased across trials under both SPRING mappings (p < 0.001), confirming that prolonged experience benefitted cursor tracking. However, no similar improvement was observed under the RIGID mapping (p = 0.65). Average cursor tracking performance as a function of experimental condition and trial number. A. Euclidean distance between cursor and target. B. Temporal lag between cursor and target (a positive lag indicates that the hand is lagging behind the target).
Error bars represent SEM. For both indexes, note the lower performance under the SPRING mappings. Rather similar observations were obtained when examining the temporal lag between cursor and target (see Figure 3B). The ANOVA showed a main effect of MAPPING, F(2, 34) = 138.41, p < 0.001, and an interaction, F(2, 34) = 11.64, p < 0.001, but no TRIAL effect, F(1, 17) = 0.05, p = 0.83. Post hoc analysis of MAPPING revealed that the lag was more than doubled under both SPRING mappings compared to the RIGID mapping (245 vs. 112 ms; p < 0.001). Post hoc analysis of the interaction revealed a significant decrease in lag across trials under SPRING-HAPT (p < 0.001), but not under SPRING and RIGID (p > 0.10). Overall, these analyses support the view that the SPRING mappings were more challenging than the RIGID one, even though tracking improved across trials with the SPRING mappings. Hand–cursor dynamics under SPRING and SPRING-HAPT Figure 4A shows the mean hand tangential velocity as a function of trials for each SPRING mapping. As can be seen, hand movements were typically slower under the SPRING conditions. Indeed, the ANOVA showed a main effect of MAPPING, F(2, 34) = 25.52, p < 0.001, with post hoc comparisons indicating lower hand velocities under SPRING and SPRING-HAPT compared to RIGID (p < 0.001). Although the provision of haptic feedback tended to increase hand velocity, the difference between SPRING-HAPT and SPRING did not reach significance (p = 0.09). The ANOVA also showed a main effect of TRIAL, F(1, 17) = 7.39, p < 0.05, and a TRIAL by MAPPING interaction, F(2, 34) = 5.37, p < 0.01, linked to a decrease in hand velocity across trials in the SPRING conditions only. Further analyses of fluctuations in hand tangential velocity (see Figure 4B) revealed that hand movements were also smoother under the SPRING conditions.
Indeed, the ANOVA showed a main effect of MAPPING, F(2, 34) = 16.90, p < 0.001, with post hoc comparisons indicating smaller fluctuations under SPRING and SPRING-HAPT compared to RIGID (p < 0.001). Although haptic feedback tended to increase hand velocity fluctuations, the difference between SPRING-HAPT and SPRING did not reach significance (p = 0.16). The ANOVA also showed a main effect of TRIAL, F(1, 17) = 36.67, p < 0.001, and a TRIAL by MAPPING interaction, F(2, 34) = 7.78, p < 0.01, associated with a decrease in hand velocity fluctuations in the SPRING conditions only. Overall, these analyses demonstrate that participants employed slower and smoother hand movements when moving the cursor in the SPRING conditions. Average hand tangential velocity and its fluctuations as a function of experimental condition and trial number. Error bars represent SEM. Figure 5 shows the mean distance between hand and cursor as a function of trials for each SPRING mapping. Averaged across trials and mappings, the mean hand–cursor distance was 2.1 cm. The ANOVA showed a significant difference between SPRING and SPRING-HAPT, F(1, 17) = 72.51, p < 0.001, such that the provision of haptic feedback led to a smaller hand–cursor distance (1.8 vs. 2.3 cm). There was also an effect of TRIAL, F(1, 17) = 46.05, p < 0.001, due to a progressive reduction in hand–cursor distance under both mappings. At the temporal level, as expected, the cursor lagged behind the hand. Again we found a significant difference between SPRING and SPRING-HAPT, F(1, 17) = 89.9, p < 0.001, such that the provision of haptic feedback led to a smaller temporal lag (61 vs. 92 ms). There was also an interaction between TRIAL and MAPPING, F(1, 17) = 19.98, p < 0.001, due to a progressive reduction in hand–cursor lag across trials under SPRING but not under SPRING-HAPT.
Overall, although prolonged exposure and haptic feedback allowed participants to keep the cursor closer to the hand, the cursor remained temporally and spatially dissociated from the hand motion. Average hand–cursor distance as a function of experimental condition and trial number. Error bars represent SEM. Gaze behavior Figure 6 presents the time course of the mean group eye–target (panel A) and eye–cursor (panel B) distance as a function of experimental conditions. The comparison between these two panels indicates that the eye was usually closer to the target than to the cursor. Indeed, although the eye–target distance was on the order of 1–2 cm, the eye–cursor distance ranged between 2 and 5 cm, a twofold difference. Although the eye–target distance increased for both SPRING mappings compared to RIGID (1.8 vs. 1 cm), a similar phenomenon was observed for the eye–cursor distance (3.5 vs. 2.2 cm), suggesting that the relative position of gaze did not change much across mappings. Gaze position as a function of experimental condition and trial number. A. Euclidean distance between eye and target. B. Euclidean distance between eye and cursor. Error bars represent SEM. For all mappings, note how gaze is closer to the target than to the cursor. To characterize gaze behavior relative to the cursor and target in more detail, in Figure 7 we show the mean group distribution of the relative projected gaze position for each experimental condition. As can be seen, for each of the three mappings, the distribution had a single peak that was closer to the target than to the cursor. Further analyses comparing early and late trials showed no obvious trend in the location of this peak: 21.8 vs. 22.5%; F(1, 17) = 0.29, p = 0.59. However, a main effect of MAPPING was found, F(2, 34) = 9.18, p < 0.001. Post hoc analysis indicated that the peak was shifted slightly away from the target in SPRING-HAPT compared to the other two mappings (25 vs. 20%; p < 0.01).
Overall, it seems that the relative projected gaze position was rather invariant across TRIALS and MAPPING, with gaze being closer to the target than to the cursor. Distribution of relative projected gaze position in each experimental condition. Mean group distributions are presented by thick lines (with dotted lines indicating ±1 SEM). All distributions had a single peak much closer to the target than the cursor. Saccadic and smooth pursuit activity The rate of saccades was approximately two per second and was rather stable across conditions and trials. The ANOVA showed no main effect of MAPPING, F(2, 34) = 0.48, p = 0.62, or TRIAL, F(1, 17) = 2.92, p = 0.11, and there was no interaction, F(2, 34) = 0.19, p = 0.82. Similar results were obtained for the contribution of saccades to total eye displacement, reaching on average 22% (p > 0.17). Smooth pursuit gain and velocity were also found to be similar across mappings, F(2, 34) < 1.75, p > 0.19, reaching, respectively, 0.94 and 15.7 cm/s on average. Overall, saccadic and smooth pursuit activity appeared rather insensitive to our experimental factors. The main goal of this study was to investigate free gaze behavior under a complex hand–cursor mapping when participants have to track a visual target with a cursor controlled by arm movements. Overall, our experiment yielded the following key findings. As largely expected, hand tracking accuracy was substantially impaired by the SPRING mapping, resulting in a doubling of cursor–target distance and lag. Despite substantial differences in cursor tracking accuracy, only minimal changes were found with respect to gaze behavior. Indeed, gaze position relative to the target and cursor positions was similar under the SPRING and RIGID mappings. In both cases, gaze was typically located between the cursor and the target, but closer to the target than to the cursor (20% vs. 80%).
Analyses of the distribution of the relative position of gaze between the cursor and target showed unimodal distributions, ruling out the possibility that gaze alternated between cursor and target fixation. Furthermore, saccadic and smooth pursuit activity did not change with the hand–cursor mapping. Finally, although the provision of haptic feedback influenced hand behavior, it had virtually no impact on gaze behavior. We will now discuss these findings and their implications in more detail. Cursor tracking and hand behavior are strongly dependent on hand–cursor dynamics As expected, the accuracy of cursor tracking was substantially impaired by the SPRING mapping, resulting in a doubling of both the distance and the time lag between the cursor and target when compared to the RIGID mapping. Although cursor tracking in the SPRING condition improved over the time course of the experiment, it never reached the level observed in the RIGID condition. These observations are consistent with earlier ones emphasizing the real challenge of manipulating nonrigid objects (Danion et al., 2012; Nagengast et al., 2009) even when extended practice is offered (Dingwell et al., 2002; Hasson, Shen, & Sternad, 2012). Regarding hand behavior, we observed a progressive reduction in hand–cursor distance under both SPRING mappings. This strategy contrasts with our previous study in which participants had to move a cursor with mass-spring dynamics as fast as possible from one location to another (Danion et al., 2012). Indeed, during that discrete task we found that the best strategy was to have the object move away from the hand. As participants were given more practice we observed an increase in hand–cursor distance. Here our continuous task may be more challenging in the sense that this time the object has to follow an imposed trajectory, and if the object gets "out of control", cursor tracking will be poor.
Of course, if participants were given days to practice, the best strategy might be to free the object. With perfect control, we would expect participants to generate larger, more rapid hand movements, which would cause the object to move further from the hand. However, over a shorter time scale and in the absence of perfect control, this strategy would likely result in large tracking errors. This reasoning is in line with our observation that hand tangential velocity and its fluctuations were substantially reduced under both SPRING mappings. More generally, this comparison across studies suggests that the way participants handle the properties of nonrigid objects is strongly influenced by the characteristics of the task. Gaze behavior is virtually unaffected by hand–cursor dynamics In contrast to cursor tracking performance, gaze behavior was only modestly influenced by the spring dynamics. We did find that gaze was further away from the target under the SPRING mapping compared to the RIGID mapping. However, when normalized by the target–cursor distance, gaze position was rather similar across our experimental conditions. In all cases we found that gaze was much closer to the target than to the cursor, and that this relative position did not change much with learning. Furthermore, the distribution of gaze indicated that the eye was not alternating between periods of target and cursor fixation, even during the early stage of exposure. This is quite different from the gaze behavior observed when participants learn a completely novel and arbitrary mapping between hand actions and cursor motion, where gaze tends to be directed at the cursor in early learning and then at the target in late learning (Sailer et al., 2005). Overall, these observations suggest that participants employed a rather robust gaze strategy that consists of positioning gaze between cursor and target, but closer to the target.
Although similar findings were observed when participants performed this tracking task with a joystick and a rotated cursor (Gouirand et al., 2019), the current study demonstrates that this strategy (gaze on target first) holds for more complex mappings and full arm movements. As noted above, gaze was not strictly on the target, but rather in between the target and the cursor. A first reason is that, since target motion was not fully predictable, gaze was necessarily lagging behind the target. Second, this behavior is reminiscent of the center-looking strategy that participants often adopt when tracking, with their eyes, multiple objects simultaneously (Fehd & Seiffert, 2008). For instance, when participants track three moving targets surrounded by distractors, fixation is close to the center of the triangle formed by the targets, rather than alternating between individual targets. This behavior is interpreted as a strategy that consists in grouping multiple targets into a single object, and that also limits saccades (among individual targets) during which targets cannot be tracked. Participants in our experiment may have employed a similar strategy, allowing them to track both the target and the cursor while limiting saccades. Under the SPRING mappings, participants were able to improve their tracking performance across trials without directing their gaze at the cursor. This suggests that peripheral vision was sufficient to monitor the cursor and provide the error signals necessary for updating the novel relationship between hand motor commands and their visual consequences.
Work on reaching to static targets has shown that peripheral vision provides precise information about the direction and speed of the cursor controlled by the hand—information that can be used to rapidly update motor commands during the reach (Brenner & Smeets, 2003; de Brouwer, Gallivan, & Flanagan, 2018; Dimitriou, Wolpert, & Franklin, 2013; Franklin & Wolpert, 2008; Knill, Bondada, & Chhabra, 2011; Sarlegna et al., 2003; Saunders & Knill, 2003, 2005). Of course, one can ask why participants do not fixate on the cursor and use peripheral vision to track the target. Presumably, participants tend to fixate on the target because it facilitates the use of extraretinal information (i.e., gaze-related proprioceptive signals and/or efference copies of eye movement commands) in locating the target of their action (Mrotek & Soechting, 2007; Neggers & Bekkering, 2001; Prablanc, Echallier, Komilis, & Jeannerod, 1979; Prablanc, Pélisson, & Goodale, 1986).

Separate contribution of haptics for eye and hand

Based on our previous study, we reasoned that haptic feedback would provide relevant information for cursor tracking (Danion et al., 2012). This reasoning is consistent with our novel finding that the provision of haptic feedback accelerated learning. Indeed, despite similar initial performance, target–cursor distance was smaller in the SPRING-HAPT condition than in the SPRING condition. Regarding the influence of haptic feedback on gaze behavior, our previous study, in which participants had to track, with the eyes, a transiently occluded self-moved target (Danion et al., 2017), showed that haptic feedback was useful under a SPRING mapping. In the current study, however, haptic feedback had no effect on eye–target lag, eye–target distance, or relative gaze position. Although these results extend the view that haptic feedback benefits hand movement control, they do not support a systematic contribution of haptic feedback to eye movement control. 
Moreover, we conclude that haptic feedback can make distinct contributions depending on the effector, even when these effectors need to be coordinated, as is the case during manual tracking of a visual target. The main goal of this study was to investigate free gaze behavior when participants learn a complex hand–cursor mapping in order to track a visual target. Overall, our study makes two main contributions. First, our results indicate that maintaining gaze on the target remains a priority, thereby suggesting that peripheral vision is sufficient to learn new cursor dynamics. Second, despite an intricate relationship between eye and hand movements (Crawford, Medendorp, & Marotta, 2004; de Brouwer, Albaghdadi, Flanagan, & Gallivan, 2018; Johansson, Westling, Bäckström, & Flanagan, 2001), we show that haptic feedback can make distinct contributions to each of these effectors. We would like to thank Martin York, Justin Peterson, and Tayler Jarvis for technical and logistical support. Support for this research was provided by the CNRS (PICS N° 191607), the Natural Sciences and Engineering Research Council of Canada (RGPIN/04837), and the Canadian Institutes of Health Research (82837). JM was supported by the Innovative Training Network Perception and Action in Complex Environment (PACE) that has received funding from the European Union's Horizon 2020 research and innovation program under the Marie Sklodowska-Curie grant agreement N° 642961. This paper reflects only the authors' view; the Research Executive Agency (REA) of the European Commission is not responsible for any use that may be made of the information it contains. FRD and RF designed research; FRD performed research; FRD and JM analyzed data; FRD and JM prepared the figures; FRD, JM, and RF interpreted the results and wrote the paper. Corresponding author: Frederic R. Danion. Email: [email protected]. 
Address: Aix-Marseille Université, CNRS, Institut de Neurosciences de la Timone, Marseille, France. Brenner, E., & Smeets, J. B. J. (2003). Perceptual requirements for fast manual responses. Experimental Brain Research, 153 (2), 246–252, https://doi.org/10.1007/s00221-003-1598-y. Crawford, J. D., Medendorp, W. P., & Marotta, J. J. (2004). Spatial transformations for eye–hand coordination. Journal of Neurophysiology, 92 (1), 10–19, https://doi.org/10.1152/jn.00117.2004. Danion, F., Diamond, J. S., & Flanagan, J. R. (2012). The role of haptic feedback when manipulating nonrigid objects. Journal of Neurophysiology, 107 (1), 433–441, https://doi.org/10.1152/jn.00738.2011. Danion, F. R., & Flanagan, J. R. (2018). Different gaze strategies during eye versus hand tracking of a moving target. Scientific Reports, 8 (1): 10059, https://doi.org/10.1038/s41598-018-28434-6. Danion, F., Mathew, J., & Flanagan, J. R. (2017). Eye tracking of occluded self-moved targets: Role of haptic feedback and hand-target dynamics. eNeuro, 4 (3), 1–12, http://www.eneuro.org/content/early/2017/06/26/ENEURO.0101-17.2017. de Brouwer, A. J., Albaghdadi, M., Flanagan, J. R., & Gallivan, J. P. (2018). Using gaze behavior to parcellate the explicit and implicit contributions to visuomotor learning. Journal of Neurophysiology, 120 (4), 1602–1615, https://doi.org/10.1152/jn.00113.2018. de Brouwer, A. J., Gallivan, J. P., & Flanagan, J. R. (2018). Visuomotor feedback gains are modulated by gaze position. Journal of Neurophysiology, 120 (5), 2522–2531, https://doi.org/10.1152/jn.00182.2018. Dimitriou, M., Wolpert, D. M., & Franklin, D. W. (2013). The temporal evolution of feedback gains rapidly update to task demands. The Journal of Neuroscience: The Official Journal of the Society for Neuroscience, 33 (26), 10898–10909, https://doi.org/10.1523/JNEUROSCI.5669-12.2013. Dingwell, J. B., Mah, C. D., & Mussa-Ivaldi, F. A. (2002). 
Manipulating objects with internal degrees of freedom: Evidence for model-based control. Journal of Neurophysiology, 88 (1), 222–235. Dingwell, J. B., Mah, C. D., & Mussa-Ivaldi, F. A. (2004). Experimentally confirmed mathematical model for human control of a nonrigid object. Journal of Neurophysiology, 91 (3), 1158–1170, https://doi.org/10.1152/jn.00704.2003. Fehd, H. M., & Seiffert, A. E. (2008). Eye movements during multiple object tracking: Where do participants look? Cognition, 108 (1), 201–209, https://doi.org/10.1016/j.cognition.2007.11.008. Flanagan, J. R., Terao, Y., & Johansson, R. S. (2008). Gaze behavior when reaching to remembered targets. Journal of Neurophysiology, 100 (3), 1533–1543, https://doi.org/10.1152/jn.90518.2008. Foulkes, A. J., & Miall, R. C. (2000). Adaptation to visual feedback delays in a human manual tracking task. Experimental Brain Research, 131 (1), 101–110. Franklin, D. W., & Wolpert, D. M. (2008). Specificity of reflex adaptation for task-relevant variability. The Journal of Neuroscience: The Official Journal of the Society for Neuroscience, 28 (52), 14165–14175, https://doi.org/10.1523/JNEUROSCI.4406-08.2008. Gouirand, N., Mathew, J., Brenner, E., & Danion, F. (2019). Eye movements do not play an important role in the adaptation of hand tracking to a visuomotor rotation. Journal of Neurophysiology, 121 (5), 1967–1976, https://doi.org/10.1152/jn.00814.2018. Grigorova, V., & Bock, O. (2006). The role of eye movements in visuo-manual adaptation. Experimental Brain Research, 171 (4), 524–529, https://doi.org/10.1007/s00221-005-0301-x. Hasson, C. J., Nasseroleslami, B., Krakauer, J. W., & Sternad, D. (2012, October). Comparing haptic and visual feedback control of an object with complex dynamics. Paper presented at the Society for Neuroscience 42nd Annual Meeting, New Orleans, LA. Hasson, C. J., Shen, T., & Sternad, D. (2012). Energy margins in dynamic object manipulation. 
Journal of Neurophysiology, 108 (5), 1349–1365, https://doi.org/10.1152/jn.00019.2012. Huang, F. C., Gillespie, R. B., & Kuo, A. D. (2006). Human adaptation to interaction forces in visuo-motor coordination. IEEE Transactions on Neural Systems and Rehabilitation Engineering: A Publication of the IEEE Engineering in Medicine and Biology Society, 14 (3), 390–397, https://doi.org/10.1109/TNSRE.2006.881533. Johansson, R. S., Westling, G., Bäckström, A., & Flanagan, J. R. (2001). Eye–hand coordination in object manipulation. The Journal of Neuroscience, 21 (17), 6917–6932. Knill, D. C., Bondada, A., & Chhabra, M. (2011). Flexible, task-dependent use of sensory feedback to control hand movements. The Journal of Neuroscience: The Official Journal of the Society for Neuroscience, 31 (4), 1219–1237, https://doi.org/10.1523/JNEUROSCI.3522-09.2011. Koken, P. W., & Erkelens, C. J. (1992). Influences of hand movements on eye movements in tracking tasks in man. Experimental Brain Research, 88 (3), 657–664. Landelle, C., Montagnini, A., Madelain, L., & Danion, F. (2016). Eye tracking a self-moved target with complex hand-target dynamics. Journal of Neurophysiology, 116 (4), 1859–1870, https://doi.org/10.1152/jn.00007.2016. Miall, R. C., & Jackson, J. K. (2006). Adaptation to visual feedback delays in manual tracking: Evidence against the Smith predictor model of human visually guided action. Experimental Brain Research, 172 (1), 77–84, https://doi.org/10.1007/s00221-005-0306-5. Miall, R. C., Reckess, G. Z., & Imamizu, H. (2001). The cerebellum coordinates eye and hand tracking movements. Nature Neuroscience, 4 (6), 638–644, https://doi.org/10.1038/88465. Miall, R. C., Weir, D. J., & Stein, J. F. (1993). Intermittency in human manual tracking tasks. Journal of Motor Behavior, 25 (1), 53–63. Mrotek, L. A., & Soechting, J. F. (2007). Target interception: Hand-eye coordination and strategies. 
The Journal of Neuroscience, 27 (27), 7297–7309, https://doi.org/10.1523/JNEUROSCI.2046-07.2007. Nagengast, A. J., Braun, D. A., & Wolpert, D. M. (2009). Optimal control predicts human performance on objects with internal degrees of freedom. PLoS Computational Biology, 5 (6), e1000419, https://doi.org/10.1371/journal.pcbi.1000419. Neggers, S. F., & Bekkering, H. (2001). Gaze anchoring to a pointing target is present during the entire pointing movement and is driven by a non-visual signal. Journal of Neurophysiology, 86 (2), 961–970. Niehorster, D. C., Siu, W. W. F., & Li, L. (2015). Manual tracking enhances smooth pursuit eye movements. Journal of Vision, 15 (15): 11, 1–14, https://doi.org/10.1167/15.15.11. Orban de Xivry, J.-J., Bennett, S. J., Lefèvre, P., & Barnes, G. R. (2006). Evidence for synergy between saccades and smooth pursuit during transient target disappearance. Journal of Neurophysiology, 95, 418–427. Poulton, E. (1974). Tracking skill and manual control. New York, NY: Academic Press. Prablanc, C., Echallier, J. F., Komilis, E., & Jeannerod, M. (1979). Optimal response of eye and hand motor systems in pointing at a visual target. I. Spatio-temporal characteristics of eye and hand movements and their relationships when varying the amount of visual information. Biological Cybernetics, 35 (2), 113–124. Prablanc, C., Pélisson, D., & Goodale, M. A. (1986). Visual control of reaching movements without vision of the limb. I. Role of retinal feedback of target position in guiding the hand. Experimental Brain Research, 62 (2), 293–302. Sailer, U., Flanagan, J. R., & Johansson, R. S. (2005). Eye-hand coordination during learning of a novel visuomotor task. Journal of Neuroscience, 25, 8833–8842. Sarlegna, F., Blouin, J., Bresciani, J.-P., Bourdin, C., Vercher, J.-L., & Gauthier, G. M. (2003). Target and hand position information in the online control of goal-directed arm movements. Experimental Brain Research. Experimentelle Hirnforschung. 
Expérimentation Cérébrale, 151 (4), 524–535, https://doi.org/10.1007/s00221-003-1504-7. Saunders, J. A., & Knill, D. C. (2003). Humans use continuous visual feedback from the hand to control fast reaching movements. Experimental Brain Research, 152 (3), 341–352, https://doi.org/10.1007/s00221-003-1525-2. Saunders, J. A., & Knill, D. C. (2005). Humans use continuous visual feedback from the hand to control both the direction and distance of pointing movements. Experimental Brain Research, 162 (4), 458–473, https://doi.org/10.1007/s00221-004-2064-1. Scott, S. H. (1999). Apparatus for measuring and perturbing shoulder and elbow joint positions and torques during reaching. Journal of Neuroscience Methods, 89 (2), 119–127. Soechting, J. F., Rao, H. M., & Juveli, J. Z. (2010). Incorporating prediction in models for two-dimensional smooth pursuit. PLoS One, 5 (9), e12574, https://doi.org/10.1371/journal.pone.0012574. Streng, M. L., Popa, L. S., & Ebner, T. J. (2018). Modulation of sensory prediction error in Purkinje cells during visual feedback manipulations. Nature Communications, 9 (1): 1099, https://doi.org/10.1038/s41467-018-03541-0. Tramper, J. J., & Gielen, C. C. (2011). Visuomotor coordination is different for different directions in three-dimensional space. The Journal of Neuroscience, 31 (21), 7857–7866, https://doi.org/10.1523/JNEUROSCI.0486-11.2011. Vercher, J. L., & Gauthier, G. M. (1992). Oculo-manual coordination control: Ocular and manual tracking of visual targets with delayed visual feedback of the hand motion. Experimental Brain Research, 90 (3), 599–609. Vercher, J. L., Quaccia, D., & Gauthier, G. M. (1995). Oculo-manual coordination control: Respective role of visual and non-visual information in ocular tracking of self-moved targets. Experimental Brain Research, 103 (2), 311–322. Xia, R., & Barnes, G. (1999). Oculomanual coordination in tracking of pseudorandom target motion stimuli. Journal of Motor Behavior, 31 (1), 21–38. 
Characterization of solvable spin models via graph invariants
Adrian Chapman and Steven T. Flammia
Centre for Engineered Quantum Systems, School of Physics, The University of Sydney, Sydney, Australia
Published: 2020-06-04, volume 4, page 278
Eprint: arXiv:2003.05465v2
Doi: https://doi.org/10.22331/q-2020-06-04-278
Citation: Quantum 4, 278 (2020).

Exactly solvable models are essential in physics. For many-body spin-1/2 systems, an important class of such models consists of those that can be mapped to free fermions hopping on a graph. We provide a complete characterization of models which can be solved this way. Specifically, we reduce the problem of recognizing such spin models to the graph-theoretic problem of recognizing line graphs, which has been solved optimally. A corollary of our result is a complete set of constant-sized commutation structures that constitute the obstructions to a free-fermion solution. We find that symmetries are tightly constrained in these models. Pauli symmetries correspond to either: (i) cycles on the fermion hopping graph, (ii) the fermion parity operator, or (iii) logically encoded qubits. Clifford symmetries within one of these symmetry sectors, with three exceptions, must be symmetries of the free-fermion model itself. We demonstrate how several exact free-fermion solutions from the literature fit into our formalism and give an explicit example of a new model previously unknown to be solvable by free fermions.

Popular summary

An important situation in theoretical physics, called a duality, occurs when the behaviors of two physical systems perfectly coincide. 
A physical system is any isolated section of the universe, such as a collection of gas particles in a box, or vibrational waves traveling along a guitar string. A duality between two systems allows physicists to talk about the physics of one system in terms of the other system. Systems which are related in this way can be surprisingly different, and finding dualities is often a key step to understanding the behaviors of both. In the scenarios where one system looks very complicated, the other system can be very simple, and vice versa. By thinking in terms of the simpler system, physicists can bypass a great deal of complexity to understand the more complicated one. In this work, we examine a certain class of dualities between two systems: quantum spin lattices and noninteracting fermions. A spin lattice consists of many interacting compass needles, or spins, arranged in some structure. Each spin feels the competing, or "frustrating", influence from many different nearby spins, making the behavior of this model appear very complicated. A noninteracting fermion system consists of particles hopping between sites in a similarly discrete arrangement. Because the particles are fermions, they cannot occupy the same site, but they otherwise do not influence each other. In contrast to the spin model, the noninteracting nature of the fermion model makes it much simpler to work with. By considering the precise frustration structure of the spin model as a kind of network, we apply tools from network theory to find collections of spins which behave like emergent fermions, allowing us to extract the behavior of these complicated models in terms of the simpler noninteracting fermions. Though these types of dualities have been explored in the past, we developed a new framework to systematically find them. We expect these results to lead to the design of new quantum materials for the development of a quantum computer. 
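The "frustration structure as a network" idea can be made concrete in a few lines of code. The sketch below (our own illustration, not the authors' code) builds the commutation graph of a spin model: each Pauli term is a vertex, and two vertices are joined when the terms anticommute. In the paper's framework, the model admits a free-fermion solution exactly when this graph is a line graph:

```python
def pauli_to_bits(pauli):
    """Encode a Pauli string like 'XZI' as symplectic bit vectors (x, z)."""
    x = tuple(1 if p in "XY" else 0 for p in pauli)
    z = tuple(1 if p in "ZY" else 0 for p in pauli)
    return x, z

def anticommute(p, q):
    """Two Pauli strings anticommute iff the symplectic form
    x1.z2 + z1.x2 is odd (mod 2)."""
    (x1, z1), (x2, z2) = pauli_to_bits(p), pauli_to_bits(q)
    s = sum(a * b for a, b in zip(x1, z2)) + sum(a * b for a, b in zip(z1, x2))
    return s % 2 == 1

def frustration_graph(terms):
    """Return the edge set {(i, j)} joining anticommuting Hamiltonian terms."""
    return {(i, j)
            for i in range(len(terms))
            for j in range(i + 1, len(terms))
            if anticommute(terms[i], terms[j])}
```

For example, the three-qubit terms `["XXI", "IYY", "ZIZ"]` pairwise anticommute, so their frustration graph is a triangle. Deciding whether such a graph is a line graph can then be done with an optimal recognition algorithm, as discussed in the paper.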
BibTeX data:
@article{Chapman2020characterizationof,
  doi = {10.22331/q-2020-06-04-278},
  url = {https://doi.org/10.22331/q-2020-06-04-278},
  title = {Characterization of solvable spin models via graph invariants},
  author = {Chapman, Adrian and Flammia, Steven T.},
  journal = {{Quantum}},
  issn = {2521-327X},
  publisher = {{Verein zur F{\"{o}}rderung des Open Access Publizierens in den Quantenwissenschaften}},
  volume = {4},
  pages = {278},
  month = jun,
  year = {2020}
}
This Paper is published in Quantum under the Creative Commons Attribution 4.0 International (CC BY 4.0) license. Copyright remains with the original copyright holders such as the authors or their institutions. 
Péter Pál Pálfy

Péter Pál Pálfy (Debrecen, 23 August 1955) is a Hungarian mathematician working in algebra, more precisely in group theory and universal algebra. Between 2006 and 2018 he served as the director of the Alfréd Rényi Institute of Mathematics.[1]

Career

Pálfy graduated from Eötvös University, Budapest in 1978 and started working at the Alfréd Rényi Institute of Mathematics. He was deputy director of the Institute from 1991 to 1997. From 2000 to 2005 he held a full-time professorship at Eötvös University. In 2006, he returned to the Alfréd Rényi Institute of Mathematics as director, holding this position until 2018, as well as a part-time professorship at Eötvös University. Pálfy obtained his DSc degree in 1997. He was elected a corresponding member of the Hungarian Academy of Sciences in 2004 and a full member in 2010.[2]

References
1. "Rényi – The History of the Institute". renyi.hu. Retrieved 2023-04-01.
2. "MTA – Members of MTA". mta.hu. Retrieved 2016-01-15.

External links
• Personal webpage, Alfréd Rényi Institute of Mathematics
\begin{document} \maketitle

\begin{abstract} We prove some de Rham theorems on bounded subanalytic submanifolds of $\R^n$ (not necessarily compact). We show that the $L^1$ cohomology of such a submanifold is isomorphic to its singular homology. In the case where the closure of the underlying manifold has only isolated singularities, this implies that the $L^1$ cohomology is Poincar\'e dual to $L^\infty$ cohomology (in dimension $j < m-1$). In general, Poincar\'e duality is related to the so-called $L^1$ Stokes' property. For oriented manifolds, we show that the $L^1$ Stokes' property holds if and only if integration realizes a nondegenerate pairing between $L^1$ and $L^\infty$ forms. This is the counterpart of a theorem proved by Cheeger on $L^2$ forms. \end{abstract}

\section{Introduction}

Given a Riemannian manifold $M$, the $L^1$ forms are the differential forms $\omega$ on $M$ satisfying \begin{equation}\label{eq_l1_condition}\int_M |\omega|\, d vol_M <\infty,\end{equation} where $|\omega|$ is the norm of the differential form $\omega$ derived from the Riemannian metric of $M$. The smooth $L^1$ forms having an $L^1$ exterior derivative constitute a cochain complex which gives rise to cohomology groups, {\bf the $L^1$ cohomology groups of $M$}. In this paper we first prove a de Rham theorem for the $L^1$ cohomology:

\begin{thm}\label{thm_intro}Let $M$ be a bounded subanalytic submanifold of $\R^n$. The $L^1$ cohomology of $M$ is isomorphic to its singular cohomology. \end{thm}

Here, $M$ is equipped with the Riemannian metric inherited from the ambient space. In particular, the $L^1$ cohomology groups are finitely generated and are topological invariants of $M$. Forms with integrability conditions have been the focus of interest of many authors. Let us mention, among many others, \cite{bgm, c1, c2, c3, cgm, d, weber, hp, s1, s2, y}. First, integration is necessary to construct a pairing, crucial to define a Poincar\'e duality morphism, which we study below. 
Secondly, integrability conditions are of foremost importance in geometric analysis and differential equations on manifolds. The $L^1$ condition is of metric nature. The metric geometry of singularities is much more challenging than the study of their topology. For instance, it is well known that subanalytic sets may be triangulated and hence are locally homeomorphic to cones. This property is very important, for it reduces the study of the topology of the singularity to the study of the topology of the link. The story is more complicated if one is interested in describing the aspect of singularities from the metric point of view: a triangulation may not be achieved without drastically affecting the metric structure of the singular set. The proof of this theorem thus requires new techniques, for we do not restrict ourselves to metrically conical singularities. In \cite{vlt,vpams}, the author introduced and constructed some triangulations enclosing enough information to determine all the metric properties of the singularities. The idea was to control the way the metric is affected by the triangulation. The proof of Theorem \ref{thm_intro} requires an accurate description of the metric type of subanalytic singularities. Using the techniques developed in \cite{vlt}, \cite{vpams} and \cite{vlinfty}, we show that the conical structure of subanalytic singularities is not only topological but Lipschitz, in a very explicit sense that we shall define in this paper. This is achieved in section \ref{sect_lip} of this paper, and it is the keystone of the proof of Theorem \ref{thm_intro}. This section is of independent interest, offering a nice new description of the Lipschitz geometry of subanalytic sets. We improve the results of \cite{vlinfty}, where it was shown that every subanalytic germ may be retracted in a Lipschitz way (see also \cite{sv}). The history of $L^p$ forms on singular varieties began when J. Cheeger started constructing a Hodge theory for singular compact varieties.
He first computed in \cite{c1,c2} the $L^2$ cohomology groups of varieties with metrically conical singularities. It turned out to be related to intersection cohomology, making it a good candidate for a generalized Hodge theory on singular varieties \cite{c3,c4,c5,cgm}. Since Cheeger's work on $L^2$ forms, many authors have investigated $L^p$ forms on singular varieties \cite{bgm,d,weber,hp,s1,s2,y} (among many others). Nevertheless, all of them focus on particular classes of Riemannian manifolds, with strong restrictions on the metric near the singularities, as in the case of the so-called $f$-horns or metrically conical singularities. In the present paper we only assume that the given set is subanalytic. Recently, the author of the present paper computed the $L^\infty$ cohomology groups of any subanalytic pseudomanifold. Let us recall the de Rham theorem achieved in \cite{vlinfty}. \begin{thm}\label{thm_intro_linfty}\cite{vlinfty} Let $X$ be a compact subanalytic pseudomanifold. Then, for any $j$: $$H_\infty ^j(X_{reg}) \simeq I^{t}H^j (X).$$ Furthermore, the isomorphism is induced by the natural map provided by integration on allowable simplices. \end{thm} Here, $H_\infty^\bullet$ denotes the $L^\infty$ cohomology and $I^tH^j(X)$ the intersection cohomology of $X$ in the maximal perversity. The definitions of these cohomology theories are recalled in sections \ref{sect_ih} and \ref{sect_linfty} below. We write $X_{reg}$ for the nonsingular part of $X$, i. e. the set of points at which $X$ is a smooth manifold. Intersection homology was introduced by M. Goresky and R. MacPherson. What makes it very attractive is that they showed in their fundamental paper \cite{ih1} that it satisfies Poincar\'e duality for a quite large class of sets (recalled in Theorem \ref{thm_poincare_ih}), enclosing all the complex analytic sets (see also \cite{ih2}).
In view of the above, these two de Rham theorems raise the very natural question of whether we can hope for Poincar\'e duality between $L^1$ and $L^\infty$ cohomology. Actually, the two above theorems, via Goresky and MacPherson's generalized Poincar\'e duality, admit the following corollary. \begin{cor}\label{cor_poincare_duality_intro} Let $X$ be an oriented subanalytic pseudomanifold with isolated singularities. Then, $L^1$ cohomology is Poincar\'e dual to $L^\infty$ cohomology in dimension $j<m-1$, i. e. for any $j < m-1$: $$H_{(1)} ^j(X_{reg}) \simeq H_\infty^{m-j} (X_{reg}).$$ \end{cor} More generally, if the singular locus is of dimension $k$, then the $L^1$ cohomology is dual to the $L^\infty$ cohomology in dimension $j <m-k-1$. This is due to the fact that in this case intersection homology coincides with the usual homology of $X_{reg}$ (in dimension $j$). Intersection homology turns out to be very useful to assess the lack of duality between $L^1$ and $L^\infty$ cohomology. We see that the obstruction for this duality to hold is of purely topological nature. Although the $L^1$ and $L^\infty$ conditions are closely related to the metric structure of the singularities, the above theorems show that knowledge of the topology of the singularities is enough to ensure Poincar\'e duality. It is worthy of notice that the topology of $X_{reg}$ alone is not enough. In his study of $L^2$ cohomology, Cheeger also pointed out a problem that may arise on singular varieties, even with conical singularities: the $L^2$ Stokes' property may fail. Roughly speaking, this property says that the exterior differential operator is self-adjoint on $L^2$ forms (up to sign, considering $(m-j)$-forms as the dual of $j$-forms, see (\ref{eq_Stokes'_intro})). This property is crucial in Hodge theory, which yields Poincar\'e duality as a byproduct. Cheeger investigated the case of conical singularities in \cite{c2} and completely clarified the situation.
He showed that the $L^2$ Stokes' property holds on conical singularities if and only if Poincar\'e duality holds. Thus, in this case, a nice Hodge theory may be performed, and Cheeger was able to prove that every cohomology class has a unique harmonic representative. Cheeger's $L^2$ Stokes' property is also crucial because it makes it possible to define a pairing on the $L^2$ cohomology groups by integrating wedge products of forms. The Poincar\'e duality isomorphism on $L^2$ cohomology then results from this pairing, which provides a very natural isomorphism. The $L^p$ Stokes' property was then studied by Y. Youssin on $f$-horns in \cite{y}, who obtained an analogous result. Therefore, in our framework, the latter duality for $L^1$ cohomology very naturally raises the question of whether the $L^1$ Stokes' property holds and whether integration provides an isomorphism between $L^1$ and $L^\infty$ cohomology. In order to be more specific, let us explicitly define the {\bf $L^1$ Stokes' property} by saying that it holds (in dimension $j$) on a $C^\infty$ manifold $M$ of dimension $m$ whenever for any $C^\infty$ $L^1$ $j$-form $\alpha$ with $d\alpha$ $L^1$ we have: \begin{equation}\label{eq_Stokes'_intro}\int_M \alpha \wedge d\beta =(-1)^{j+1} \int_M d \alpha \wedge \beta,\end{equation} for any $L^\infty$ $(m-j-1)$-form $\beta$ with $d\beta$ $L^\infty$. For smooth forms on compact manifolds without boundary this is always true by Stokes' formula. Somehow, the question is whether the singularities behave like a boundary or whether the closure of $M$ may behave like a manifold. This question arises especially in the case where the singular locus of the closure of $M$ is of low dimension. We shall answer this question in a very precise way, giving an $L^1$ counterpart of Cheeger's theorem on the $L^2$ Stokes' property. Again, our theorems on $L^1$ cohomology hold for any bounded subanalytic manifold (metrically conical or not).
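To fix ideas, here is an elementary one-dimensional illustration (a toy example that we add for the reader's convenience). Take $M:=(0;1)\subset \R$, so that $m=1$ and $\delta M=\{0;1\}$, and let $j=0$. For the $0$-form $\alpha(x):=x$ and $\beta \equiv 1$, both sides of (\ref{eq_Stokes'_intro}) can be computed directly: $$\int_M \alpha \wedge d\beta =0, \qquad (-1)^{j+1}\int_M d\alpha \wedge \beta=-\int_0 ^1 dx=-1,$$ so that (\ref{eq_Stokes'_intro}) fails: the points of $\delta M$ behave like a boundary. This agrees with Theorem \ref{thm_intro_l1_stokes_property} below, since $\dim \delta M=0$ is not less than $m-j-1=0$.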
Given a submanifold $M\subset \R^n$, we shall write $\delta M$ for the set $cl(M)\setminus M$, where $cl(M)$ stands for the topological closure of $M$. We shall prove: \begin{thm}\label{thm_intro_l1_stokes_property} Let $j<m$ and let $M$ be a bounded subanalytic oriented manifold. The $L^1$ Stokes' property holds for $j$-forms iff $\dim \delta M <m-j-1$. \end{thm} In particular, if $cl(M)$ has only isolated singularities, then the $L^1$ Stokes' property holds in any dimension $j<m-1$. In this case, integration of forms induces the Poincar\'e duality isomorphism of Corollary \ref{cor_poincare_duality_intro}. It is noteworthy that the obstruction for the $L^1$ Stokes' property to hold is also purely topological. Knowledge of the dimension of the singular locus alone is enough to ensure that this property holds, no matter how fast the volume is collapsing near the singularities. \subsection*{Dirichlet $L^1$ cohomology.} Let $M\subset \R^n$ be a bounded subanalytic submanifold. We just explained that, in the case of non-closed oriented manifolds, the $L^1$ Stokes' property may fail. This "boundary phenomenon" may appear near the singularities, preventing the $L^1$ classes from being Poincar\'e dual to the $L^\infty$ classes. On compact manifolds with boundary, "ideal boundary conditions" are usually imposed in order to overcome this kind of problem. They give rise to the so-called {\bf Dirichlet cohomology}. The Dirichlet forms are those whose restriction to the boundary is identically zero. These are also the forms satisfying the $L^2$ Stokes' property. In our setting, if $\omega$ is a form defined on $M$, it does not make sense to require that it vanish on $\delta{M}$. {\bf Dirichlet $L^1$ cohomology} is thus usually defined (see for instance \cite{iw}) as the cohomology of the $L^1$ forms $\alpha$ (with $d\alpha$ $L^1$) satisfying (\ref{eq_Stokes'_intro}). This is the biggest space of $L^1$ forms on which $d$ is self-adjoint (up to sign, identifying $L^\infty$ with the dual of $L^1$).
This cohomology theory is discussed in section \ref{sect_l1sp}. We will denote the Dirichlet $L^1$ cohomology of a submanifold $M\subset \R^n$ by $H_{(1)}^{m-j}(M;\delta M)$. Now, as in the case of manifolds with boundary, Lefschetz-Poincar\'e duality holds in general: \begin{thm}\label{thm_intro_poincare_dirichlet} For any bounded subanalytic orientable submanifold $M\subset \R^n$:$$H_{(1)}^j(M;\delta M) \simeq H^{m-j}_{\infty}(M).$$ \end{thm} It is worthy of notice that this duality is a general fact about bounded subanalytic manifolds: we do not assume that the closure of $M$ is a pseudomanifold. The version stated in Theorem \ref{thm_Poincare duality_dirichlet} is actually even stronger. In particular, by Goresky and MacPherson's generalized Poincar\'e duality, the Dirichlet $L^1$ cohomology is isomorphic to intersection homology in the zero perversity, and Theorem \ref{thm_intro_linfty} and Theorem \ref{thm_intro_poincare_dirichlet} admit the following interesting immediate corollary. \begin{cor}\label{cor_dirichlet_de_rahm_intro}(De Rham theorem for Dirichlet $L^1$ cohomology) Let $X$ be a subanalytic bounded orientable pseudomanifold. We have: $$ H_{(1)}^{j}(X_{reg};X_{sing})\simeq I^0 H^j(X_{reg}).$$ \end{cor} Here $X_{sing}$ stands for the singular locus and $X_{reg}$ denotes its complement in $X$. \subsection*{Content of the paper.} Section \ref{sect_lip} introduces and establishes the "Lipschitz conic structure of subanalytic sets" (definition \ref{dfn conical}). We prove in section \ref{sect_weakly_l1} some basic results on $L^1$ cohomology and establish Theorem \ref{thm_intro} in section \ref{sect_de_rham_l1}. Poincar\'e duality for $L^1$ cohomology is then discussed in section \ref{sect_cor_poinc}. In section \ref{sect_dirichlet} we introduce the $L^1$ Dirichlet cohomology groups and establish the relative form of Lefschetz-Poincar\'e duality claimed in Theorem \ref{thm_intro_poincare_dirichlet}.
We then study the $L^1$ Stokes' property, proving Theorem \ref{thm_intro_l1_stokes_property} in section \ref{sect_l1sp}. We end this paper with a concrete example, the suspension of the torus, on which we discuss all the results of this paper. \subsection*{Notations and conventions.} In the sequel, all the considered sets and maps will be subanalytic (if not otherwise specified), except the differential forms. By "{\bf subanalytic}" we mean "globally subanalytic", i. e. remaining subanalytic after compactifying $\R^n$ (by $ \mathbb{P}^n$). Given a set $X\subset \R^n$, we denote by $C^j(X)$ the singular cochain complex and by $H^j(X)$ the cohomology groups. Simplices are defined as continuous (subanalytic) maps $\sigma :\Delta _j \to X$, where $\Delta_j$ is the standard simplex. Given two nonnegative functions $\xi:X\to \R$ and $\eta:X\to \R$, we will write $\xi \sim \eta$ if there is a positive constant $C$ such that $\xi \leq C\eta$ and $\eta \leq C\xi$. We write $[\xi;\eta]$ for the set $\{(x;y) \in X\times \R: \xi(x)\leq y\leq \eta(x)\}$ and define similarly the open interval $(\xi;\eta)$. Given a (subanalytic) set $X$, we denote by $X_{reg}$ the set of points near which $X$ is a $C^\infty$ manifold and by $X_{sing}$ its complement in $X$. The sets $\delta M$ and $cl(M)$ are as defined above. By manifold we will mean $C^\infty$ manifold. We shall say that a function $\xi:X\to \R$ is {\bf Lipschitz} if there is a constant $C$ such that for any $x$ and $x'$ in $X$: $$|\xi(x)-\xi(x')|\leq C |x-x'|.$$ A map $f:X\to \R^k$ is Lipschitz if all its components are Lipschitz, and a homeomorphism $h$ is {\bf bi-Lipschitz} if both $h$ and $h^{-1}$ are Lipschitz. We shall write $S^{n-1}(x_0;\varepsilon)$ for the sphere of radius $\varepsilon>0$ centered at $x_0 \in \R^{n}$ and $B^n(x_0;\varepsilon)$ for the corresponding ball. We will write $L(x_0;X)$ for the {\bf link} of $X$ at $x_0$.
It is the subset $S^{n-1}(x_0;\varepsilon)\cap X$, where $\varepsilon>0$ is small enough. By \cite{vpams}, this subset is, up to a subanalytic bi-Lipschitz map, independent of $\varepsilon>0$. \section{On the Lipschitz geometry of subanalytic sets.}\label{sect_lip} The results of this section will be very important to compute the $L^1$ cohomology groups later on. It is well known that subanalytic sets are locally homeomorphic to cones. However, it is not true in general that subanalytic germs of singularities are bi-Lipschitz homeomorphic to cones. We describe the metric types of subanalytic germs in a very precise way. This is very important since the $L^1$ condition heavily relies on the metric. Roughly speaking, we show that, given a subanalytic germ $X$, we can find a subanalytic homeomorphism from a cone (over the link) to $X$ such that the eigenvalues of the pullback by this homeomorphism of the metric induced by $\R^n$ on $X$ increase as one moves away from the origin. This improves significantly the results of \cite{vlt} and \cite{vlinfty}, where a Lipschitz strong deformation retraction onto the origin was constructed. Given $n>1$ and a positive constant $R$ we set: $$\C_n(R):=\{(x_1;x') \in \R\times \R^{n-1}: 0 \leq |x'| \leq R x_1\,\}.$$ For $n=1$, we just define $\C_1$ as the positive $x_1$-axis. \subsection{Regular lines.} We start by recalling a result of \cite{vlinfty}. \begin{dfn}\label{boule reguliere} Let $X$ be a subset of $\R^{n}$. An element $\lambda$ of $S^{n-1} $ is said to be {\bf regular for $X$} if there is a positive number $\alpha$ such that: $$dist(\lambda;T_x X_{reg}) \geq \alpha,$$ for any $x$ in $X_{reg}$. \end{dfn} Regular lines do not always exist, as is shown by the simple example of a circle. Nevertheless, given a subanalytic set of empty interior, we can find, up to a bi-Lipschitz homeomorphism, a line which is regular. This is established by Theorem $3.13$ of \cite{vlt}.
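To illustrate Definition \ref{boule reguliere}, let us point out an elementary example (which we add for the reader's convenience): if $X\subset \R^2$ is the graph of a Lipschitz function with Lipschitz constant $C$, then every tangent line $T_x X_{reg}$ is spanned by a vector $(1;s)$ with $|s|\leq C$, so that $$dist(e_2;T_x X_{reg})\geq \frac{1}{\sqrt{1+C^2}}>0,$$ and $e_2$ is regular for $X$.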
Theorem $3.13$ of \cite{vlt} was subsequently improved in \cite{vlinfty} into a statement that we shall need in its full generality. It is recalled in Lemma \ref{prop proj reg}. To state this lemma, we need the following definition. \begin{dfn} Let $A, B \subset \R^n$. A map $h:A \to B$ is {\bf $x_1$-preserving} if it preserves the first coordinate in the canonical basis of $\R^n$. \end{dfn} We denote by $\pi_n:\R^{n}\to \R^{n-1}$ the canonical projection. In the lemma below, all the considered germs are germs at the origin. \begin{lem}\label{prop proj reg}\cite{vlinfty} Given germs $X_1,\dots,X_s \subset \C_{n}(R)$, there exist a germ of an $x_1$-preserving bi-Lipschitz homeomorphism $h:\C_n(R) \to \C_{n}(R)$, with $R>0$, and a cell decomposition $\mathcal{D}$ of $\R^n$ such that: \begin{enumerate} \item $\mathcal{D}$ is compatible with $h(X_1),\dots, h(X_s)$ \item $e_n$ is regular for any cell of $\mathcal{D}$ in $\C_{n}(R)$ which is a graph over a cell of $\C_{n-1}(R)$ of $\mathcal{D}$ \item Given finitely many germs of nonnegative functions $\xi_1,\dots,\xi_l$ on $\C_{n}(R)$, we may assume that on each cell $D$ of $\mathcal{D}$, every germ $\xi_i\circ h$ is $\sim$ to a function of the form:\begin{equation}\label{eq prep}|y-\theta(x)|^r a(x)\end{equation} (for $(x;y) \in \R^{n-1} \times \R$) where $a,\theta:\pi_n(D) \to \R$ are functions with $\theta$ Lipschitz and $r \in \Q$. \end{enumerate} \end{lem} \begin{rem}\label{rmk graphes en plus} Given a family of Lipschitz functions $f_1, \dots, f_k$ defined over $\R^n$, we can find some Lipschitz functions $\xi_1\leq \dots\leq \xi_l$ and a cell decomposition $\mathcal{D}$ of $\R^{n-1}$ such that, over each set $[\xi_{i|D};\xi_{i+1|D}]$ delimited by the graphs of two consecutive functions, with $D\in \mathcal{D}$, the functions $|q_{n+1}-f_i(x)|$ (where $q=(x;q_{n+1})$) are comparable to each other (for the relation $\leq$) and comparable to the functions $f_i \circ \pi_n$.
Indeed, it suffices to choose a cell decomposition $\mathcal{D}$ compatible with the sets $f_i=f_j$ and to add the graphs of the functions $f_i$, $f_i+f_j$ and $\frac{f_i+f_j}{2}$. We may then use $\min$ and $\max$ to transform this family into an ordered family (for $\leq$). \end{rem} \subsection{Lipschitz conic structure of subanalytic sets.} This section is crucial in the proof of our de Rham theorems. We introduce and establish what we call "the Lipschitz conic structure" of subanalytic sets. Let $X\subset \R^n$ be of dimension $m$ and let $x_0\in cl(X) $. \begin{dfn} A {\bf tame basis} on a manifold $M$ is a basis $\lambda _1,\dots,\lambda _m$ ($m=\dim M$) of bounded subanalytic $1$-forms on $ M$ such that: \begin{equation}\label{eq_tame_basis}|\wedge_{i=1} ^m \lambda_i|\geq \varepsilon >0, \end{equation} on $M$. \end{dfn} Let us emphasize that we do not assume the tame bases to be continuous; but, as they are assumed to be subanalytic, they are implicitly required to be smooth almost everywhere. This will be enough for us since, for integrability conditions, only the behavior almost everywhere is relevant. Likewise, in the definition below, the $\varphi_i$'s need not be continuous; indeed, only the generic values of these functions really matter, since (\ref{item_dfn_metric_conical}) of the definition below is required almost everywhere. We shall also pull back forms via subanalytic maps. The pullback will be well defined almost everywhere since, once again, subanalytic mappings are generically smooth.
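Before going further, let us give an elementary example of a tame basis (added here for the reader's convenience): on any bounded open subset of $\R^m$, the coordinate forms $dx_1,\dots,dx_m$ constitute a tame basis, since $|dx_i|=1$ for every $i$ and $|dx_1 \wedge \dots \wedge dx_m|=1$, so that (\ref{eq_tame_basis}) holds with $\varepsilon=1$. By contrast, the forms $x_1 dx_1, dx_2,\dots,dx_m$ are bounded and subanalytic but do not form a tame basis near $\{x_1=0\}$, as $|x_1 dx_1\wedge dx_2 \wedge \dots \wedge dx_m|=|x_1|$ is not bounded below by a positive constant.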
\begin{dfn}\label{dfn conical} We say that $X$ is {\bf Lipschitz conical} at $x_0 \in X$ if there exist a positive real number $\varepsilon$ and a Lipschitz homeomorphism $$h: (0;\varepsilon) \times L(x_0;X)\to X \cap B^n(x_0;\varepsilon) \setminus \{x_0\},$$ with $d(x_0;h(t;x))=t$, such that we can find some positive functions $ \varphi_1,\dots,\varphi_{m-1}:(0;\varepsilon) \times L(x_0;X) \to \R$ for which we have: \begin{enumerate} \item\label{item_dfn_phi_decreasing} The $\varphi_i(t;x)$'s decrease to zero as $t$ goes to zero, for any $x$, \item The $\varphi_i(t;x)$'s are bounded below on any closed set disjoint from $ \{t=0\}$.\item\label{item_dfn_metric_conical} There is a tame basis $ \lambda_1,\dots,\lambda_{m-1}$ of $L(x_0;X_{reg})$ such that if $\theta_i:=h^{-1*}( \varphi_{i}\cdot \lambda _i )$ then $(h^{-1*}dt; \theta_1;\dots;\theta_{m-1})$ is a tame basis on a dense subset of $X_{reg}$. \end{enumerate} \end{dfn} \begin{thm}\label{thm Lipschitz conic structure} Every (subanalytic) set is Lipschitz conical at any point. \end{thm} \begin{proof} We shall consider sets $A\subset \R^n$ as families parameterized by $x_1$ and write $A^\varepsilon$ for the "fiber" at $\varepsilon \in \R$, that is to say: $$A^\varepsilon:=\{x \in \R^{n-1}: (\varepsilon;x) \in A\}.$$ We will actually prove by induction on $n$ the following statements. {\bf$(\textrm{A}_n)$} Let $X_1,\dots,X_s$ be finitely many subsets of $\C_n(R)$ and let $\xi_1,\dots,\xi_l$ be some bounded functions.
There exist positive real numbers $R$ and $\varepsilon$, together with a Lipschitz $x_1$-preserving homeomorphism $$h:(0;\varepsilon) \times B^{n-1}(0;R) \to \C_n(R) \setminus \{0\},$$ such that for every $j\in \{1,\dots,s\}$, we can find some positive functions $\varphi_{1,j}, \dots,\varphi_{\mu_j-1,j}$ ($\mu_j:=\dim X_j$) on $(0;\varepsilon) \times X_{j,reg}^\varepsilon $ with: \begin{enumerate}\item $h((0;\varepsilon) \times X_{j}^\varepsilon)=X_{j} \cap \{0< x_1<\varepsilon\} $ \item\label{item_phi_decreasing} The $\varphi_{i,j}(t;x)$'s decrease to zero as $t$ goes to zero, for any $x \in X_{j,reg}^\varepsilon$ \item\label{item_bounded_below} The $\varphi_{i,j}(t;x)$'s are bounded below on any closed set disjoint from $\{t= 0\}$ \item\label{item_metric_conical} There is a tame basis $ \lambda_{1,j},\dots,\lambda_{\mu_j-1,j}$ of $X^\varepsilon _{j,reg}$ such that if $\theta_{i,j}:= h^{-1*}( \varphi_{i,j}\cdot \lambda _{i,j}) $ then $(h^{-1*}dt; \theta_{1,j};\dots;\theta_{\mu_j-1,j})$ is a tame basis of a dense subset of $X_{j,reg}$. \item \label{item_tilda_decreasing} There is a constant $C$ such that for any $i \leq l$ and any $ 0< \tau \leq u \leq t$ we have:\begin{equation}\label{eq decroissance fn up to contant}C_\tau \xi_i(h(\tau;x)) \leq \xi_i(h(u;x)) \leq C \xi_i(h(t;x)),\end{equation} for some positive constant $C_\tau$. \end{enumerate} Before proving these statements, let us explain why they imply the desired result. Let $X\subset \R^n$. We can assume that $0\in X$ and work near the origin. The set $$\hat{X}:=\{(x_1;x) \in \R \times X: |x|=x_1\} $$ is a subset of $\C_{n+1}(R)$ (for $R>1$) to which we can apply {\bf$(\textrm{A}_{n+1})$}. Observe that $\hat{X}$ is bi-Lipschitz equivalent to $X$. This means that it is enough to check properties $(1-3)$ of definition \ref{dfn conical} for $\hat{X}$. But they are implied by $(\ref{item_phi_decreasing})$, $(\ref{item_bounded_below})$ and $(\ref{item_metric_conical})$ of {\bf$(\textrm{A}_{n+1})$}.
The assertion (\ref{item_tilda_decreasing}) is not necessary to prove that $X$ is Lipschitz conical. It is assumed in order to carry out the proof of (\ref{item_phi_decreasing}) during the induction step. As {\bf$(\textrm{A}_n)$} obviously holds in the case where $n=1$ ($h$ being the identity map), we fix some $n>1$. We fix some subsets $X_1,\dots,X_s$ of $\C_n(R)$, for $R>0$, and some subanalytic bounded functions $\xi_1, \dots,\xi_l:\C_n(R)\to\R$. Apply Lemma \ref{prop proj reg} to the family constituted by the $X_i$'s and the union of the zero loci of the $\xi_i$'s. We get an $x_1$-preserving bi-Lipschitz map $h:\C_n(R)\to \C_n(R)$ and a cell decomposition $\mathcal{D}$ such that $(1)$, $(2)$, and $(3)$ of the latter lemma hold. As we may work up to an $x_1$-preserving bi-Lipschitz map, we will identify $h$ with the identity map. Hence, thanks to $(3)$ of the latter lemma, we may assume that each function $\xi_i$ is $\sim$ to a function as in (\ref{eq prep}). Let $\Theta$ be a cell of $\mathcal{D}$ in $\C_n (R)$ which is the graph of a function $\eta:\Theta' \to \R$, with $\Theta'\in \mathcal{D}$. By $(2)$ of Lemma \ref{prop proj reg}, $\eta$ is then necessarily a Lipschitz function. Consequently, it may be extended to a Lipschitz function on the whole of $\C_{n-1}(R)$ whose graph still lies in $\C_n (R)$. Repeating this for all the cells of $\mathcal{D}$ which are graphs over a cell of $\mathcal{D}$ in $\R^{n-1}$, we get a family of functions $\eta_1,\dots,\eta_v$. Using the operators $\min$ and $\max$, we may transform this family into an ordered one (for $\leq$), so that, keeping the same notations for the new family, we will assume that it satisfies $\eta_1\leq \dots \leq \eta_v$. Fix an integer $1 \leq j < v $ and a connected component $B$ of $(\eta_j;\eta_{j+1})$. Let $\Theta$ be a cell of $\mathcal{D}$ and set for simplicity $D:=\Theta \cap B$. Up to constants, the functions $\xi_k$ are as in (\ref{eq prep}) on $D$, i. e.
there exist $(n-1)$-variable functions on $D$, say $\theta_k$ and $a_k$, $k=1,\dots,l$, with $\theta_k$ Lipschitz, such that: $$\xi_k(x;y) \sim (y-\theta_k(x))^{\alpha_k} a_k(x),$$ for $(x;y)\in D \subset \R^{n-1}\times \R$. We shall apply the induction hypothesis to all the $a_k$'s (obtained for all such sets $D$). Unfortunately, this is not enough if one wants to obtain that the $\xi_k$'s satisfy $(5)$, due to the term $(y-\theta_k)$ in the decomposition of the $\xi_k$'s just above. Therefore, before applying the induction hypothesis, we need to complete the family to which we will apply $(5)$ of the induction hypothesis with some extra bounded $(n-1)$-variable functions that we now introduce. As the zero loci of the $\xi_k$'s are included in the graphs of the $\eta_i$'s, we have on $D$, for every $k$, either $\theta_k \leq \eta_j$ or $\theta_k \geq \eta_{j+1}$. We will assume for simplicity that $\theta_k \leq \eta_j$. This means that for $(x;y)\in D \subset \R^{n-1}\times \R$: \begin{equation}\label{eq min}\xi_k(x;y) \sim \min ((y-\eta_j(x)) ^{\alpha_k} a_k(x) ;(\eta_j(x)-\theta_k(x)) ^{\alpha_k} a_k(x)),\end{equation} if $\alpha_k$ is negative and \begin{equation}\label{eq max}\xi_k(x;y) \sim \max ((y-\eta_j(x)) ^{\alpha_k} a_k(x) ;(\eta_j(x)-\theta_k(x)) ^{\alpha_k} a_k(x)),\end{equation} in the case where $\alpha_k$ is nonnegative. First, consider the following functions: \begin{equation}\label{eqdefkappak}\kappa_k(x):=(\eta_j(x)-\theta_{k}(x))^{\alpha_k} a_k(x), \qquad k=1,\dots ,l.\end{equation} For every $k$, the function $\kappa_k$ is bounded, for it is equivalent to the function $\xi_k(x;\eta_{j}(x))$, which is bounded since $\xi_k$ is. Complete the family $\kappa$ by adding the functions $(\eta_{j+1}-\eta_j)$ as well as the functions $\min(a_k;1)$.
The union of all these families (the family $\kappa$ just obtained depends on $D$), obtained for every such set $D$ (intersection of a connected component of $(\eta_j;\eta_{j+1})$, for some $j$, with some cell of $\mathcal{D}$), provides us with a finite collection of functions $\sigma_1,\dots,\sigma_p$. We now turn to the construction of the desired homeomorphism. The cell decomposition $\mathcal{D}$ induces a cell decomposition of $\R^{n-1}$. Refine it into a cell decomposition $\mathcal{E}$ compatible with the zero loci of the functions $(\eta_j-\eta_{j+1})$. Apply the induction hypothesis to the family constituted by the cells of $\mathcal{E}$ which lie in $\C_{n-1}(R)$. This provides a homeomorphism $$h:(0;\varepsilon) \times B^{n-2}(0;R) \to \C_{n-1}(R) .$$ We are first going to lift $h$ to a homeomorphism $\widetilde{h}: (0;\varepsilon) \times B^{n-1}(0;R) \to \C_{n}(R) $. Thanks to the induction hypothesis, we may assume that the functions $\sigma_1,\dots,\sigma_p$ satisfy (\ref{eq decroissance fn up to contant}). We lift $h$ as follows. For simplicity, we define $\eta_j'$ as the restriction of $\eta_j$ to $\C_n(R)\cap \{x_1=\varepsilon\}$. On $(\eta_j;\eta_{j+1})$ we set $$\nu(q):= \frac{y-\eta_{j}(x)}{\eta_{j+1}(x)-\eta_{j}(x)},$$ where $q=(x;y) \in \R^{n-1}\times \R$. Then, for $(t;q) \in (0;\varepsilon) \times (\eta_j';\eta_{j+1}')$, $$\widetilde{h}(t;q):=(h(t;x);\nu(q)(\eta_{j+1}(h(t;x))-\eta_j(h(t;x))) +\eta_j(h(t;x))).$$ By virtue of the induction hypothesis, the inequality (\ref{eq decroissance fn up to contant}) is fulfilled by the functions $(\eta_{i+1}-\eta_i)$. Therefore, as $h$ is Lipschitz, we see that $\widetilde{h}$ is Lipschitz as well. As $(1)$ holds by construction for every cell, it holds for all the $X_j$'s. We now turn to the definition of the functions $\varphi_{i,j}$. Actually, as all the $X_i$'s are unions of cells, it is enough to carry out the proof on every cell $E\in \mathcal{E}$, i. e.
to define some functions $\varphi_{1,E},\dots,\varphi_{\mu-1,E}$ (where $\mu=\dim E$), decreasing to $0$ with respect to $t$, and a tame basis $\lambda_{1,E},\dots,\lambda_{\mu-1,E}$ such that the family $(\widetilde{h}^{-1*}dt; \theta_{1,E};\dots;\theta_{\mu-1,E})$, where $\theta_{i,E}:=\widetilde{h}^{-1*}(\varphi_{i,E}\cdot\lambda_{i,E})$, is a tame basis of $E$. Indeed, the desired functions $\varphi_{i,j}$ can then be defined as the functions induced by all the functions $\varphi_{i,E}$ (defined on $\widetilde{h}^{-1}(E)$), for all the cells $E$ of dimension $\mu_j$ included in $X_j$ (as pointed out before definition \ref{dfn conical}, only the generic values of $\varphi_{i,j}$ actually matter). Fix a cell $E\subset \C_n(R)$, set $E':=\pi(E)$ and $\mu':=\dim E'$, where $\pi:\C_n(R)\to \C_{n-1}(R)$ is the obvious orthogonal projection. Now let $\varphi_{1,E'},\dots,\varphi_{\mu'-1,E'}$ be the functions given by the induction hypothesis. We distinguish two cases. \underline{{\it First case}}: $\mu'=\mu-1$. Let us set: $$\varphi_{i,E}(t;x):=\varphi_{i,E'}(t;\pi(x)).$$ As $\mu'=\mu-1$, the cell $E$ is included in $[\eta_{j|E'};\eta_{j+1|E'}]$, for some $j<v$, and we also set: \begin{equation}\label{eq_def_varphi_i} \varphi_{\mu-1,E}(t;x):=\frac{\eta_{j+1}(h(t;x))-\eta_j(h(t;x))}{\eta_{j+1}(h(\varepsilon;x))-\eta_j(h(\varepsilon;x))}. \end{equation} Let us show that these functions satisfy $(\ref{item_phi_decreasing})$ and $(\ref{item_bounded_below})$. Recall that we applied $(\ref{item_tilda_decreasing})$ of the induction hypothesis to the function $(\eta_{j+1}-\eta_j)(x)$. If a function $\xi$ satisfies $(\ref{eq decroissance fn up to contant})$ then $$\xi(h(s;x)) \sim \inf_{s \leq t< \varepsilon} \xi(h(t;x)),$$ and consequently $\xi\circ h$ is $\sim$ to an increasing function. Therefore, changing $\varphi_{i,E}$ for an equivalent function if necessary, we may assume that it is increasing with respect to $t$.
As the graphs of the $\eta_i$'s are included in $\C_n(R)$, the $\eta_i$'s must vanish at the origin. Consequently, $\varphi_{\mu-1,E}$ tends to zero as $t$ goes to zero, for any $x \in E$, which yields $(\ref{item_phi_decreasing})$. As $(\eta_{j+1}-\eta_j)$ satisfies (\ref{eq decroissance fn up to contant}), the $\varphi_i$'s are bounded away from zero on $ (\tau;\varepsilon)\times E^\varepsilon$ for every $0< \tau <\varepsilon $, showing (\ref{item_bounded_below}). We are now going to define our tame basis of $1$-forms $\lambda_{i,E}$ in order to prove (\ref{item_metric_conical}). Denote by $\pi_E:E \to E'$ the restriction of the orthogonal projection. Let us now set on $(0;\varepsilon) \times E^\varepsilon$: \begin{equation}\label{eq_lambda_iE}\lambda_{i,E}:=\pi_{|E}^* \lambda_{i,E'}.\end{equation} Then set for $x\in E'$ and $a \in [0;1]$: $$\eta_{j,a}(x)=(\eta_{j+1}(x)-\eta_j(x))a +\eta_j(x).$$ Denote by $E_a$ the graph of $\eta_{j,a}$. Now put $$\lambda_{\mu-1,E}(q)(u)=0,$$ if $u$ is tangent to $(E_{\nu(q)})^\varepsilon$, and finally set $$\lambda_{\mu-1,E}(q)(e_n)=1.$$ As the $\eta_{j,a}$'s are Lipschitz with a Lipschitz constant bounded with respect to $a$, the angle between $e_n$ and the tangent to the graph of $\eta_{j,\nu(q)}$ is bounded below away from zero, and therefore the norm of $\lambda_{\mu-1,E}$ is bounded. The Lipschitz character of the $\eta_{j,a}$'s also implies that the family $\lambda_{1,E},\dots,\lambda_{\mu-1,E}$ is a tame basis of $E^\varepsilon$. By definition of $\widetilde{h}$ and $\varphi_{\mu-1,E}$, we have $|d_{(t;x)} \widetilde{h}(e_n)|\sim \varphi_{\mu-1,E}(t;x)$, so that: $$|\widetilde{h}^{-1*} \lambda_{\mu-1,E}|\sim \frac{1}{\varphi_{\mu-1,E}}.$$ The forms $\theta_{i,E}:=\widetilde{h}^{-1*}(\varphi_{i,E}\cdot \lambda_{i,E})$ are thus bounded. For the same reasons as for the $\lambda_i$'s, the family $(\widetilde{h}^{-1*}dt; \theta_{1,E};\dots;\theta_{\mu-1,E})$ is a tame basis of $E$. \underline{\textit{Second case}}: $\mu=\mu'$.
In this case we only have to define $(\mu'-1)$ functions and $(\mu'-1)$ $1$-forms. This may be done as in the first case (as in (\ref{eq_def_varphi_i}) and (\ref{eq_lambda_iE})). It is indeed much easier to check that $(2)$, $(3)$, and $(4)$ hold, since, as $\pi_E$ is bi-Lipschitz, the required properties, which are true downstairs for $h$ thanks to the induction hypothesis, obviously continue to hold upstairs for $\widetilde{h}$. This completes the proof of $(2)$, $(3)$, and $(4)$. Finally, we have to check that the $\xi_k$'s fulfill (\ref{eq decroissance fn up to contant}) for $\widetilde{h}$. As the $\xi_k$'s are bounded, it is enough to check it for the functions $\min(\xi_k;1)$. We check it on a given cell $E\in \mathcal{E}$. Fix an integer $1 \leq k \leq l$. By the induction hypothesis we know that the $\kappa_i$'s (see (\ref{eqdefkappak})) satisfy (\ref{eq decroissance fn up to contant}). Remark that the function $\nu(\widetilde{h}(t;q))$ is constant with respect to $t$. Observe that by (\ref{eq min}) and (\ref{eq max}) it is enough to show that the functions $\min((y-\eta_j(x))^{\alpha_k} a_k(x);1)$ and the functions $\min(|\theta_k-\eta_j|(x) ^{\alpha_k} a_k(x);1)$ satisfy (\ref{eq decroissance fn up to contant}). As for the latter functions this follows from the induction hypothesis and the choice of the $\kappa_i$'s, we only need to focus on the former ones. For simplicity we set $$F(x;y):=(y-\eta_j(x))^{\alpha_k} a_k(x),$$ and $$G(x):=(\eta_{j+1}-\eta_j)(x)^{\alpha_k} \cdot a_k(x).$$ We have to show the desired inequality for $\min(F;1)$. We have: \begin{equation}\label{eq F G}F(q)=\nu(q)^{\alpha_k} \cdot G(x),\end{equation} where again $q=(x;y)$. As $\nu(\widetilde{h}(t;q))$ is constant with respect to $t$, this implies that: \begin{equation}\label{eq2 F G}F(\widetilde{h}(t;q))=\nu(q)^{\alpha_k} \cdot G(h(t;x)).\end{equation} We assume first that $\alpha_k$ is negative.
Thanks to the induction hypothesis we know that for $0< \tau \leq u \leq t$: $$C_\tau \min (G(h(\tau;x));1)\leq \min(G(h(u;x));1)\leq C \min (G(h(t;x));1),$$ for some positive constants $C_\tau,C$. But this implies (multiplying by $\nu^{\alpha_k}$ and applying (\ref{eq F G}) and (\ref{eq2 F G})) that $$C_\tau \min(F(\widetilde{h}(\tau;q));\nu^{\alpha_k}(q);1) \leq \min(F(\widetilde{h}(u;q));\nu^{\alpha_k}(q);1)\leq C \min (F(\widetilde{h}(t;q));\nu^{\alpha_k}(q);1). $$ But, as $\alpha_k$ is negative, $\min(F;\nu^{\alpha_k};1)=\min(F;1)$ and we are done. We now assume that $\alpha_k$ is nonnegative. This implies that $F$ is a bounded function (by (\ref{eq max})). Moreover, by (\ref{eq max}), it is enough to show the desired inequality for $F$, and thanks to (\ref{eq F G}), it actually suffices to show it for $G$. As $G$ is one of the $\kappa_i$'s, the result follows from the induction hypothesis. This yields (\ref{eq decroissance fn up to contant}) for $\widetilde{h}$, establishing (\ref{item_tilda_decreasing}). \end{proof} \begin{rem}\label{rem_conical} \label{rmk_conic_structure} \begin{enumerate} \item As in \cite{vlt}, the $\varphi_i$'s could be expressed as quotients of sums of products of powers of the variable $t$ and distances to some subsets of the link.\item Observe that in the above proof the induction hypothesis is stronger than the statement of the theorem: we have proved the Lipschitz conic structure of finitely many sets simultaneously, and the homeomorphism is defined on the ambient space as well. \item \label{rem_conical_item3} Denote by $\rho_X$ the Riemannian metric induced by the ambient space on $ X_{reg}$.
Condition $(3)$ of definition \ref{dfn conical} clearly implies the following: \begin{equation}\label{eq_dfn_conical} h^*\rho_{X}^2\approx dt^2+\sum_{i=1} ^{m-1}\varphi_i ^2 (t;x) \cdot \lambda_i^2(x),\end{equation} for $(t;x)$ in a dense subset of $ (0;\varepsilon)\times L(x_0;X_{reg}).$ As the $\varphi_i$'s are bounded above and below away from $\{t=0\}$, we see that the above mapping $h$ is a quasi-isometry on any closed subset of its domain disjoint from $ \{t=0\}$. \end{enumerate} \end{rem} \begin{thm}\label{thm_hardt} Let $x_0 \in X\subset \R^n$ and set $\rho(x):=|x-x_0|$. There exists $\varepsilon>0$ such that $\rho$ is bi-Lipschitz trivial above $[\nu;\varepsilon]$ for any $0<\nu <\varepsilon$, i.e. for every such $\nu$ we can find a bi-Lipschitz homeomorphism $$h: \rho^{-1}([\nu;\varepsilon]) \to \rho^{-1}(\varepsilon)\times [\nu;\varepsilon], $$ with $\pi_2(h(x))=\rho(x)$, where $\pi_2:\rho^{-1}(\varepsilon)\times [\nu;\varepsilon] \to [\nu;\varepsilon]$ is the projection onto the second factor. \end{thm} This theorem is a particular case of the bi-Lipschitz version of Hardt's Theorem proved in \cite{vlt}. It can also easily be derived from the proof of Theorem \ref{thm Lipschitz conic structure}. The subanalyticity of the isotopy will be useful in section \ref{sect_hom} (recall that, except for the differential forms, everything is implicitly assumed to be subanalytic). \section{$L^1$ cohomology groups}\label{sect_weakly_l1} In this section $M\subset \R^n$ stands for a bounded (subanalytic) submanifold. Such a manifold has a natural structure of Riemannian manifold, giving rise to a measure on $M$ that we denote by $dvol_M$. Below, the word $L^1$ will always mean $L^1$ with respect to this measure. \subsection*{The $L^1$ cohomology.} As we said in the introduction, the {\bf $L^1$ forms on $M$} are the forms $\omega$ on $M$ satisfying (\ref{eq_l1_condition}).
We denote by $(\Omega_{(1)} ^\bullet(M);d)$ the differential complex consisting of the $C^\infty$ $L^1$ forms $\omega$ such that $d\omega$ is $L^1$. The $L^1$ cohomology groups, denoted $H_{(1)}^j(M)$, are the cohomology groups of the differential complex $(\Omega_{(1)} ^\bullet(M);d)$. We endow this de Rham complex with the natural norm: $$|\omega|_1:=\int_M |\omega|\, dvol_M+ \int_M |d\omega|\,d vol_M.$$ In this section we prove some preliminary results about $L^1$ cohomology that we shall need to establish our de Rham theorem in the next section. \subsection{$L^1$ cohomology with compact support.}\label{sect_cohomo_support_compact} We now define the $L^1$ forms with compact support. We prove some basic facts, relying on the bi-Lipschitz triviality result presented in Theorem \ref{thm_hardt}. Let us point out that our notion of forms with compact support is slightly different from the usual one, since we allow the forms to be nonzero near the singularities of $cl(M)$. The support is indeed a subset of $cl(M)$. Let $M\subset \R^n$ be a submanifold and set $X:=cl(M)$. \begin{defs}\label{dfn_support} Let $U$ be an open subset of $M$ and let $V\supset U$ be an open subset of $X$. Let $\omega$ be a differential form on $U$. The {\bf support of $\omega$ in $V$} is the closure in $V$ of the set of points of $U$ at which $\omega$ is nonzero. We denote by $\Omega^j_{(1),V}(U)$ the set of $C^\infty$ $j$-forms $\omega$ on $U$ with compact support in $V$ such that $\omega$ and $d\omega$ are $L^1$, and by $H^j_{(1),V}(U)$ the resulting cohomology groups. \end{defs} For instance, $\Omega^j_{(1),X}(M)$ stands for the set of $L^1$ $j$-forms (with an $L^1$ derivative) having compact support in $X$. Such forms have to be zero in a neighborhood of infinity (in $M$). However, they need not be zero near the points of $\delta M$. \subsection{Weakly differentiable forms.} The homeomorphism that we constructed in Theorem \ref{thm Lipschitz conic structure} is not smooth.
Thus, we will need to work with weakly differentiable forms, i.e. forms that are differentiable only in the sense of currents. Therefore, the first step is to prove that the bounded weakly differentiable forms give rise to the same cohomology theory. We will follow an argument similar to the one used by Youssin in \cite{y}. Given a smooth manifold $M$ (possibly with boundary), we denote by $\Omega_{0,\infty} ^{j}(M)$ the set of $C^\infty$ $j$-forms on $M$ with compact support (in $M$). \begin{dfn} Let $U$ be an open subset of $\R^n$. A differential $j$-form $\alpha$ on $U$ is called {\bf weakly differentiable} if there exists a $(j+1)$-form $\omega$ such that for any form $\varphi\in \Omega_{0,\infty} ^{n-j-1} (U)$: $$\int_{U} \alpha \wedge d\varphi =(-1)^{j+1}\int_{U} \omega \wedge \varphi.$$ The form $\omega$ is then called {\bf the weak exterior derivative of $\alpha$} and we write $\omega=\overline{d} \alpha$. A continuous differential $j$-form $\alpha$ on $M$ is called {\bf weakly differentiable} if it gives rise to weakly differentiable forms via the coordinate systems of $M$. \end{dfn} We denote by $\overline{\Omega}_{(1)} ^{j}(M)$ the set of measurable weakly differentiable $j$-forms, locally bounded in $M$, which are $L^1$ and which have an $L^1$ weak exterior derivative. Together with $\overline{d}$, they constitute a cochain complex. We denote by $\overline{H}_{(1)}^j (M)$ the resulting cohomology groups. We endow this de Rham complex with the corresponding norm: $$|\omega|_1:=\int_M |\omega|\, dvol_M+ \int_M |\overline{d}\omega|\,d vol_M.$$ Similarly, we may introduce the theory of {\bf weakly differentiable $L^1$ forms with compact support in $V$}, for which we shall write $\overline{\Omega}^j_{(1),V}(U)$ and $\overline{H}^j_{(1),V}(U)$ (see definition \ref{dfn_support}).
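To fix ideas, here is a minimal one-dimensional illustration of the definition (our own example, not taken from the text): on $U=(-1;1)$, the continuous $0$-form $\alpha(x)=|x|$ is weakly differentiable.

```latex
% Here n = 1 and j = 0, so the test forms are functions
% \varphi \in \Omega_{0,\infty}^{0}(U), and the defining identity reads
%   \int_U \alpha \, d\varphi = - \int_U \omega \, \varphi .
% Splitting at the origin and integrating by parts on each half
% (the boundary terms vanish, \varphi having compact support):
\int_{-1}^{1} |x|\,\varphi'(x)\,dx
  \;=\; \int_{-1}^{0}\varphi(x)\,dx \;-\; \int_{0}^{1}\varphi(x)\,dx
  \;=\; -\int_{-1}^{1}\operatorname{sgn}(x)\,\varphi(x)\,dx .
% Hence \overline{d}\,|x| = \operatorname{sgn}(x)\,dx, a bounded L^1 form,
% so |x| belongs to \overline{\Omega}_{(1)}^{0}(U), although it is not C^1.
```

This is just the usual distributional derivative; in higher degree the definition only adds the sign $(-1)^{j+1}$ coming from the wedge with $d\varphi$.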
In the case of compact smooth manifolds it is easily checked that the two cohomology theories coincide: \begin{lem}\label{lem_l1_manifold}If $K$ is a smooth compact manifold (possibly with boundary) then: \begin{equation}\label{eq isom bar}\overline{H}_{(1)} ^j(K) \simeq H^j(K).\end{equation}\end{lem} \begin{proof} The proof follows the classical argument. As in the case of smooth forms (see for instance \cite{bl}), it is enough to prove the Poincar\'e Lemma. Both of the above cohomology theories are invariant under smooth homotopies. Any point of $K$ has a smoothly contractible neighborhood. As $K$ is compact, locally $L^1$ implies $L^1$. \end{proof} We are now going to see that the isomorphism also holds in the noncompact case: \begin{pro}\label{pro l1 isom smooth} Let $M\subset \R^n$ be a $C^\infty$ submanifold and let $V$ be an open subset of $cl(M)$. The inclusions $\Omega_{(1)} ^\bullet (M) \hookrightarrow \overline{\Omega}_{(1)} ^\bullet (M)$ and $\Omega_{(1),V} ^\bullet (V\cap M) \hookrightarrow \overline{\Omega}_{(1),V} ^\bullet (V \cap M)$ induce isomorphisms between the cohomology groups. \end{pro} \begin{proof} As the proof is the same for the two inclusions, we shall focus on the former one. It is enough to show that, for any form $\alpha \in \overline{\Omega}_{(1)} ^{j} (M)$ with $\overline{d}\alpha \in \Omega_{(1)} ^{j+1} (M)$ (i.e. $\alpha$ is weakly smooth and $\overline{d}\alpha$ is smooth), there exists $\theta \in \overline{\Omega}_{(1)} ^{j-1} (M)$ such that $(\alpha+\overline{d}\theta)$ is $C^\infty$. For this purpose, we prove by induction on $i$ the following statements. {\bf$(\textrm{H}_i)$} Fix a form $\alpha \in \overline{\Omega}_{(1)} ^{j} (M)$ with $\overline{d}\alpha \in \Omega_{(1)} ^{j+1} (M)$. Consider an exhaustive sequence of compact smooth manifolds with boundary $K_i \subset M$ such that for each $i$, $K_i$ is included in the interior of $K_{i+1}$ and $\cup K_i=M$.
Then, for any integer $i$, there exists a form $\theta_i \in \overline{\Omega}_{(1)} ^{j-1}(M)$ such that $supp \; \theta_i \subset Int(K_{i}) \setminus K_{i-2}$ and $|\theta_i|_1\leq \frac{1}{2^i}$ and such that $\alpha_i:=\alpha + \sum_{k=1} ^i \overline{d}\theta_k$ is smooth in a neighborhood of $K_{i-1}$. Before proving these statements, observe that $\theta=\sum_{i=1} ^\infty \theta_i$ is the desired form (this sum is locally finite). Let us assume that $\theta_{i-1}$ has been constructed, $i \geq 1$ (we may set $K_0=K_{-1}=K_{-2}=\emptyset$). Observe that by (\ref{eq isom bar}), there exists a smooth form $\beta \in \Omega^{j}_{(1)} (K_i)$ such that $d\beta=\overline{d}\alpha$. This means that $(\alpha_{i-1}-\beta)$ is $\overline{d}$-closed, and by (\ref{eq isom bar}) there is a smooth form $\beta' \in \Omega^{j}_{(1)} (K_i)$ such that $$\alpha_{i-1}-\beta=\beta' +\overline{d} \gamma,$$ with $\gamma \in \overline{\Omega}^{j-1}_{(1)} (K_i)$ (if $j=0$ then $\alpha_{i-1}-\beta$ is locally constant and hence smooth). Thanks to the induction hypothesis there exists an open neighborhood $V$ of $K_{i-2}$ on which $\alpha_{i-1}$ is smooth. This implies that $\overline{d} \gamma$ is smooth on $V$. Therefore, by induction, we know that we can add an exact form $d\sigma$ to $\gamma$ to get a form smooth on $V$. Multiplying $\sigma$ by a function with support in $V$ which is $1$ in a neighborhood $W$ of $K_{i-2}$, we get a form $\sigma'$ on $M$ such that $(\overline{d}\sigma'+\gamma)$ is smooth on $W$. This means that we can assume that $\gamma$ is smooth on an open neighborhood $W$ of $K_{i-2}$, possibly replacing $\gamma$ by $(\overline{d}\sigma'+\gamma)$; we do so without changing the notation. By means of a convolution product with bump functions, for any $\varepsilon>0$, we may construct a smooth form $\gamma_\varepsilon$ such that $|\gamma_\varepsilon-\gamma|_{1}\leq \varepsilon$.
Consider a smooth function $\phi$ which is $1$ on a neighborhood of $(M \setminus W)\cap K_{i-1}$ and with support in $int(K_i) \setminus K_{i-2}$. Then set: $$\theta_i(x):=\phi(x)(\gamma_\varepsilon-\gamma)(x).$$ If $\varepsilon$ is chosen small enough, $|\theta_i|_1\leq \frac{1}{2^i}$. On a neighborhood of $(M \setminus W)\cap K_{i-1}$, because $\phi\equiv 1$, we have $\alpha_{i-1}+\overline{d}\theta_i=\beta+\beta'+d\gamma_\varepsilon$, which is smooth. The form $(\alpha_{i-1}+\overline{d}\theta_i)$ is smooth on $W$ as well, since $\alpha_{i-1}$ and $\theta_i$ are both smooth there. \end{proof} \subsection{Weakly smooth forms and bi-Lipschitz maps.} Given two open subsets $U$ and $V$ of $\R^n$, it is well known that any subanalytic map $h:U \to V$ is smooth almost everywhere. Therefore, any form $\omega$ on $V$ may be pulled back to a form $h^*\omega$ on $U$, defined almost everywhere. We are going to see that if $h$ is locally bi-Lipschitz, then the pull-back of a smooth form is weakly smooth (Proposition \ref{prop_pullback_weakly smooth forms}). \begin{dfn} Let $\Sigma$ be a stratification of $U\subset \R^k$ and let $h:U \to \R^n$ be smooth on strata. The map $h$ is {\bf horizontally $C^1$ (with respect to $\Sigma$)} if, for any sequence $(x_l)_{l\in \N}$ in a stratum $S$ of $\Sigma$ tending to some point $x$ in a stratum $S'$ and for any sequence $u_l \in T_{x_l} S$ tending to a vector $u$ in $T_x S'$, we have $$\lim d_{x_l} h_{|S}(u_l)=d_x h_{|S'} (u).$$ \end{dfn} Horizontally $C^1$ maps have been introduced by David Trotman and Claudio Murolo in \cite{mt}. They will be useful to show that the pull-back of a weakly differentiable $L^1$ form by a subanalytic bi-Lipschitz map (not everywhere smooth) is weakly differentiable. The following lemma will be needed. Similar results were proved in \cite{sv}, where the theory of stratified forms is investigated and a de Rham type theorem for these forms is proved.
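As an elementary illustration (our own, not from the text), the Lipschitz map $h(x)=|x|$ on $\R$, which is not $C^1$ at the origin, is horizontally $C^1$ with respect to the obvious stratification:

```latex
% Stratify U = \R by \Sigma = \{ (-\infty;0), \{0\}, (0;\infty) \};
% h is smooth on each stratum.  The only nontrivial sequences are
% x_l \in S = (0;\infty) (or (-\infty;0)) tending to x = 0 \in S' = \{0\}.
% Since T_0 S' = \{0\}, any sequence u_l \in T_{x_l}S = \R converging to
% a vector u \in T_0 S' must converge to u = 0, whence
\lim_{l\to\infty} d_{x_l} h_{|S}(u_l) \;=\; \lim_{l\to\infty} (\pm u_l)
   \;=\; 0 \;=\; d_0 h_{|S'}(0).
% The condition only constrains vectors tangent to the limit stratum.
```

This already shows how the condition is weaker than $C^1$ regularity, which is why every Lipschitz map can be stratified so as to satisfy it (Lemma \ref{lem_h_hor_C1} below).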
\begin{lem}\label{lem_h_hor_C1} Let $h:U\to \R^m$ be a Lipschitz map. There exists a stratification of $U$ such that $h$ is horizontally $C^1$ with respect to this stratification. \end{lem} \begin{proof}Consider a Whitney $(a)$ stratification $\Sigma_h$ of the graph of $h$ (see for instance \cite{bcr, tesc} for the definition of the Whitney $(a)$ condition and the construction of such a stratification). Let $\pi_1$ (resp. $\pi_2$) be the projection onto the source (resp. target) space of $h$. The image of $\Sigma_h$ under $\pi_1$ gives rise to a stratification $\Sigma$ of $U$. Let us prove that $h$ is horizontally $C^1$ with respect to this stratification. Fix a stratum $S$ of this stratification, a sequence $x_l \in S$ tending to $x$ belonging to a stratum $S'$, as well as a sequence $u_l \in T_{x_l} S$ of vectors tending to some $u \in T_x S'$. Let $Z$ be the stratum which projects onto $S$ via $\pi_1$. For every $l$, there is a unique vector $v_l \in T_{(x_l;h(x_l))} Z$ which projects onto $u_l$. As $h$ is Lipschitz, the norms of the $v_l$'s are bounded above, and we may assume that $v_l$ converges to a vector $v$. The vector $v$ then necessarily projects onto $u$. We claim that $v$ is tangent to the stratum $Z'$ of $\Sigma_h$ containing $(x;h(x))$. Indeed, otherwise there would be a vector $w$ in $T_{(x;h(x))}Z'$, which is included in $\tau=\lim T_{(x_l;h(x_l))} Z$ by the Whitney $(a)$ condition, such that $(w-v)$ is a nonzero element of the kernel of $\pi_1$, in contradiction with the fact that $h$ is Lipschitz (the graph of a Lipschitz map cannot have a vertical limit tangent vector). This shows the claim, and consequently: $$\lim d_{x_l} h_{|S} (u_l)=\lim \pi_2(v_l)=\pi_2(v)= d_{x} h_{|S'}(u),$$ since $v$ is tangent to $Z'$. \end{proof} We shall need the following fact on subanalytic homeomorphisms. It seems that it could be improved, but it will be enough for our purpose.
\begin{pro}\label{prop_pullback_weakly smooth forms} Let $U$ and $V$ be open subsets of $\R^n$ and let $\omega$ be a bounded weakly differentiable $j$-form on $V$ with $\overline{d}\omega$ bounded. If $h:U\to V$ is a locally bi-Lipschitz map, then $h^*\omega$ is weakly differentiable and $\overline{d} h^*\omega=h^* \overline{d} \omega$, almost everywhere. \end{pro} \begin{proof}Take $\varphi \in \Omega_{0,\infty} ^{n-j-1} (U)$. \underline{{\it First case:}} assume that $\omega$ is smooth. Let $\rho$ be the function defined by the distance to the boundary of $U$ and set $U^\varepsilon := \{\rho \geq \varepsilon\} $. By Lemma \ref{lem_h_hor_C1}, $h$ is horizontally $C^1$ with respect to some stratification of $U$. Consequently, the forms $ h^*\omega$ and $h^*d\omega$ are continuous at almost every point of $cl(U^\varepsilon)$ (it is a manifold with boundary a.e.). Hence, so are $h^*\omega \wedge \varphi$ and $h^*d\omega \wedge \varphi$. The form $h^*\omega$ is smooth almost everywhere. By Stokes' Formula for stratified forms \cite{sv} (see also \cite{l}), $$ \int_{U^\varepsilon}d( h^*\omega \wedge \varphi)= \int_{\rho =\varepsilon} h^*\omega \wedge \varphi=0,$$ for $\varepsilon>0$ small enough, since $\varphi$ has compact support in $U$. Now, integrating by parts we have for $\varepsilon>0$ small enough: $$(-1)^{j+1}\int_{U} h^*\omega \wedge d \varphi = \int_U d h^*\omega \wedge \varphi - \int_{U} d( h^*\omega \wedge \varphi)= \int_U d h^*\omega \wedge \varphi. $$ This completes the proof of our first case.
In general, if $\omega$ is not smooth but just weakly smooth, as $\varphi$ is smooth and $h^{-1}$ bi-Lipschitz, $ h^{-1*} d \varphi$ is weakly smooth (by the {\it First case} applied to $\varphi$ and $h^{-1}$) and we may write: $$\int_{U} h^*\omega \wedge d \varphi =\int_V \omega \wedge h^{-1*} d \varphi = \int_V \omega \wedge \overline{d} h^{-1*} \varphi ,$$ and, again integrating by parts: $$ \int_V \omega \wedge \overline{d} h^{-1*} \varphi =(-1)^{j+1}\int_V \overline{d} \omega \wedge h^{-1*} \varphi=(-1)^{j+1}\int_{U} h^* \overline{d} \omega \wedge \varphi.$$ \end{proof} \subsection{Subanalytic bi-Lipschitz maps and $L^1$ cohomology.}\label{sect_hom} In general, if $f:M\to N$ is a weakly smooth map between smooth manifolds and if $\omega$ is an $L^1$ form on $N$, then $f^*\omega$ is not necessarily an $L^1$ form on $M$, even if $f$ has bounded derivatives. Nevertheless, if $f$ is a diffeomorphism and if $|d_x f^{-1}|$ is bounded above, then the pullback of an $L^1$ form is $L^1$. In particular, if $h$ is a subanalytic bi-Lipschitz map then, by Proposition \ref{prop_pullback_weakly smooth forms}, $h^*\omega$ is a weakly smooth $L^1$ form (it is well defined almost everywhere). This means that any subanalytic bi-Lipschitz map $h:M\to N$ induces some maps $$h^{*\bullet}:\overline{\Omega}_{(1)}^\bullet(N) \to\overline{\Omega}_{(1)}^\bullet(M),$$ pulling back the forms. These mappings induce mappings in cohomology, which are obviously isomorphisms since $h$ is invertible. Fix a $C^\infty$ submanifold $M\subset \R^n$. Let $x_0 \in cl(M)$, and set $M^\varepsilon :=M\cap B^n(x_0;\varepsilon)$ as well as $N^\varepsilon:=M\cap S^{n-1}(x_0;\varepsilon)$.
\begin{pro}\label{pro_neigh} For any positive $\varepsilon$ small enough, there exists a fundamental system of neighborhoods $(U_i)_{i\in \N}$ of $N^\varepsilon$ such that: $$H_{(1)} ^\bullet (U_i\cap M^\varepsilon) \simeq H_{(1)} ^\bullet (L(x_0;M)).$$ \end{pro} \begin{proof} By Proposition \ref{pro l1 isom smooth}, it is enough to show the result for the $L^1$ cohomology of weakly smooth forms. Apply Theorem \ref{thm_hardt} to $cl(M)$. Then set $$U_i:=\rho^{-1}((\varepsilon-\frac{\varepsilon}{2i};2\varepsilon)),$$ for each positive integer $i$ (with the notation of that theorem). Now the bi-Lipschitz homeomorphism provided by Theorem \ref{thm_hardt} induces an isomorphism (as explained in the paragraph preceding the proposition) between $\overline{H}_{(1)}^j(U_i\cap M^\varepsilon)$ and $\overline{H}_{(1)} ^j(N^{\nu} \times (\varepsilon-\frac{\varepsilon}{2i};\varepsilon))$, for any $\nu \in (\varepsilon-\frac{\varepsilon}{2i};\varepsilon)$. It is routine to check that the latter is isomorphic to $\overline{H}_{(1)} ^j(N^\nu)$. \end{proof} \begin{rem}\label{rmk_restriction_map} We recall that the link is defined as the intersection of the set with a small sphere; say it is $ N^\nu$. In the above proposition, the isomorphism is induced by restriction. Of course, the restriction of an $L^1$ form on $M^\varepsilon$ has no reason to give rise to an $L^1$ form on $N^{\nu}$, but every class has a representative which is $L^1$ in restriction to $N^\nu$, since the isomorphism $$\overline{H}_{(1)}^j(N^{\nu})\simeq \overline{H}^j_{(1)}(N^{\nu} \times (\varepsilon-\frac{\varepsilon}{2i};\varepsilon))$$ involved in the above proof is itself induced by the restriction. \end{rem} \subsection{An exact sequence nearby singularities.}\label{sect_cohomo_cpct_supp_local} The letter $M\subset \R^n$ still stands for a $C^\infty$ submanifold. We shall point out an exact sequence near a singular point of the closure of $M$.
Fix $x_0\in X$ and set $M^\varepsilon:=B^n(x_0;\varepsilon)\cap M$, $N^\varepsilon:=S^{n-1}(x_0;\varepsilon)\cap M$ as well as $X^\varepsilon:=B^n(x_0;\varepsilon)\cap X$. By Proposition \ref{pro_neigh}, for any $\varepsilon$ small enough, there is a basis of neighborhoods $(U_i)_{i \in \N}$ of $N^\varepsilon$ for which the restriction map (see remark \ref{rmk_restriction_map}) induces an isomorphism for every $i$: \begin{equation}\label{eq_U_i} H_{(1)}^j(U_i)\simeq H_{(1)}^j(N^\varepsilon).\end{equation} Denote by $\hat{\Omega}_{(1)}^j (N^\varepsilon)$ the direct limit of $\Omega_{(1)}^j (U\cap M)$ where $U$ runs over all the neighborhoods of $N^{\varepsilon}$. Denote by $\hat{H}_{(1)}^j(N^\varepsilon)$ the resulting cohomology (these groups are indeed isomorphic to $H_{(1)}^j (N^\varepsilon)$ thanks to Proposition \ref{pro_neigh}). The short exact sequence $$0 \to \Omega_{(1),X^\varepsilon} ^\bullet (M^\varepsilon)\to \Omega_{(1)} ^\bullet (M^\varepsilon)\to \hat{\Omega}_{(1)}^\bullet (N^\varepsilon)\to 0 $$ gives rise to the following long exact sequence: $$\dots \to \hat{H}_{(1)}^{j-1} (N^\varepsilon) \to H_{(1),X^\varepsilon} ^j (M^\varepsilon) \to H_{(1)} ^j (M^\varepsilon)\to \dots .$$ Similarly, let $C^\bullet _{X^\varepsilon} (M^\varepsilon)$ be the complex of singular cochains of $M^\varepsilon$ with compact support in $X^\varepsilon$, i.e. those whose support is disjoint from some neighborhood of $S^{n-1}(x_0;\varepsilon)$. Consider now the mappings: $$\psi_{M^\varepsilon,X^\varepsilon }^\bullet: \Omega^\bullet _{(1),X^\varepsilon}(M^\varepsilon) \to C^\bullet_{X^\varepsilon} (M^\varepsilon),$$ obtained in the same way as $\psi_{X^\varepsilon} ^\bullet$, by integrating the $L^1$ differential forms on simplices. The above exact sequence, together with the analogous exact sequence in singular cohomology, provides the following commutative diagram: \vskip 0.5cm \begin{center} \begin{picture}(-140,0) \put(103,-95){$\mbox{diag.} \, 1.
$} \put(-240,0){$\dots\longrightarrow H_{(1),X^\varepsilon}^j (M^\varepsilon)\; \longrightarrow \; H^{j}_{(1)}(M^\varepsilon) \longrightarrow\; H^{j}_{(1)}(N^\varepsilon) \; \longrightarrow \; H^{j+1}_{(1),X^\varepsilon}(M^\varepsilon) \; \longrightarrow \dots$} \put(-255,-70){$\;\dots \; \longrightarrow H_{X^\varepsilon} ^j (M^\varepsilon) \;\;\longrightarrow \;\; H^{j} (M^\varepsilon)\;\:\longrightarrow \:\; H^{j} (N^\varepsilon) \;\; \longrightarrow \; H^{j+1} _{X^\varepsilon} (M^\varepsilon) \longrightarrow \dots$} \put(-190,-9){\vector(0,-1){45}} \put(-185,-30){$\psi^j _{M^\varepsilon,X^\varepsilon}$} \put(-110,-9){\vector(0,-1){45}}\put(-105,-30){$\psi^j_{M^\varepsilon}$} \put(-40,-9){\vector(0,-1){45}}\put(-35,-30){$\psi^{j}_{N^\varepsilon}$} \put(25,-9){\vector(0,-1){45}} \put(30,-30){$\psi^{j+1}_{M^\varepsilon,X^\varepsilon}$} \end{picture} \end{center} \vskip 4cm \section{Proof of the de Rham theorem for $L^1$ cohomology.}\label{sect_de_rham_l1} The first step of the proof of Theorem \ref{thm_intro} is to compute the cohomology groups locally. This requires constructing some homotopy operators and describing their properties. The letter $M\subset \R^n$ still stands for a bounded submanifold. Set $X:=cl(M)$ and take $x_0\in X$. Set again for simplicity $M^{\varepsilon}:=M \cap B^n(x_0;\varepsilon)$ and $N^\varepsilon:=M \cap S^{n-1}(x_0;\varepsilon)$ as well as $X^\varepsilon:=B^n(x_0;\varepsilon)\cap X$ (we do not mention $x_0$ in this notation since it is arbitrary). \subsection{Some operators on weakly smooth forms.}\label{sect_hom_hop} For $\varepsilon >0$ small enough and $j>0$ fixed, we are going to construct operators for weakly smooth forms. For this purpose, apply Theorem \ref{thm Lipschitz conic structure} to $X$ at $x_0$.
Let $h: (0;\varepsilon)\times N^\varepsilon\to M^\varepsilon$ be the homeomorphism described in definition \ref{dfn conical} and fix $\omega \in \overline{\Omega}_{(1)} ^j (M^\varepsilon)$ with $j\geq 0$ (where $\varepsilon$ is also provided by definition \ref{dfn conical}). Set now $Z:= (0;\varepsilon)\times N^\varepsilon$. We may define two forms $\omega_1 \in \overline{\Omega}_{(1)}^j(Z)$ and $\omega_2 \in \overline{\Omega}_{(1)}^{j-1}(Z)$ by: $$h^*\omega(t;x):=\omega_1(t;x)+dt\wedge \omega_2(t;x),$$ where $\omega_1$ and $\omega_2$ do not involve $dt$. The forms $\omega_1$ and $\omega_2$ are indeed only defined for almost every $(t;x) \in Z$. Next, we set for almost every $(t;x) \in Z$ and $0<\nu\leq \varepsilon$: \begin{equation}\label{eq_def_alpha}\alpha(t;x):=\int_\nu ^t \omega_2 (s;x) ds,\end{equation} and \begin{equation}\label{eq_def_K} \mathcal{K}_\nu \omega:=h^{-1*}\alpha. \end{equation} We first show that $\mathcal{K}_\nu$ preserves the weakly smooth forms. \subsection*{The mapping $\pi_\nu$.} Given $0<\nu\leq \varepsilon$, let $\pi_\nu:=h\circ P_\nu \circ h^{-1}$, where $P_\nu (t;x):=(\nu ;x)$. Given a differential form $\omega $ on $M^\varepsilon$ we will denote by $\pi_\nu ^*\omega$ the form given by the pull-back of $\omega$ by means of $\pi_\nu:M^\varepsilon \to M^\varepsilon$. \begin{lem}\label{lem_proprietes_de_K_nu} For $M$ as above, $\mathcal{K}_\nu$ preserves the weakly smooth forms and satisfies on $\overline{\Omega}_{(1)} ^j (M^\varepsilon)$: \begin{equation}\label{eq_K_nu_homot_operator}\overline{d}\mathcal{K}_\nu-\mathcal{K}_\nu \overline{d}=Id-\pi_\nu^*.\end{equation} \end{lem} \begin{proof} Take $\omega$ in $ \overline{\Omega}_{(1)} ^j (M^\varepsilon)$ and let us fix a form $\varphi \in \Omega^{m-j} _{0,\infty} (M^\varepsilon)$. Let $h$ be as above. The mapping $h$ is locally bi-Lipschitz on $h^{-1}(M^\varepsilon)$ (see Remark \ref{rem_conical} (\ref{rem_conical_item3})).
By Proposition \ref{prop_pullback_weakly smooth forms}, the form $h^*\omega$ is weakly differentiable and $\overline{d} h^* \omega= h^*\overline{d}\omega$, and the same is true for $\varphi$. Let $\alpha$ be the form defined in (\ref{eq_def_alpha}) and set $\psi:=h^*\varphi$. It is enough to show: $$(-1)^{j}\int_Z \alpha \wedge \overline{d}\psi = \int_Z h^*\mathcal{K}_\nu \overline{d}\omega \wedge \psi+\int_Z h^*\omega \wedge \psi- \int_Z h^*\pi_\nu ^* \omega \wedge \psi . $$ For this purpose, note that we have (for relevant orientations): \begin{eqnarray*} (-1)^{j}\int_{Z} \alpha \wedge \overline{d}\psi&=&(-1)^{j}\int_0 ^\varepsilon(\int_{ [\nu;t]\times N^\varepsilon } h^*\omega \wedge \overline{d}\psi)\,dt \\&=& \int_0 ^\varepsilon\int_{ [\nu;t]\times N^\varepsilon} \overline{d}h^*\omega \wedge\psi-\int_0 ^\varepsilon\int_{[\nu;t]\times N^\varepsilon}\overline{d}( h^*\omega(s;x) \wedge \psi(t;x) ) , \end{eqnarray*} (integrating by parts) and therefore if $\Delta_\nu:=\{(s;t): \nu \leq s \leq t < \varepsilon \; \mbox{or} \; 0< t \leq s \leq \nu\}$ we have: $$(-1)^{j}\int_{Z} \alpha \wedge \overline{d}\psi =\int_{Z} h^*\mathcal{K}_\nu \overline{d} \omega \wedge \psi- \int_{N^\varepsilon}\int_{\Delta_\nu}\overline{d}( h^*\omega(s;x) \wedge \psi(t;x) ) . $$ But, since $\psi$ has compact support in $Z$, by Stokes' formula we have: $$\int_{N^\varepsilon}\int_{\Delta_\nu}\overline{d}( h^*\omega(s;x) \wedge \psi(t;x) )=\int_Z h^*\pi_\nu ^* \omega \wedge \psi -\int_{Z} h^*\omega\wedge \psi.$$ Together with the preceding equality this implies that $$(-1)^{j}\int_{Z} \alpha \wedge \overline{d}\psi =\int_{Z} h^*\omega\wedge \psi +\int_{Z} h^*\mathcal{K}_\nu \overline{d} \omega \wedge \psi - \int_Z h^*\pi_\nu ^* \omega \wedge \psi ,$$ as required. \end{proof} \subsection*{The homotopy operator $\mathcal{K}$.} We derive from $\mathcal{K}_\nu$ a local homotopy operator $\mathcal{K}$. Let $\varepsilon >0$ be as above and let $j>0$. We just saw that $\mathcal{K}_\nu$ preserves the weakly smooth forms.
Observe that if $\omega$ has compact support in $X^\varepsilon$ then $h^*\omega(\nu;x)$ is zero for $\nu<\varepsilon$ sufficiently close to $\varepsilon$. Therefore $\mathcal{K}_\nu$ induces an operator: $$\mathcal{K}: \overline{\Omega}_{(1),X^\varepsilon} ^j (M^\varepsilon)\to \overline{\Omega}_{(1),X^\varepsilon} ^{j-1} (M^\varepsilon),$$ defined by the stationary limit $\mathcal{K}\omega:=\lim_{\nu \to \varepsilon} \mathcal{K}_\nu\omega$. Below we describe the properties of $\mathcal{K}$. \begin{pro}\label{pro_proprietes_de_K} For $M$ as above, $\mathcal{K}$ is a homotopy operator, in the sense that: \begin{equation}\label{eq_K_homot_operator}\overline{d}\mathcal{K}-\mathcal{K} \overline{d}=Id,\end{equation} bounded for the $L^1$ norm and satisfying for $j<m$: \begin{equation}\label{eq_int_komega_link} \lim_{t\to 0} \int_{N^{t}} |\mathcal{K}\omega|=0, \end{equation} for any $\omega \in \overline{\Omega}_{(1),X^\varepsilon} ^j (M^\varepsilon)$. \end{pro} \begin{proof} As observed in the paragraph preceding the proposition, if $\omega$ has compact support in $X^\varepsilon$ then $h^*\omega(\nu;x)$ vanishes near $\nu =\varepsilon$, and thus $\pi_\nu ^*\omega$ is zero if $\nu$ is sufficiently close to $\varepsilon$. As a matter of fact, equality (\ref{eq_K_homot_operator}) follows from (\ref{eq_K_nu_homot_operator}). We have to check that $\mathcal{K}$ is bounded for the $L^1$ norm and show (\ref{eq_int_komega_link}). {\bf Some notation.} We shall write $\mathcal{I}_k$ for the set of all multi-indices $I=(i_1,\dots,i_k)$ with $0< i_1<\dots <i_k<m$. Given $I \in \mathcal{I}_k$ we shall write $\hat{I}$ for the multi-index of $\mathcal{I}_{m-1-k}$ such that $I \cup \hat{I}=\{1,\dots,m-1\}$. Let $\lambda_1,\dots,\lambda_{m-1}$ be the tame basis of $1$-forms provided by definition \ref{dfn conical} (on a dense subset $N'$ of $N^\varepsilon$) and set, for any multi-index $I$, $\lambda_I:=\lambda_{i_1}\wedge \dots \wedge \lambda_{i_k}$.
We are now going to show that the operator $\mathcal{K}$ is bounded for the $L^1$ norm. As $(\lambda_1;\dots;\lambda_{m-1})$ is a tame basis of $N^\varepsilon$ we may decompose $\alpha=\sum_{I \in \mathcal{I}_{j-1}} \alpha_I \lambda_I$ (where $\alpha$ is the form defined in (\ref{eq_def_alpha})) and observe that by $(3)$ of definition \ref{dfn conical} \begin{equation}\label{eq_K_omega_simeq} |\mathcal{K} \omega|= |\sum_{I \in \mathcal{I}_{j-1}}h^{-1*} (\alpha_I \lambda_I) |\sim \sum_{I \in \mathcal{I}_{j-1}} \frac{|\alpha_I\circ h^{-1}|}{\varphi_I\circ h^{-1}},\end{equation} where $\varphi_I =\varphi_{i_1} \cdots \varphi_{i_k}$, and consequently it is enough to show that all the summands of the right hand side are $L^1$ on $M^\varepsilon$. Changing variables by means of $h$, this amounts to showing that for any $I \in \mathcal{I}_{j-1}$: $$\int_{Z} |\alpha_I|\cdot\frac{J_h}{\varphi_I} <\infty,$$ where $J_h$ stands for the absolute value of the Jacobian determinant of $h$. Likewise, decompose $$\omega_{2}=\sum_{I \in \mathcal{I}_{j-1}} \omega_{2,I} \lambda_I$$ (recall that we decomposed $h^*\omega=\omega_1+dt \wedge \omega_2$). As $(\lambda_1;\dots;\lambda_{m-1})$ is a tame basis of $N^\varepsilon$ we have $|\omega_2|\sim \sum_{I\in \mathcal{I}_{j-1}} |\omega_{2,I}|$. For the same reasons as in (\ref{eq_K_omega_simeq}): \begin{equation}\label{eq maj de omega}|\omega_{2,I}(s;x)| \leq C |\omega (h(s;x))| \cdot \varphi_I(s;x). \end{equation} By $(3)$ of definition \ref{dfn conical} we have on $Z$: \begin{equation}\label{eq_varphi_et_jh}\varphi_I \cdot \varphi_{\hat{I}}\sim J_h.\end{equation} Put $Y^t :=\{t\} \times N^\varepsilon$.
There is a constant $C$ such that for almost every $t$ and any $I\in \mathcal{I}_k$: \begin{eqnarray} \int_{Y^t} \frac{|\alpha_I|}{\varphi_I} \cdot J_h &\leq &C \int_{x \in N^\varepsilon } |\alpha_I(t;x)|\cdot\dfrac{J_h(t;x)}{\varphi_{I}(t;x)} \nonumber\\ &\leq & C\int_{N^\varepsilon } |\alpha_I(t;x)|\cdot \varphi_{\hat{I}}(t;x) \quad \mbox{(by (\ref{eq_varphi_et_jh}))}\nonumber\\ &\leq & C\int_{N^\varepsilon } \int_t ^\varepsilon |\omega_{2,I} (s;x)|\cdot \varphi_{\hat{I}}(t;x)ds\quad \mbox{(by (\ref{eq_def_alpha}))}\nonumber \\ &\leq & C\int_{N^\varepsilon} \int_t ^\varepsilon |\omega (s;x)|\cdot \varphi_{I}(s;x) \cdot \varphi_{\hat{I}}(t;x)ds \quad \mbox{(by (\ref{eq maj de omega}))}\label{eq_calcul_L1norm_K}\\ &\leq & C\int_{N^\varepsilon} \int_t ^\varepsilon |\omega(s;x)|\cdot \varphi_{I}(s;x) \cdot \varphi_{\hat{I}}(s;x)ds\nonumber \end{eqnarray} since, by $(1)$ of definition \ref{dfn conical}, $\varphi_{\hat{I}}(t;x)$ is nondecreasing with respect to $t$. We finally get: \begin{equation}\label{eq_calcul_L1norm_KIII}\int_{Y^t} \frac{|\alpha_I|}{\varphi_I} \cdot J_h \leq C\int_{ (t;\varepsilon)\times N^\varepsilon} |\omega (s;x)|\cdot J_h(s;x) = C \int_{h((t;\varepsilon) \times N^\varepsilon ) } |\omega |,\end{equation} which is bounded above uniformly in $t$ since $\omega$ is an $L^1$ form, proving that $\frac{\alpha_I}{\varphi_I}$ is integrable. It remains to establish (\ref{eq_int_komega_link}). For simplicity set $$f_t(x)=\int_t ^\varepsilon |\omega (s;x)|\cdot \varphi_{I}(s;x) \cdot \varphi_{\hat{I}}(t;x)ds.$$ As $\varphi_{\hat{I}}(t;x)$ is nondecreasing with respect to $t$, this family of functions is bounded by the $L^1$ function $ \int_0 ^\varepsilon |\omega(s;x)|\cdot \varphi_{I}(s;x) \cdot \varphi_{\hat{I}}(s;x)ds$. Moreover, as $\varphi_{\hat{I}}(t;x)$ goes to zero as $t$ tends to zero (since $j<m$), the function $f_t$ tends to zero pointwise as $t$ goes to zero (by the Lebesgue dominated convergence theorem, applied this time to the integral with respect to $s$).
Hence, applying the Lebesgue dominated convergence theorem a second time, we conclude: $$ \lim_{t \to 0} \int_{N^\varepsilon} f_t=0. $$ By (\ref{eq_calcul_L1norm_K}): $$\lim_{t \to 0} \,\int_{Y^t} \frac{|\alpha_I|}{\varphi_I} \cdot J_h \leq C \lim_{t \to 0} \int_{N^\varepsilon } f_t=0. $$ By (\ref{eq_K_omega_simeq}), this establishes (\ref{eq_int_komega_link}). \end{proof} \begin{rem}\label{rem_K} Notice that equation (\ref{eq_calcul_L1norm_KIII}) yields that there is a constant $C$ such that: $$\int_{N^t} |\mathcal{K} \omega| \leq C |\omega|_1, $$ for any $t\leq \varepsilon $ and any form $\omega$ in $\Omega_{(1)}^j(M^\varepsilon)$. \end{rem} \subsection{Local computations of the $L^1$ cohomology.} The following proposition may be considered as a ``Poincar\'e Lemma for $L^1$ cohomology''. This is an important step in the proof of Theorem \ref{thm_intro}. \begin{pro}\label{Poincare_lem_l1_simple} For $\varepsilon>0$ small enough we have for every $j$: $$H_{(1),X^\varepsilon}^j ( M^\varepsilon ) \simeq 0. $$ \end{pro} \begin{proof} For $j=0$, a closed form with compact support is the zero form and the result is clear. Fix a closed form $\omega \in \Omega_{(1),X^\varepsilon} ^j ( M^\varepsilon)$ with $j>0$. Let $\mathcal{K}$ be the homotopy operator constructed in the previous section (see Proposition \ref{pro_proprietes_de_K}). As $\omega$ is closed with compact support, (\ref{eq_K_homot_operator}) yields $\overline{d}\mathcal{K}\omega=\omega$, showing that $\omega$ is $\overline{d}$-exact, and thus exact by Proposition \ref{pro l1 isom smooth}. \end{proof} \subsection{The mappings $\psi_M ^j$} As in the case of the classical de Rham theorem (for compact smooth manifolds), the isomorphism is given by integration on simplices. Let us define this natural map. Recall that singular simplices $\sigma:\Delta_j \to M $ are assumed to be subanalytic mappings.
Therefore, see \cite{vlinfty} for details, we may define the following maps: \begin{eqnarray*}\psi_M^j:\Omega_{(1)}^j(M )&\to& C^{j}(M )\\ \omega &\mapsto& [\psi_M ^j(\omega):\sigma \mapsto \int_\sigma \omega].\end{eqnarray*} By Stokes' formula for singular simplices \cite{p,sv,vlinfty}, this is a cochain map. \subsection{De Rham theorem for $L^1$ cohomology.} We are now ready to prove the following theorem, which clearly implies Theorem \ref{thm_intro}. \begin{thm}\label{thm_l1_psi_M_isom} The above mappings $\psi_M ^j$ induce isomorphisms between the respective cohomology groups for any bounded (subanalytic) manifold $M$. \end{thm} \begin{proof} We prove the theorem by induction on $m$ ($=\dim M$). For $m=0$ the statement is vacuous. Define a complex of presheaves on $X$ by $\Omega_{(1)} ^j (U):=\Omega_{(1)}^j(U\cap M)$, if $U$ is an open subset of $X$, and denote by $\mathcal{L}^j$ the resulting differential sheaf. This is the sheaf on $X$ of {\it locally} $L^1$ forms of $M$ (locally in $X$). Denote by $\mathcal{H}^\bullet (\mathcal{L}^\bullet)$ the derived complex of sheaves, i. e. the complex of sheaves obtained from the presheaves $H^j (\Omega^\bullet _{(1)}(U))$. On the other hand, consider the complex of presheaves on $X$ defined by $S^j(U):=C^j(M\cap U)$, for $U$ an open subset of $X$, and denote by $\mathcal{S}^\bullet$ the associated complex of sheaves. As the $\mathcal{L}^j$'s are soft sheaves, they are acyclic and it follows from the theory of spectral sequences (see for instance \cite{bredon} IV Theorem 2.2) that, if the sheaf-mappings $\psi^j : \mathcal{H}^j(\mathcal{L}^\bullet) \to \mathcal{H}^j (\mathcal{S}^\bullet)$, induced by the morphisms of complexes of presheaves $\psi_U ^j:\Omega_{(1)}^j(U)\to C^j(U)$, are all isomorphisms, then the mappings $\psi_M^j$ must induce an isomorphism between the cohomology groups of the respective global sections of $\mathcal{S}^{\bullet}$ and $\mathcal{L}^\bullet$.
Global sections of $\mathcal{L}^\bullet$ are $L^1$ since, as $M$ is bounded, $X$ is compact, and hence locally $L^1$ amounts to $L^1$. To see that the mappings $\psi^j : \mathcal{H}^j(\mathcal{L}^\bullet) \to \mathcal{H}^j (\mathcal{S}^\bullet)$ are all local isomorphisms, we have to show that for every point $x_0$ in $X$, the mappings $\psi_{M^\varepsilon}^j$ induce isomorphisms in cohomology for any $\varepsilon$ small enough. Indeed, by section \ref{sect_cohomo_cpct_supp_local}, for any $\varepsilon$ small enough, we have the following commutative diagram for any $j$: \begin{center} \begin{picture}(-140,0) \put(-180,-3){$H_{(1)} ^j (M^{\varepsilon})$} \put(-90,-3){$H_{(1)} ^j (N^{\varepsilon})$} \put(-180,-52){$H^j (M^\varepsilon)$} \put(-90,-52){$H^j (N^\varepsilon)$} \put(-133,-50){\vector(3,0){40}} \put(-130,0){\vector(3,0){38}} \put(-165,-10){\vector(0,-1){30}} \put(-70,-10){\vector(0,-1){30}} \put(-190,-25){$\psi_{M^\varepsilon}^j$} \put(-60,-25){$\psi_{N^\varepsilon}^j$} \end{picture} \end{center} \vskip 1.8cm By Proposition \ref{Poincare_lem_l1_simple} (together with the long exact sequence of section \ref{sect_cohomo_cpct_supp_local}), the horizontal arrows are isomorphisms for any $\varepsilon$ small enough. Observe also that $N^\varepsilon$ is of dimension less than $m$. By induction on $m$, $\psi_{N^\varepsilon}^j$ induces an isomorphism on the cohomology groups and thus the above commutative diagram clearly shows that the mapping $\psi_{M^\varepsilon} ^j$ induces an isomorphism as well for any $j$. \end{proof} \section{Poincar\'e duality for $L^1$ cohomology}\label{sect_cor_poinc} We draw some consequences of Theorem \ref{thm_intro}, stating some duality results between $L^1$ and $L^\infty$ cohomology. We start by recalling some results and providing basic definitions. We recall that, except for the differential forms, all the sets and mappings are assumed to be (globally) subanalytic. \subsection{Intersection homology.}\label{sect_ih} We recall the definition of intersection homology as it was introduced by Goresky and MacPherson \cite{ih1,ih2}.
\begin{defs}\label{del pseudomanifolds} A subset $X\subset \R^n$ is an {\bf $m$-dimensional pseudomanifold} if $X_{reg}$ is an $m$-dimensional manifold which is dense in $X$ and $\dim X_{sing}<m-1$. A {\bf stratified pseudomanifold} is the data of a pseudomanifold together with a filtration: $$X_0\subset \dots \subset X_{m-2}= X_{m-1}\subset X_m=X,$$ such that $X_i\setminus X_{i-1}$ is either empty or a smooth manifold of dimension $i$. Throughout this section, the letter $X$ will denote a stratified pseudomanifold. A {\bf perversity} is a sequence of integers $p = (p_2, p_3,\dots, p_m)$ such that $p_2 = 0$ and $p_{k+1} = p_k$ or $p_k + 1$. A subspace $Y \subset X$ is called {\bf $(p;i)$-allowable} if $\dim (Y \cap X_{m-k}) \leq p_k+i-k$ for any $k$. Define $I^{p}C_i (X)$ as the subgroup of $C_i(X)$ consisting of those chains $\sigma$ such that $|\sigma|$ is $(p;i)$-allowable and $|\partial \sigma|$ is $(p;i - 1)$-allowable. The {\bf $i^{th}$ intersection homology group of perversity $p$}, denoted $I^{p}H_i (X)$, is the $i^{th}$ homology group of the chain complex $I^{p}C_\bullet(X).$ The {\bf $i^{th}$ intersection cohomology group of perversity $p$}, denoted $I^{p}H^i (X)$, is defined as $Hom (I^{p}H_i (X);\R)$. \end{defs} In \cite{ih1,ih2} Goresky and MacPherson have proved that, if the stratification is sufficiently nice (i. e. if topological triviality holds along strata) then these homology groups are finitely generated and independent of the stratification. Since such stratifications exist for subanalytic sets \cite{tesc} we will admit this fact and shall work without specifying the stratification. Furthermore, Goresky and MacPherson also proved that their theory satisfies a generalized version of Poincar\'e duality. We denote by $t$ {\bf the maximal perversity}, i. e. $t=(0,1,\dots,m-2)$. \begin{thm}(Generalized Poincar\'e duality \cite{ih1,ih2})\label{thm_poincare_ih} Let $X$ be a compact oriented pseudomanifold and let $p$ and $q$ be perversities with $p+q=t$.
Then: $$I^p H^j(X)=I^q H^{m-j}(X).$$ \end{thm} \begin{exa} We will be interested in the cases of the zero perversity $0=(0,\dots,0)$ and the maximal perversity, which are complementary perversities. By the above theorem, we have for any compact oriented pseudomanifold $X$ of dimension $m$: $$I^0 H^j(X)=I^t H^{m-j}(X).$$ \end{exa} \subsection{$L^\infty$-cohomology.}\label{sect_linfty} We recall the definition of the $L^\infty$ cohomology groups that were introduced by the author of the present paper in \cite{vlinfty}. Let $M\subset \R^n$ be a smooth oriented submanifold. \begin{dfn} We say that a form $\omega$ on $M$ is $L^\infty$ if there exists a constant $C$ such that for any $x \in M$: $$|\omega(x)| \leq C. $$ We denote by $\Omega^j_\infty (M)$ the cochain complex consisting of all the $C^\infty$ $j$-forms $\omega$ such that $\omega $ and $d\omega$ are both $L^\infty$. The cohomology groups of this cochain complex are called the {\bf $L^\infty$-cohomology groups of $M$} and will be denoted by $H^\bullet _\infty(M)$. We may endow this cochain complex with the norm: $$|\omega|_\infty:=\sup_{M} |\omega|+\sup_M |d\omega|. $$ We also introduce the {\bf locally $L^\infty$ forms} as follows. Given an open subset $U$ of $cl(M)$, let $\Omega_{\infty,loc}^j (U\cap M)$ be the de Rham complex consisting of the smooth forms on $U\cap M$ locally bounded in $U$ which have a locally bounded (in $U$) exterior derivative. This gives rise to a cohomology theory that we shall denote $H^j _{\infty,loc}(U\cap M)$. Similarly we define the de Rham complex $\overline{\Omega}_\infty ^\bullet (M)$ as the complex of weakly smooth and almost everywhere continuous $L^\infty$ forms. \end{dfn} By Theorem \ref{thm_intro_linfty}, we know that the $L^\infty$ cohomology of a pseudomanifold coincides with its intersection cohomology in the maximal perversity. We shall need the following theorem of \cite{vlinfty}. Set again $M^\varepsilon = M\cap B^n(x_0;\varepsilon)$ for some $x_0\in cl(M)$ fixed.
\begin{thm} \label{thm_poincare}\cite{vlinfty}(Poincar\'{e} Lemma for $L^\infty$ cohomology) For $\varepsilon$ positive small enough and any positive integer $j$: $$H_\infty ^j(M^\varepsilon)\simeq 0. $$ \end{thm} \subsection{Poincar\'e duality for $L^1$ cohomology} We give some corollaries of Theorem \ref{thm_intro}. Thanks to Goresky and MacPherson's generalized Poincar\'e duality, we get an explicit topological criterion on the singularity to determine whether $L^1$ cohomology is Poincar\'e dual to $L^\infty$ cohomology. \begin{cor} Let $X$ be a compact oriented pseudomanifold. If $H^j(X_{reg})\simeq I^0H^j(X)$ then $L^1$ cohomology is Poincar\'e dual to $L^\infty$ cohomology in dimension $j$, i. e. $$H_{(1)}^{j}(X_{reg})\simeq H^{m-j}_\infty(X_{reg}).$$ \end{cor} \begin{proof} This is a consequence of Theorems \ref{thm_intro} and \ref{thm_intro_linfty} and of Goresky and MacPherson's generalized Poincar\'e duality. \end{proof} \begin{cor}Let $M \subset \R^n$ be an oriented bounded $C^\infty$ submanifold. If $\dim \delta M=k$ then $L^1$ cohomology is Poincar\'e dual to $L^\infty$ cohomology in dimension $j<m-k-1$, i. e. for any positive integer $j<m-k-1$: $$H_{(1)} ^j(M) \simeq H_\infty^{m-j} (M).$$ \end{cor} \begin{proof} We may assume $k<m-1$ since otherwise the result is trivial. Set $X=cl(M)$ and observe that $X$ is a pseudomanifold. Fix a Whitney $(b)$ stratification of $X$ (see \cite{bcr, tesc} for the construction of such stratifications) such that $X$ is a stratified pseudomanifold. By definition of $0$-allowable chains (see section \ref{sect_ih}), the support of a singular chain $\sigma \in I^0 C_{j} (X)$ cannot intersect the strata of the singular locus of dimension less than $m-j$.
If $j<m-k$ (and hence $k<m-j$) then every stratum of the singular locus has dimension less than $m-j$ (there is no singular stratum of dimension greater than or equal to $m-j$), and thus $|\sigma|$ must lie entirely in $X_{reg}$; therefore $$I^0C_{j} (X)=C_{j}(X_{reg}).$$ Hence if $j<m-k-1$, the same applies to $(j+1)$ and therefore $$I^0H_{j} (X)=H_{j}(X_{reg}).$$ The result follows from the preceding corollary. \end{proof} This corollary clearly implies Corollary \ref{cor_poincare_duality_intro}. \section{Lefschetz duality for $L^1$ cohomology.}\label{sect_dirichlet} We are going to investigate Lefschetz duality. This means that we are going to consider $L^1$ forms satisfying boundary conditions. Our duality result will relate the cohomology of these forms to the cohomology of $L^\infty$ forms (Theorem \ref{thm_Poincare duality_dirichlet}). We first define and study the de Rham complex of Dirichlet $L^1$ forms. In section \ref{sect_pd_dirichlet}, we establish Lefschetz duality for $L^1$ cohomology. \subsection{Dirichlet $L^1$-cohomology groups.}\label{sect_Dirichlet_L^1-cohomology_groups.} In this section, $M$ is an orientable submanifold of $\R^n$ (not necessarily bounded) and $X$ will stand for its topological closure. We are going to consider $L^1$ forms with compact support. We recall that the support in $X$ of an $L^1$ form on $M$ is defined as the closure {\it in} $X$ of the set of points at which this form is nonzero. Let $V\subset X$ be open.
\begin{dfn} We shall say that $\omega \in \overline{\Omega} _{(1)} ^j(M)$ has the {\bf $L^1$ Stokes' property in $V$} if for any $\alpha \in \overline{\Omega} _{\infty,V}^{m-j-1}(M)$ we have: \begin{equation}\label{eq_l1_stokes_property}\int_{M} \omega\wedge \overline{d} \alpha=(-1)^{j+1} \int_{M} \overline{d} \omega\wedge \alpha.\end{equation} The de Rham complex of weakly smooth $L^1$ forms of $M$ satisfying this property (and whose weak exterior derivative satisfies this property as well) is called the complex of (weakly smooth) {\bf Dirichlet $L^1$ forms on $M$} and is denoted $\overline{\Omega}_{(1)}^j(M ; V\cap \delta M)$. The subcomplex of such $C^\infty$ forms is denoted $\Omega_{(1)}^j(M ; V \cap \delta M)$. As before, we denote by $\overline{\Omega}_{(1),X}^j(M ; V \cap \delta M)$ and $\Omega_{(1),X}^j(M ; V \cap \delta M)$ the subcomplexes of the forms having compact support in $X$. \end{dfn} \begin{rem}\label{rem_cpct_support} If $\omega$ has compact support in $V$ and satisfies the $L^1$ Stokes' property in $V$ then clearly (\ref{eq_l1_stokes_property}) holds for any $\alpha \in \overline{\Omega} _{\infty}^{m-j-1}(M)$. \end{rem} If $K$ denotes a compact manifold with boundary $\partial K$, the relative de Rham complex of differential forms $\Omega^j (K;\partial K)$ is usually defined as the set of $j$-forms $\omega$ on $K$ such that $\omega_{|\partial K}\equiv 0$. However, the smooth forms of the pair $(K;\partial K)$ may also be characterized as the smooth forms satisfying (\ref{eq_l1_stokes_property}) for any smooth $L^\infty$ form $\alpha$ on $K$. The Dirichlet $L^1$ cohomology defined above is therefore completely analogous to that of compact smooth manifolds. In the case of non-compact manifolds, it is not possible to require that the forms vanish at the singularities since the forms are not defined on $\delta M$.
If one wants a similar characterization as in the case of compact manifolds with boundaries, one has to require a condition near $\delta M$ and pass to the limit. For this purpose, choose an exhaustion function $\rho:X\to \R^+$, that is to say, a positive $C^2$ function on $M$ tending to zero as we approach $\delta M$. Then $\{\rho\geq \varepsilon\}$ is a manifold with boundary $\{\rho=\varepsilon\}$. Given $\omega \in \overline{\Omega}_{(1)}^{j}(M)$, we may define an operator on $\overline{\Omega}_{\infty,X} ^{m-j-1}(M)$ by: \begin{equation}\label{eq_def de lomega}l_\omega (\alpha):=\lim_{\varepsilon \to 0} \int_{\rho=\varepsilon} \omega \wedge \alpha, \end{equation} for $\alpha \in \overline{\Omega}_{\infty,X} ^{m-j-1} (M)$. It is easy to see (by Stokes' formula) that if $\alpha\in \overline{\Omega}_{\infty,X} ^{m-j-1} (M)$ and $\omega \in \overline{\Omega}_{(1)} ^j (M)$ then the latter limit exists and that: $$\int_{M} \omega\wedge \overline{d} \alpha=(-1)^{j+1}\int_{M} \overline{d} \omega\wedge \alpha+l_\omega(\alpha).$$ In particular the limit in (\ref{eq_def de lomega}) is independent of the exhaustion function $\rho$. Observe also that $l_\omega$ is a bounded operator on $(\overline{\Omega}_{\infty,X} ^{m-j-1} (M);|.|_\infty)$. \begin{dfn}\label{dfn_norme_partial}Set: $$|\omega|_{1,\delta}:=|l_\omega|,$$ where $|l_\omega|$ denotes the operator norm of $l_\omega$.\end{dfn} Now, it follows from the definitions that $|\omega|_{1,\delta}=0$ if and only if the $L^1$ Stokes' property holds for $\omega$. Hence, we get the following characterization of Dirichlet $L^1$ forms: \begin{equation}\label{eq_l1_rho}\overline{\Omega}_{(1)} ^\bullet (M;\delta M) = \{\omega \in \overline{\Omega}_{(1)}^{\bullet}(M): |\omega|_{1,\delta}=|\overline{d} \omega|_{1,\delta}=0 \}.\end{equation} This characterization will be very useful later on to check that the $L^1$ Stokes' property holds.
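To fix ideas, let us compute $l_\omega$ and the norm $|\omega|_{1,\delta}$ on the simplest possible model; this elementary example is only meant as an illustration and is not needed in the sequel.

\begin{exa} Take $M=(0;1)\subset \R$, so that $m=1$, $X=[0;1]$ and $\delta M=\{0;1\}$, and choose an exhaustion function $\rho$ with $\{\rho= \varepsilon\}=\{\varepsilon;1-\varepsilon\}$ for $\varepsilon$ small. If $\omega$ is a $0$-form of $\overline{\Omega}_{(1)}^0(M)$ then $\omega$ and $\omega'$ are $L^1$, so that $\omega$ extends continuously to $[0;1]$; moreover every $\alpha \in \overline{\Omega}_{\infty,X}^{0}(M)$ has bounded derivative and thus extends continuously to $[0;1]$ as well. Taking into account the boundary orientation of $\{\rho \geq \varepsilon\}$, we get: $$l_\omega(\alpha)=\lim_{\varepsilon \to 0}\left(\omega(1-\varepsilon)\,\alpha(1-\varepsilon)-\omega(\varepsilon)\,\alpha(\varepsilon)\right)=\omega(1)\alpha(1)-\omega(0)\alpha(0).$$ Consequently $|\omega|_{1,\delta}=0$ if and only if $\omega(0)=\omega(1)=0$: in this model, the $L^1$ Stokes' property is precisely a Dirichlet boundary condition, which accounts for the terminology. \end{exa}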
\begin{pro}\label{pro l1 isom smooth_dir} The inclusions $\Omega_{(1),X} ^j(M;\delta M) \hookrightarrow \overline{\Omega}_{(1),X} ^j(M;\delta M)$ and $\Omega_{(1)} ^j(M;\delta M) \hookrightarrow \overline{\Omega}_{(1)} ^j(M;\delta M)$ induce isomorphisms in cohomology. \end{pro} \begin{proof} The argument used in the proof of Proposition \ref{pro l1 isom smooth} also applies to Dirichlet cohomology. \end{proof} Let $\delta M^\varepsilon:=\delta M \cap X^\varepsilon$. We can also make use of Proposition \ref{pro_neigh} in the same way as in section \ref{sect_cohomo_cpct_supp_local} to get the following exact sequence: \begin{equation}\label{eq_long_dirichlet_local} \dots \to H^{j-1}_{(1)} (N^\varepsilon;\delta N^\varepsilon ) \to H^j_{(1),X^\varepsilon} (M^\varepsilon;\delta M^\varepsilon ) \to H^j_{(1)} (M^\varepsilon;\delta M^\varepsilon ) \to \dots \end{equation} \subsection{Lefschetz duality for Dirichlet $L^1$ cohomology and the de Rham theorem.}\label{sect_pd_dirichlet} Let $M\subset \R^n$ be an orientable submanifold of dimension $m$, set $X=cl(M)$ and take $x_0\in X$. Set again $M^{\varepsilon}:=M \cap B^n(x_0;\varepsilon)$ and $N^\varepsilon:=M\cap S^{n-1}(x_0;\varepsilon)$. \subsection*{The operator $\mathcal{K}_0$.} We are going to construct a homotopy operator: $$\mathcal{K}_0:\overline{\Omega}_{(1)} ^m (M^\varepsilon;\delta M^\varepsilon) \to \overline{\Omega}_{(1)} ^{m-1} (M^\varepsilon;\delta M^\varepsilon),$$ ($m=\dim M$) based on the operator $\mathcal{K}_\nu$ introduced in section \ref{sect_hom_hop}. \begin{pro} On $\overline{\Omega}_{(1)} ^m (M^\varepsilon)$: $$\lim_{\nu,t \to 0} |\mathcal{K}_\nu \omega -\mathcal{K}_t \omega|_{1} =0, $$ and consequently $\mathcal{K}_0:=\lim_{\nu \to 0} \mathcal{K}_\nu$ defines a homotopy operator $\mathcal{K}_0 :\overline{\Omega}^m _{(1)} (M^\varepsilon)\to \overline{\Omega}^{m-1} _{(1)} (M^\varepsilon)$. \end{pro} \begin{proof} Let $\omega \in \overline{\Omega}^m _{(1)} (M^\varepsilon)$.
Let $h$ be the homeomorphism used to define $\mathcal{K}_\nu$ (see section \ref{sect_hom_hop}). As $\omega$ is an $m$-form, $h^*\omega$ is $L^1$. Clearly we have: $$ \lim_{t, \nu \to 0} \int_{M^\varepsilon} |\mathcal{K}_t \omega -\mathcal{K}_\nu \omega| =\lim_{t, \nu \to 0, t\leq \nu} \int_t ^\nu \int_{N^\varepsilon} | \omega _2|=0,$$ since, as observed, $h^* \omega$ is $L^1$ on $h^{-1} (M^\varepsilon)$. As $\omega$ is an $m$-form, it is identically zero in restriction to $N^\nu$ since this is an $(m-1)$-dimensional manifold. Consequently $\pi_\nu^* \omega$ is zero and, as $\overline{d}\omega=0$, by (\ref{eq_K_nu_homot_operator}) we have: $$\overline{d}\mathcal{K}_\nu = Id_{ \Omega^m _{(1)} (M^\varepsilon)}. $$ Passing to the limit we get that $\mathcal{K}_0\omega$ is weakly differentiable and that: $$\overline{d}\mathcal{K}_0\omega =\omega, $$ as required. \end{proof} \begin{pro}\label{pro_Komega_satisfait_l1} Let $\omega \in \overline{\Omega}_{(1)}^j(M^\varepsilon)$ be a form satisfying the $L^1$ Stokes' property in $X^\varepsilon$.\begin{enumerate} \item[(i)] If $0< j<m$ and $\omega$ has compact support in $X^\varepsilon$ then $\mathcal{K}\omega$ satisfies the $L^1$ Stokes' property in $X^\varepsilon$. \item[(ii)] If $j=m$, then $\mathcal{K}_0 \omega$ satisfies the $L^1$ Stokes' property in $X^\varepsilon$.\end{enumerate} \end{pro} \begin{proof} Let $\omega \in \overline{\Omega}_{(1),X^\varepsilon}^j(M^\varepsilon)$ be a form satisfying the $L^1$ Stokes' property. We have to check that $|\mathcal{K} \omega|_{1,\delta}=0$ (see (\ref{eq_l1_rho})). Consider a $C^2$ nonnegative function $\rho_1 :N^\varepsilon \to \R$ zero on $\delta N^\varepsilon$ and positive on $N^\varepsilon$. Set $\rho_2=\rho_1 \circ h^{-1}$ and denote by $\rho$ the Euclidean distance to $x_0$.
For $\mu$ and $\nu$ positive real numbers, let $$M_{\mu,\nu}:= \{ x \in M^\varepsilon: \rho_2(x)\geq \mu,\; \rho(x) \geq \nu \}.$$ Then $M_{\mu,\nu}$ is a manifold with corners whose boundary is the union of $\{x\in N^\nu: \rho_2(x)\geq \mu\} $ with $$W_{\mu,\nu}=\{ x \in M^\varepsilon : \rho_2(x)=\mu, \; \rho(x)\geq \nu \}.$$ Define $Z_{\mu,\nu}:=\partial W_{\mu,\nu}$. Denote by $M_{\mu,\nu}'$, $W'_{\mu,\nu}$ and $Z'_{\mu,\nu}$ the respective images by $h^{-1}$ of $M_{\mu,\nu}$, $W_{\mu,\nu}$ and $Z_{\mu,\nu}$. For the convenience of the reader, we gather all these notations in a picture: \begin{figure} \caption{ The Lipschitz conic structure of $M^\varepsilon$. Here $Z_{\mu,\nu}$ and $Z_{\mu,\nu}'$ are reduced to two points. } \end{figure} Observe that by construction (recall that $\rho(h(t;x))=t$) we have $W'_{\mu,\nu}=Z'_{\mu,\nu} \times [\nu;\varepsilon)$. By Proposition \ref{pro_proprietes_de_K}, we already know that: $$ \lim_{t \to 0} \int_{N^t} |\mathcal{K} \omega| =0.$$ Therefore it is enough to check that for every positive real number $\nu$: \begin{equation}\label{eq_lim_X_K}\lim_{\mu \to 0_+} \int_{ W_{\mu,\nu}} \mathcal{K}\omega \wedge \alpha =0, \end{equation} for any $\alpha \in \overline{\Omega}^{m-j} _{\infty,X^\varepsilon}(M^\varepsilon)$. Fix such a form $\alpha$. Write $\beta =h^*\alpha$ for simplicity, and decompose $\beta=\beta_1+dt \wedge \beta_2$ as well as $h^*\omega=\omega_1+dt \wedge \omega_2$. Observe that: \begin{equation}\label{eq_beta_1_et_omega_2} \beta_1\wedge \omega_2=0 \qquad \mbox{on } W'_{\mu,\nu},\end{equation} since this differential $(m-1)$-form does not involve $dt$.
We then compute: \begin{eqnarray}\label{eq_K_adj_de_K'} \int_{W_{\mu,\nu}} \mathcal{K} \omega \wedge \alpha&=& \int_{(t;x) \in W'_{\mu,\nu}} (\int_{s=t} ^\varepsilon \omega_2(s;x) ds ) \wedge \beta(t;x)\nonumber\\ &=& \int_{x\in Z_{\mu,\nu}'} \int_{t=\nu} ^\varepsilon \int_{s=t} ^\varepsilon \omega_2(s;x)\wedge \beta_2(t;x) \,ds\, dt\nonumber \quad \mbox{(by (\ref{eq_beta_1_et_omega_2}))} \\ &=& \int_{Z_{\mu,\nu}'} \int_{s=\nu} ^\varepsilon \int_{t=\nu} ^s \omega_2(s;x)\wedge \beta_2(t;x) \,dt \,ds \quad \mbox{(by Fubini)}\nonumber\\ &=& \int_{s=\nu} ^\varepsilon\int_{Z_{\mu,\nu}'} h^*\omega(s;x) \wedge \int_{t=\nu} ^s\beta_2(t;x)\,dt. \end{eqnarray} Define a form $\mathcal{K}'_\nu\alpha$ on $ (0;\varepsilon)\times N^\varepsilon$ by $$\mathcal{K}'_\nu \alpha(s;x):=\int_{t=\nu} ^s\beta_2(t;x)\,dt$$ if $s\geq \nu$, and set $\mathcal{K}'_\nu\alpha (s;x)$ to be zero if $s\leq \nu$. By $(2)$ of definition \ref{dfn conical}, $h$ induces a quasi-isometry on $[\nu;\varepsilon)\times N^\varepsilon$ (see Remark \ref{rem_conical} $(3)$) and therefore $h^{-1*}\mathcal{K}'_\nu\alpha$ is an $L^\infty$ form. Moreover, in view of (\ref{eq_K_adj_de_K'}), we clearly have: \begin{equation}\label{eq_K_K'_adj_enonce}\int_{W_{\mu,\nu}} \mathcal{K} \omega \wedge \alpha= \int_{W_{\mu,\nu}'} h^*\omega \wedge \mathcal{K}'_\nu \alpha.\end{equation} Now, as by definition $\mathcal{K}'_\nu \alpha$ is zero on $\partial M_{\mu,\nu}'\setminus W_{\mu,\nu}'$, this amounts to: $$ \int_{ W_{\mu,\nu}} \mathcal{K} \omega \wedge \alpha = \int_{\partial M_{\mu,\nu}} \omega \wedge h^{-1*}\mathcal{K}' _\nu \alpha,$$ which tends to zero as $\mu$ goes to zero, since $\omega$ satisfies the $L^1$ Stokes' property and $h^{-1*}\mathcal{K}'_\nu\alpha$ is an $L^\infty$ form (see Remark \ref{rem_cpct_support}), yielding (\ref{eq_lim_X_K}) and establishing $(i)$.
For a proof of $(ii)$, observe that for any $L^\infty$ $0$-form $\alpha$ with compact support in $X^\varepsilon$: $$ \lim_{t \to 0}\int_{N^t} |\mathcal{K}_0 \omega \wedge \alpha|\leq C \lim_{t \to 0} \int_{ (0;t)\times N^\varepsilon} |h^*\omega |=0 $$ (with $C=\sup |\alpha|$). Therefore, as in the proof of $(i)$, it is enough to show (\ref{eq_lim_X_K}) for $\mathcal{K}_0$. By definition, $\mathcal{K}_0 \omega$ is an $(m-1)$-form with no differential term involving $dt$. Thus the restriction of $\mathcal{K}_0 \omega$ to $W_{\mu,\nu}$ must be identically zero and consequently (\ref{eq_lim_X_K}) is trivial in this case. \end{proof} \begin{pro}\label{pro_poinc_lemma_dirichlet} (Poincar\'e Lemma for Dirichlet $L^1$ cohomology) For $j<m$ and $\varepsilon >0$ small enough $$H_{(1),X^\varepsilon} ^j (M^\varepsilon;\delta M ^\varepsilon) \simeq 0\simeq H_{(1)} ^m(M^\varepsilon;\delta M^\varepsilon) .$$ \end{pro} \begin{proof} The case $j=0$ is clear. Let $0<j<m$ and let $\omega \in \Omega_{(1),X^\varepsilon} ^j(M^\varepsilon;\delta M ^\varepsilon)$ be a closed form. By the preceding proposition, $\mathcal{K} \omega$ satisfies the $L^1$ Stokes' property. Furthermore, $\overline{d} \mathcal{K} \omega=\omega$, so that $\omega$ is $\overline{d}$-exact in the Dirichlet complex, and thus exact by Proposition \ref{pro l1 isom smooth_dir}. The first isomorphism ensues. To compute $H_{(1)} ^m(M^\varepsilon;\delta M ^\varepsilon)$, just use $\mathcal{K}_0$ and $(ii)$ of the preceding proposition in exactly the same way. \end{proof} \subsection*{Lefschetz duality for $L^1$ cohomology.} The setting is still the same as in section \ref{sect_pd_dirichlet}. \begin{thm}\label{thm_Poincare duality_dirichlet} The pairing $$H_{(1),X}^{m-j}(M;\delta M) \otimes H_{\infty,loc} ^{j}(M)\to \R$$ $$(\alpha;\beta) \mapsto \int_{M}\alpha \wedge \beta $$ is nondegenerate.
\end{thm} By ``nondegenerate'' we mean that for any closed form $\beta \in \Omega_{(1),X} ^{m-j}(M;\delta M)$ that is not exact there is a closed locally $L^\infty$ differential form $\alpha$ such that $\int_M\alpha \wedge \beta \neq 0$, and that for any closed locally $L^\infty$ form $\alpha$ that is not exact there is a form $\beta \in \Omega_{(1),X} ^{m-j}(M;\delta M)$ for which this integral is nonzero as well. \begin{proof} We shall apply an argument similar to the one used in the proof of Theorem \ref{thm_l1_psi_M_isom}. We argue by induction on $m$: we shall assume that the theorem holds for manifolds of dimension $(m-1)$, $m\geq 1$. Consider the complex of presheaves on $X$ defined by $\Omega_{(1),U}^j(U\cap M;U\cap \delta M)^*$ (where $*$ denotes the algebraic dual vector space), if $U$ is an open subset of $X$, and denote by $\mathcal{L}_{(1)}^j$ the resulting differential sheaf. Let $\mathcal{H}^\bullet (\mathcal{L}_{(1)}^\bullet)$ be the derived sheaves. Similarly, denote by $\mathcal{L}^j_{\infty}$ the differential sheaf resulting from the presheaf $\Omega_{\infty,loc}^j(U\cap M)$. For every open subset $U\subset X$ and $j\leq m$, consider the mappings $$\varphi_U ^j : \Omega_{\infty,loc}^{j}(U\cap M)\to \Omega_{(1),U}^{m-j}(M\cap U;\delta M\cap U)^*,$$ defined by $\varphi_U^j (\alpha):\beta \mapsto \int_{U\cap M} \alpha \wedge \beta$. It follows from the theory of spectral sequences (see for instance \cite{bredon} IV Theorem 2.2) that, if the mappings of complexes of differential sheaves induced by the $\varphi_U ^j$ are local isomorphisms, then $\varphi_M^j$ induces an isomorphism between the cohomology groups of the respective global sections of $\mathcal{L}_{(1)}^{m-j}$ and $\mathcal{L}_\infty ^{j}$, as required. Thus, we simply have to make sure that the mappings $\varphi_U ^j$ induce local isomorphisms at any $x_0 \in cl(M)$. Notice that by Theorem \ref{thm_poincare} and Proposition \ref{pro_poinc_lemma_dirichlet}, this is clear for $j>0$. It remains to deal with the case where $j=0$.
As we can work separately on the connected components of $M^\varepsilon$ we will assume that $M^\varepsilon$ is connected. By Proposition \ref{pro_poinc_lemma_dirichlet} we have: $$H^m_{(1)}(M^\varepsilon ;\delta M^\varepsilon) \simeq 0.$$ By induction on the dimension, we know that Lefschetz duality holds for $N^\varepsilon$. Since $N^\varepsilon$ is connected, by Theorem \ref{thm_poincare_ih} we get: $$H^{m-1}_{(1)}(N^\varepsilon;\delta N^\varepsilon) \simeq H_\infty ^{m-1}(N^\varepsilon)\simeq I^t H^{m-1}(N^\varepsilon) \simeq \R$$ (see \cite{ih1,ih2} for the local computations of the intersection homology groups). Thanks to the long exact sequence (\ref{eq_long_dirichlet_local}), we deduce that: $$H^m_{(1),X^\varepsilon}(M^\varepsilon ;\delta M^\varepsilon)\simeq H^{m-1}_{(1)}(N^\varepsilon;\delta N^\varepsilon) \simeq \R. $$ Hence, it is enough to show that $\varphi^0_{M^{\varepsilon}}$ is onto. As the $L^\infty$ closed $0$-forms are reduced to the constant forms, it suffices to prove that for $x_0 \in cl(M)$ and $\varepsilon >0$ small enough, we can find $\omega \in \Omega_{(1),X^\varepsilon} ^m (M^\varepsilon;\delta M^\varepsilon)$ such that $\int _{M^\varepsilon} \omega\neq 0$. As $M^\varepsilon$ is orientable we can find a volume form on $M^\varepsilon$. We may multiply this form by a nonnegative bump function, positive somewhere on $M^\varepsilon$, to get a form with compact support in $X^\varepsilon$. The integral on $M^\varepsilon$ of this form is then necessarily nonzero. This shows that $\varphi^0_{M^{\varepsilon}}$ is onto. \end{proof} Of course, when $M$ is bounded, $H_{\infty, loc}^j (M)$ (resp. $H_{(1),X}^{j}(M;\delta M)$) and $H_{\infty} ^j(M)$ (resp. $H_{(1)}^{j}(M;\delta M)$) coincide, so that the latter pairing induces, in the case of a bounded manifold, an isomorphism between $H_\infty ^j (M)$ and the dual vector space of $H_{(1)} ^{m-j} (M;\delta M)$, establishing Theorem \ref{thm_intro_poincare_dirichlet}.
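As a sanity check, let us verify the statement of Theorem \ref{thm_intro_poincare_dirichlet} by hand on the simplest possible model; this elementary computation is only meant as an illustration.

\begin{exa} Take $M=(0;1)\subset \R$, so that $m=1$ and $\delta M=\{0;1\}$. A closed $L^\infty$ $0$-form is a constant, so $H^0_\infty(M)\simeq \R$, while $H^1_\infty(M)\simeq 0$, since a bounded $1$-form $f\,dx$ admits the bounded primitive $F(x):=\int_0^x f(s)ds$. On the Dirichlet side, every $L^1$ $1$-form is a Dirichlet form (the condition is empty in the top degree), and $f\,dx$ is the differential of a Dirichlet $0$-form if and only if $\int_0 ^1 f=0$ (the primitive vanishing at $0$ must also vanish at $1$). Hence $H^1_{(1)}(M;\delta M)\simeq \R$, an isomorphism being induced by $f\,dx \mapsto \int_0^1 f$, whereas the closed Dirichlet $0$-forms are the constants vanishing at the endpoints, so that $H^0_{(1)}(M;\delta M)\simeq 0$. The pairing $(\alpha;\beta)\mapsto \int_M \alpha\wedge\beta$ thus identifies $H^0_\infty(M)$ with the dual of $H^1_{(1)}(M;\delta M)$, as predicted. \end{exa}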
\begin{rem}As explained in the introduction, Theorem \ref{thm_intro_poincare_dirichlet} and generalized Poincar\'e duality imply the de Rham theorem for Dirichlet $L^1$ cohomology (Corollary \ref{cor_dirichlet_de_rahm_intro}). In this section we assumed that $M$ is orientable. This is necessary to prove Lefschetz duality for $L^1$ cohomology (Theorem \ref{thm_Poincare duality_dirichlet}). Nevertheless, the de Rham theorem for $L^1$ cohomology could be proved directly (independently of Lefschetz duality), and then orientability is unnecessary. \end{rem} \section{On the $L^1$ Stokes' property}\label{sect_l1sp} Let $M\subset \R^n$ be a bounded orientable submanifold. The latter theorem raises a natural question: when do we have the $L^1$ Stokes' property on a subanalytic manifold? This amounts to asking when the Dirichlet $L^1$ forms and the $L^1$ forms coincide, not only as cohomology groups, but also as cochain complexes. The following theorem answers this question very explicitly: the $L^1$ Stokes' property holds for $j$-forms iff $\delta M$ is of dimension less than $(m-j-1)$. In particular, if a subanalytic compact set $X\subset \R^n$ has only isolated singularities, then the $L^1$ Stokes' property holds for any $L^1$ $j$-form on $X_{reg}$, $j<m-1$. Below we adopt the convention that $\dim \emptyset =-1$. \begin{thm} Let $j<m$. The $L^1$ Stokes' property holds for $j$-forms iff $\dim \delta M < m-j-1$. In this case, $L^1$ cohomology is naturally dual to $L^\infty$ cohomology in dimension $j$, i. e. the pairing: $$ H_{(1)}^{j}(M) \otimes H_\infty ^{m-j}(M) \to \R$$ $$(\alpha;\beta) \mapsto \int_{M}\alpha \wedge \beta $$ is (well defined and) nondegenerate. \end{thm} \begin{proof} We first focus on the `if' part. Write $X:=cl(M)$. As pointed out in section \ref{sect_Dirichlet_L^1-cohomology_groups.} (see (\ref{eq_l1_rho})), it is enough to show that for any $\omega \in \overline{\Omega}_{(1)}^j (M)$ we have $|\omega|_{1,\delta}=0$.
We shall prove by induction the following statements. {\bf$(\textrm{A}_k)$} Let $a<b$ be real numbers and let $k$ and $l$ be integers. Let $M$ be a bounded manifold with $\dim \delta M=k$. Set $\mathbb{D}:=[a;b]^l$. Write $\overline{\Omega}_{(1),X\times \mathbb{D}}^j (M \times \mathbb{D})$ for the weakly smooth forms $\omega$ on $M \times \mathbb{D}$, with compact support in $X\times \mathbb{D}$, such that $\omega$ and $\overline{d} \omega$ are continuous near almost every point of $M \times \partial \mathbb{D}$ and $L^1$ on $M \times \mathbb{D} $ and on $M \times \partial \mathbb{D} $. Let $\theta:X\to \R$ be a $C^2$ nonnegative function with $\theta^{-1}(0)=\delta M$. For $\omega \in \overline{\Omega}_{(1),X\times \mathbb{D}}^j (M \times \mathbb{D})$ and $\alpha \in \overline{\Omega}_{\infty}^{m-j+l-1} (M \times \mathbb{D})$ we have: $$\lim_{\nu \to 0} \int_{\{\theta=\nu\} \times \mathbb{D}} \omega \wedge \alpha = 0.$$ The ``if'' part of the theorem follows from the case where $l$ is zero. The product by $\mathbb{D}$ will be useful to perform the induction step. Note that the case where $\dim \delta M=-1$ is obvious, since in this case $\{\theta=\nu\}$ is empty for $\nu$ small enough. Fix $\omega$ and $\alpha$ as in {\bf$(\textrm{A}_k)$}, $k\geq 0$. It suffices to prove {\bf$(\textrm{A}_k)$} for the forms $\varphi_i\omega$, where $(\varphi_i)$ is a partition of unity. This means that we can work locally and assume that the support of $\omega$ is included in $B^n(x_0;\varepsilon)\times \mathbb{D}$, where $B^n(x_0;\varepsilon)$ is a small ball with $\varepsilon>0$ and $x_0\in X$. We adopt the same notations as in the proof of Proposition \ref{pro_Komega_satisfait_l1}, which we recall (see Fig. 1). Consider a $C^2$ nonnegative function $\rho_1 (x):N^\varepsilon\to \R$, zero on $\delta N^\varepsilon$ and positive on $N^\varepsilon$. 
Set $\rho_2=\rho_1 \circ h^{-1}$ (recall that $h$ is the local mapping provided by Theorem \ref{thm Lipschitz conic structure}) and denote by $\rho$ the Euclidean distance to $x_0$. For $\mu$ and $\nu$ positive real numbers, let $$M_{\mu,\nu}:= \{ x \in M^\varepsilon: \rho_2(x)\geq \mu,\; \rho(x) \geq \nu \}.$$ Then $M_{\mu,\nu}$ is a manifold with corners (for $\mu$ and $\nu$ generic) whose boundary is the union of the set $\{x\in N^\nu: \rho_2(x)\geq \mu\} $ with the set $$W_{\mu,\nu}=\{ x \in M^\varepsilon : \rho_2(x)=\mu, \; \rho(x)\geq \nu \}.$$ Denote by $Z_{\mu}$ the set $\{ x\in N^\varepsilon: \rho_2(x)=\mu\}$. We shall show that \begin{equation}\label{eq_lim_X} \lim_{\nu\to 0}\,\lim_{\mu \to 0} \int_{\partial M_{\mu,\nu}\times \mathbb{D}} \omega \wedge \alpha =0. \end{equation} Extend the mapping $h$ trivially to a mapping $h':N^\varepsilon \times [0;\varepsilon] \times \mathbb{D} \to M^\varepsilon \times \mathbb{D}$ and let $\omega':=h'^*(\omega)$ and $ \alpha':=h'^*(\alpha)$. Note that, as $h^{-1}( W_{\mu,\nu})=Z_\mu \times [\nu;\varepsilon]$: $$\lim_{\mu \to 0} \int_{W_{\mu,\nu}\times \mathbb{D} } \omega\wedge \alpha = \lim_{\mu \to 0} \int_{Z_{\mu}\times [\nu;\varepsilon]\times \mathbb{D} } \omega'\wedge \alpha',$$ which tends to zero thanks to the induction hypothesis (since $\dim \delta N^\varepsilon<k$). It thus remains to show that: \begin{equation}\label{eq_lim_Y} \lim_{\nu \to 0} \int_{N^\nu\times \mathbb{D}} \omega \wedge \alpha = 0. \end{equation} We shall again make use of the homotopy operator $\mathcal{K}$. We extend $\mathcal{K}$ to an operator on $\overline{\Omega}_{(1),X\times \mathbb{D}}^{j+l} (M^\varepsilon \times \mathbb{D})$, considering the extra variables in $\mathbb{D}$ as parameters (if a form $\omega(x;t)$ on $M^\varepsilon \times \mathbb{D}$ is $L^1$ then the form $\omega_t(x):=\omega (x;t)$ is $L^1$ on $M^\varepsilon$ for almost every $t\in\mathbb{D}$). 
For almost every $t$, $\mathcal{K} \omega_t$ is an $L^1$ form on $M^\varepsilon$. Moreover, by Remark \ref{rem_K}, the forms $\beta(x;t):=\mathcal{K} \omega_t(x)$ and $\beta'(x;t):= \mathcal{K} d\omega_t (x)$ are $L^1$ forms on $M^\varepsilon \times \mathbb{D}$. Then (\ref{eq_K_homot_operator}) continues to hold for $L^1$ forms with compact support in $X^\varepsilon \times \mathbb{D}$. This identity, namely $\omega_t = \mathcal{K} \overline{d}\omega_t + \overline{d}\mathcal{K} \omega_t$, entails that (\ref{eq_lim_Y}) splits into: \begin{equation}\label{eq_lim_YK}\lim_{\nu \to 0} \int_{N^\nu\times \mathbb{D}} \mathcal{K} \overline{d}\omega_t \wedge \alpha=0\end{equation} and \begin{equation}\label{eq_lim_YD}\lim_{\nu \to 0}\int_{N^\nu\times \mathbb{D}}\overline{d}\mathcal{K} \omega_t \wedge \alpha=0.\end{equation} By virtue of {\bf$(\textrm{A}_{k-1})$} the $L^1$ Stokes' property holds on $N^\nu\times \mathbb{D}$ and, integrating by parts, the latter equation may be rewritten as: \begin{equation}\label{eq_lim_YDbis}\lim_{\nu \to 0}\;[ \int_{N^\nu\times \mathbb{D}} \mathcal{K} \omega \wedge \overline{d}\alpha+ \int_{N^\nu\times \partial \mathbb{D}} \mathcal{K} \omega \wedge \alpha]\;=0.\end{equation} Observe that (\ref{eq_int_komega_link}) holds for $\omega_t$ and $\overline{d}\omega_t$ for almost every $t$, i.e. we have for almost every $t$ in $\mathbb{D}$: $$\lim_{\nu \to 0} \int_{N^\nu}|\mathcal{K} \omega_t|=\lim_{\nu \to 0} \int_{N^\nu}|\mathcal{K}\overline{d}\omega_t|=0.$$ Therefore, as $\alpha$ and $\overline{d} \alpha$ are $L^\infty$, (\ref{eq_lim_YK}) and (\ref{eq_lim_YD}) (via (\ref{eq_lim_YDbis})) both follow from the Lebesgue dominated convergence theorem. For the statement on Poincar\'e duality, observe now that the condition $\dim \delta M <m-j-1$ ensures that $(j-1)$-forms and $j$-forms satisfy the $L^1$ Stokes' property. Hence, $$H_{(1)} ^j (M) \simeq H_{(1)} ^j(M;\delta M)$$ and the statement follows from Theorem \ref{thm_Poincare duality_dirichlet}. 
It remains to prove that if the $L^1$ Stokes' property holds for all $j$-forms then $\dim \delta M < m-j-1$. Fix $j<m$. We shall establish the contrapositive. Let $k:=\dim \delta M$. Assume $k \geq m-j-1$ and take a regular point $x_0$ of $\delta M$. Up to a local diffeomorphism we may identify a neighborhood $W$ of $x_0$ in $\delta M$ with an open subset of $\R^{k}$ (which we will still denote by $W$). Also, thanks to subanalytic bi-Lipschitz triviality \cite{vlt}, there is a subanalytic bi-Lipschitz map $H$ sending a contractible neighborhood $U$ of $x_0$ in $X$ onto a product $W \times X'$, with $X'$ having only an isolated singularity. We can also assume that $H(M\cap U)$ is a product $W \times M'$. By Proposition \ref{prop_pullback_weakly smooth forms}, subanalytic bi-Lipschitz maps induce a one-to-one correspondence between weakly smooth forms and consequently, $M$ satisfies the $L^1$ Stokes' property if and only if so does $W \times M'$. Therefore it is enough to show the result on $ W\times M' $. Observe that $$H_{(1),X'}^{m-k} (M') \simeq 0,$$ while $H_{(1),X'}^{m-k} (M';\delta M \cap X')$ is nonzero (by Corollary \ref{cor_dirichlet_de_rahm_intro}). Consequently there must be a form $\omega \in \Omega_{(1),X'} ^{(m-1-k)} (M')$ which does not satisfy the $L^1$ Stokes' property. Define an $L^1$ $j$-form on $M$ by: $$\alpha:=\omega \wedge d x_1 \wedge \dots \wedge d x _{j-m+k+1},$$ where $d x_1, \dots ,d x_k$ is the canonical basis of $1$-forms on $W$ ($(j-m+k+1)$ is nonnegative by assumption). We claim that $\alpha$ does not satisfy the $L^1$ Stokes' property in $W \times X'$. We will exhibit a form $\beta \in \Omega_{\infty, W \times X'} ^{m-j-1} (W \times M')$ such that $l_\alpha (\beta)\neq 0$. 
For this purpose, recall that since the $L^1$ Stokes' property fails for $\omega$ on $M'$, there exists a form $\theta \in \overline{\Omega}_{\infty , X'} ^{0}(M')$ for which $l_\omega (\theta) \neq 0.$ Define a form on $W\times M'$ by: $$\theta ':=\theta \,d x _{j-m+k+2} \wedge \dots \wedge d x _{k}.$$ As $\theta '$ does not have compact support in $W \times X'$, we shall multiply it by a bump function. Let $\psi: W \to [0;1] $ be a smooth nonnegative compactly supported function which takes the value $1$ at $x_0$, and set $\beta:=\psi \theta '$. By Fubini, $$l_\alpha (\beta) =l_\omega (\theta)\int_{W} \psi(y)dy \neq 0,$$ as required. \end{proof} \begin{rem} The argument used in the above proof was essentially local. Therefore, if we replace $L^\infty$ by $L^\infty_{loc}$ and $L^1$ by $L^1$ with compact support in $X$, the theorem carries over to unbounded manifolds as well. \end{rem} \section{An example.} We end this paper with an example that illustrates all the results of this paper. Let $X$ be the suspension of the torus. This is the set consisting of two cones over a torus, attached along this torus. It is the most basic example on which Poincar\'e duality fails for singular homology but holds for intersection homology \cite{ih1}. Let $x_0$ and $x_1$ be the two isolated singular points. This set is a pseudomanifold. It has very simple singularities (metrically conical). However, the results of this paper show that if they were not conical (say cuspidal), this would not affect the cohomology groups, which only depend on the topology of the underlying singular space. This simple example is already enough to illustrate how the singularities affect Poincar\'e duality for $L^1$ cohomology. The different cohomology groups considered in this paper are gathered in the table below. 
\begin{center} \begin{tabular}{|l|c|c|c|c|} \hline {Cohomology groups}{$\qquad\qquad\quad j=$} & $0$& $1$& $2$ & $3$ \\ \hline $ I^t H^j (X)$ and $ H_\infty ^j (X_{reg})$ & $\R$ & $0$ & $\R^2$ & $\R$ \\ \hline $ I^0 H^j (X)$ and $ H_{(1)} ^j (X_{reg};X_{sing})$ & $\R$ &$\R^2$ & $0$ &$\R$ \\ \hline $H^j(X_{reg})$ and $H_{(1)}^j (X_{reg})$& $\R$ & $\R^2$ & $\R$ & $0$ \\ \hline \end{tabular} \end{center} All these results may be obtained from the isomorphisms given in the introduction and a triangulation. Below, we interpret them geometrically. Let $T \subset X$ be the original torus and let $\sigma$ and $\tau$ be the suspensions of the (supports of the) two generators of $H_1 (T)$. Write $\sigma^\varepsilon :=\{x \in |\sigma|: d(x;\{x_0,x_1\}) \leq \varepsilon \} $. If $\omega$ is an $L^\infty$ $2$-form, zero near the singular points, satisfying \begin{equation}\label{eq_ex_sigma}\int_\sigma \omega=1,\end{equation} and if $\omega=d \alpha$, then by Stokes' formula $$\int_{\partial \sigma^\varepsilon} \alpha=\int_{\sigma \setminus \sigma^\varepsilon} d\alpha = 1$$ for every small $\varepsilon$. As the length of $\partial \sigma^\varepsilon$ tends to zero, $\alpha$ cannot be bounded. Consequently, if $\omega$ is an $L^\infty$ closed $2$-form, zero near the singularities, satisfying (\ref{eq_ex_sigma}), it must represent a nontrivial class. In fact, every nontrivial class may be represented by a shadow form \cite{bgm}. However, the form $\alpha$ may be $L^1$. The only nontrivial $L^1$ class of $2$-forms is actually provided by those forms whose integral on $T$ is nonzero, but these forms obviously do not satisfy the $L^1$ Stokes' property (see (\ref{eq_l1_rho})). We see that the singularities induce a gap between $L^1$ and Dirichlet $L^1$ cohomology, making the $L^1$ Stokes' property fail. We also see that $L^\infty$ cohomology is dual to $L^1$ cohomology in dimensions $2$ and $3$ (as established by Theorem \ref{cor_poincare_duality_intro}). However, $H_{\infty} ^1 (X_{reg})$ is not isomorphic to $H_{(1)}^{2} (X_{reg})$. \end{document}
Tetsuji Shioda Tetsuji Shioda (塩田 徹治) is a Japanese mathematician who introduced Shioda modular surfaces and who used Mordell–Weil lattices to give examples of dense sphere packings. He was an invited speaker at the ICM in 1990.
Treating differentials as fractions Earlier in our tutorials, we discussed the treatment of differentials like $dx$ and $dy$, and whether you can simply manipulate the differentials as you would ordinary numbers. In a rigorous sense, separating differentials should not be done until you have developed the theory of how to do it. This leads to a whole new theory called non-standard analysis. But let's not do that. Let's focus on where things go wrong in a practical sense. At least for the physicists, you may not care that it's not rigorous mathematics as long as it gives the right answer. So let's see where it gives the wrong answer. Definitions of limits It's obvious that in a rigorous sense, you can't! The definition of the limit is \[ y'(x) = \lim_{\delta x\to 0} \frac{y(x + \delta x) - y(x)}{\delta x} \equiv \frac{dy}{dx}. \] However, because both numerator and denominator are tending to zero, you would have to identify $dx = 0$ and $dy = 0$. So a statement like \begin{equation} \label{dy} dy = y'(x) dx \end{equation} is vacuous because it is expressing $0 = 0$. The nature of $y'(x)$ is that it is a very finely tuned quantity which corresponds to both numerator and denominator tending to zero in a very particular way. So $dx$ and $dy$ are intimately linked: one cannot exist without the other. But it works! But then again, "it works!" many students will claim. For example, if $y = y(x)$ is our original curve, and we now parameterise $x = x(t)$, then by the chain rule, \[ \frac{dy}{dt} = \frac{dx}{dt}\frac{dy}{dx} \] and we point to the above formula and say "see? it works!". It works here, too! We might also make an example of the rule for inverting functions. If $y = f(x)$ and we invert this using $x = f^{-1}(y)$, then the derivative of the inverse function is given by \[ \frac{df^{-1}(y)}{dy} = \frac{dx}{dy} = \frac{1}{\frac{dy}{dx}}. \] See? They're basically fractions! 
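And to be fair, a quick numerical check backs this up. Here is a small Python sketch (the function $f(x) = x^3$ is an arbitrary choice of mine, not from the examples above) confirming the inverse-function rule by finite differences:

```python
# Numerically check the inverse-function rule d/dy f^{-1}(y) = 1 / f'(x)
# for the arbitrarily chosen function f(x) = x^3 at x = 2.
def f(x):
    return x ** 3

def finv(y):
    return y ** (1.0 / 3.0)

h = 1e-6
x = 2.0
y = f(x)

# central differences for f'(x) and (f^{-1})'(y)
dfdx = (f(x + h) - f(x - h)) / (2 * h)            # f'(2) = 12
dfinv_dy = (finv(y + h) - finv(y - h)) / (2 * h)  # 1/12

print(dfinv_dy, 1 / dfdx)  # both approximately 0.0833
```

Of course, agreement in one example is not a proof; it just shows why the fraction picture is so seductive.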
It works here, too (again) Another example is separation of variables when solving differential equations. If we have \[ \frac{dy}{dx} = \frac{1}{y}, \] then we simply manipulate the differentials \[ y \ dy = dx, \] integrate both sides \[ \int y \, dy = \int dx \] and voila: \[ \tfrac{1}{2} y^2 = x + C. \] It works, see? When stuff hits the fan Blindly treating differentials like fractions works well enough when you're in first year and working with functions of a single variable. But treating $dy/dx$ like a fraction is a gateway drug to treating $\partial y/\partial x$ like a fraction. Here's an obvious example. Let $F(x,y) = 0$ be an equation that defines $y$ implicitly. For example, suppose that \[ F(x,y) = x + y. \] Then, blindly treating the differentials as fractions, we have \[ \frac{dy}{dx} = \frac{{\partial F}}{\partial x}\frac{\partial y}{\partial F} = \frac{\frac{\partial F}{\partial x}}{\frac{\partial F}{\partial y}} = \frac{1}{1} = 1. \] But obviously, $y = -x$ and so $dy/dx = -1$. In fact, you can show that the correct formula is: \[ \frac{dy}{dx} = -\frac{\frac{\partial F}{\partial x}}{\frac{\partial F}{\partial y}} \] so treating differentials like fractions is problematic even for the simplest of problems. Why it doesn't work The reason it works in 1D is that there is only one object being varied (the $dx$) and one object whose variation you are concerned about ($dy$). In 2D, for example, in the case of $F(x,y)$, the variation of $F$ depends on how we choose to vary both $x$ and $y$. So while it makes sense to think of $dy/dx$ as dividing by a number, $dx$, it doesn't make sense to think of derivatives of $F$ as obtained by dividing by a vector $[dx, dy]$. You will learn that there is a difference between $\partial F/\partial x$, which is the variation of $F$ as $x$ changes but $y$ is fixed, and the total variation of $F$ with respect to $x$, $dF/dx$. 
The latter is \begin{equation} \label{dFdx} \frac{dF}{dx} = \frac{dx}{dx} \frac{\partial F}{\partial x} + \frac{dy}{dx} \frac{\partial F}{\partial y}. \end{equation} The difference is that by considering $y = y(x)$, when we vary $x$ we must also take into account the change in $y$. Thus $F$ changes in response to variations of both $x$ and $y$, but there is only a single quantity being varied ($x$). If we are interested in the curve $F(x,y) = 0$, then we can set \eqref{dFdx} to zero, and this gives the proper value of $dy/dx$. If you fall into the trap of constantly thinking of differentials as equivalent to ordinary numbers, then what is the difference between $dx$, $\Delta x$, $\delta x$, and $\partial x$? Similarly, what is the difference between $dF$, $\Delta F$, $\delta F$, and $\partial F$? Is $\delta x = dx$? Is $\partial x = dx$? If it's not true, then do we say that $dx > \partial x$ or $dx < \partial x$? These are all very silly questions because the infinitesimals are ill-defined by themselves and should not be thought of as real numbers. Treating differentials like fractions is a gateway drug to further misunderstanding. And in more than one dimension, it's just plain wrong.
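To see that the corrected implicit-differentiation formula really is the right one, here is a small numerical sketch. The circle $F(x,y) = x^2 + y^2 - 25$ is my own arbitrary example, not one from above:

```python
# Check dy/dx = -(dF/dx)/(dF/dy) on an implicitly defined curve.
# Near (3, 4), F(x, y) = x^2 + y^2 - 25 = 0 defines y(x) = sqrt(25 - x^2).
def F(x, y):
    return x ** 2 + y ** 2 - 25

def y_of_x(x):
    return (25 - x ** 2) ** 0.5

h = 1e-6
x0, y0 = 3.0, 4.0

# partial derivatives of F by central differences
Fx = (F(x0 + h, y0) - F(x0 - h, y0)) / (2 * h)      # dF/dx = 2*x0 = 6
Fy = (F(x0, y0 + h) - F(x0, y0 - h)) / (2 * h)      # dF/dy = 2*y0 = 8

# slope of the curve itself, again by central differences
dydx = (y_of_x(x0 + h) - y_of_x(x0 - h)) / (2 * h)

print(dydx, -Fx / Fy)  # both approximately -0.75
```

The naive fraction manipulation would have predicted $+F_x/F_y = +0.75$, with the wrong sign.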
What are NTC Thermistors? NTC stands for "Negative Temperature Coefficient". NTC thermistors are resistors with a negative temperature coefficient, which means that the resistance decreases with increasing temperature. They are primarily used as resistive temperature sensors and current-limiting devices. The temperature sensitivity coefficient is about five times greater than that of silicon temperature sensors (silistors) and about ten times greater than that of resistance temperature detectors (RTDs). NTC sensors are typically used in a range from −55 to +200 °C. The non-linearity of the relationship between resistance and temperature exhibited by NTC resistors posed a great challenge when using analog circuits to accurately measure temperature. However, the rapid development of digital circuits solved that problem by enabling computation of precise values by interpolating lookup tables or by solving equations which approximate a typical NTC curve. NTC Thermistor Definition An NTC thermistor is a thermally sensitive resistor whose resistance exhibits a large, precise and predictable decrease as the core temperature of the resistor increases over the operating temperature range. Characteristics of NTC Thermistors Unlike RTDs (Resistance Temperature Detectors), which are made from metals, NTC thermistors are generally made of ceramics or polymers. Different materials used in the manufacture of NTC thermistors result in different temperature responses, as well as other different performance characteristics. Temperature response Most NTC thermistors are suitable for use within a temperature range between −55 and 200 °C, where they give their most precise readings. There are special families of NTC thermistors that can be used at temperatures approaching absolute zero (−273.15 °C) as well as those specifically designed for use above 150 °C. 
The temperature sensitivity of an NTC sensor is expressed as "percentage change per degree C" or "percentage change per degree K". Depending on the materials used and the specifics of the production process, typical values of the temperature sensitivity range from −3% to −6% per °C. Characteristic NTC curve As can be seen from the figure, NTC thermistors have a much steeper resistance-temperature slope than platinum alloy RTDs, which translates to better temperature sensitivity. Even so, RTDs remain the most accurate sensors, with an accuracy of ±0.5 % of the measured temperature, and they are useful in the temperature range between −200 and 800 °C, a much wider range than that of NTC temperature sensors. Comparison to other temperature sensors Compared to RTDs, NTC thermistors have a smaller size, faster response, and greater resistance to shock and vibration, at a lower cost. They are slightly less precise than RTDs. The precision of NTC thermistors is similar to that of thermocouples. However, thermocouples can withstand very high temperatures (on the order of 600 °C) and are used in those applications instead of NTC thermistors. Even so, NTC thermistors provide greater sensitivity, stability and accuracy than thermocouples at lower temperatures, and are used with less additional circuitry and therefore at a lower total cost. The cost is additionally lowered by the lack of need for signal conditioning circuits (amplifiers, level translators, etc.) that are often needed when dealing with RTDs and always needed for thermocouples. Self-heating effect The self-heating effect is a phenomenon that takes place whenever there is a current flowing through the NTC thermistor. Since the thermistor is basically a resistor, it dissipates power as heat when there is a current flowing through it. This heat is generated in the thermistor core and affects the precision of the measurements. 
The extent to which this happens depends on the amount of current flowing, the environment (whether it is a liquid or a gas, whether there is any flow over the NTC sensor, and so on), the temperature coefficient of the thermistor, the thermistor's total area, and so on. The fact that the resistance of the NTC sensor, and therefore the current through it, depends on the environment is often used in liquid presence detectors such as those found in storage tanks. The heat capacity represents the amount of heat required to increase the temperature of the thermistor by 1 °C and is usually expressed in mJ/°C. Knowing the precise heat capacity is of great importance when using an NTC thermistor sensor as an inrush-current limiting device, as it defines the response speed of the NTC temp sensor. Curve Selection and Calculation The thermistor selection process must take into account the thermistor's dissipation constant, thermal time constant, resistance value, resistance-temperature curve and tolerances, to mention the most important factors. Since the relationship between resistance and temperature (the R-T curve) is highly nonlinear, certain approximations have to be utilized in practical system designs. First-order approximation One approximation, and the simplest to use, is the first-order approximation, which states that: $$\Delta R = k · \Delta T$$ Where k is the negative temperature coefficient, ΔT is the temperature difference, and ΔR is the resistance change resulting from the change in temperature. This first-order approximation is only valid for a very narrow temperature range, and can only be used at temperatures where k is nearly constant throughout the whole range. Beta formula Another equation gives satisfactory results, being accurate to ±1 °C over the range of 0 to +100 °C. It depends on a single material constant β which can be obtained by measurements. 
The equation can be written as: $$R(T) = R(T_0) · e^{\beta (\frac{1}{T} - \frac{1}{T_0})}$$ Where R(T) is the resistance at the temperature T in Kelvin, R(T0) is a reference point at temperature T0. The Beta formula requires a two-point calibration, and it is typically not more accurate than ±5 °C over the complete useful range of the NTC thermistor. Steinhart-Hart equation The best approximation known to date is the Steinhart-Hart formula, published in 1968: $$\frac{1}{T} = A + B · ln(R) + C · (ln(R))^3$$ Where ln R is the natural logarithm of the resistance at temperature T in Kelvin, and A, B and C are coefficients derived from experimental measurements. These coefficients are usually published by thermistor vendors as part of the datasheet. The Steinhart-Hart formula is typically accurate to around ±0.15 °C over the range of -50 to +150 °C, which is plenty for most applications. If superior accuracy is required, the temperature range must be reduced and accuracy of better than ±0.01 °C over the range of 0 to +100 °C is achievable. Choosing the right approximation The choice of the formula used to derive the temperature from the resistance measurement needs to be based on available computing power, as well as actual tolerance requirements. In some applications, a first-order approximation is more than enough, while in others not even the Steinhart-Hart equation fulfills the requirements, and the thermistor has to be calibrated point by point, making a large number of measurements and creating a lookup table. Construction and Properties of NTC Thermistors Materials typically involved in the fabrication of NTC resistors are platinum, nickel, cobalt, iron and oxides of silicon, used as pure elements or as ceramics and polymers. NTC thermistors can be classified into three groups, depending on the production process used. Bead thermistors These NTC thermistors are made from platinum alloy lead wires directly sintered into the ceramic body. 
They generally offer fast response times, better stability and allow operation at higher temperatures than Disk and Chip NTC sensors, however they are more fragile. It is common to seal them in glass, to protect them from mechanical damage during assembly and to improve their measurement stability. The typical sizes range from 0.075 – 5 mm in diameter. Disk and Chip thermistors These NTC thermistors have metalized surface contacts. They are larger and, as a result, have slower reaction times than bead type NTC resistors. However, because of their size, they have a higher dissipation constant (power required to raise their temperature by 1 °C). Since the power dissipated by the thermistor is proportional to the square of the current, they can handle higher currents much better than bead type thermistors. Disk type thermistors are made by pressing a blend of oxide powders into a round die and then sintering at high temperatures. Chips are usually fabricated by a tape-casting process where a slurry of material is spread out as a thick film, dried and cut into shape. The typical sizes range from 0.25 to 25 mm in diameter. Glass encapsulated NTC thermistors These are NTC temperature sensors sealed in an airtight glass bubble. They are designed for use with temperatures above 150 °C, or for printed circuit board mounting, where ruggedness is a must. Encapsulating a thermistor in glass improves the stability of the sensor and protects the sensor from the environment. They are made by hermetically sealing bead type NTC resistors into a glass container. The typical sizes range from 0.4 to 10 mm in diameter. NTC thermistors are used in a broad spectrum of applications. They are used to measure temperature, control temperature, and compensate for temperature. They can also be used to detect the absence or presence of a liquid, as current limiting devices in power supply circuits, for temperature monitoring in automotive applications, and in many more applications. 
NTC sensors can be divided into three groups, depending on the electrical characteristic exploited in an application. Resistance-temperature characteristic Applications based on the resistance-temperature characteristic include temperature measurement, control, and compensation. These also include situations in which an NTC thermistor is used so that the temperature of the NTC temp sensor is related to some other physical phenomenon. This group of applications requires that the thermistor operate in a zero-power condition, meaning that the current through it is kept as low as possible, to avoid heating the probe. Current-time characteristic Applications based on the current-time characteristic include time delay, inrush-current limiting, surge suppression, and many more. These characteristics are related to the heat capacity and dissipation constant of the NTC thermistor used. The circuit usually relies on the NTC thermistor heating up due to the current passing through it. At one point it will trigger some sort of change in the circuit, depending on the application in which it is used. Voltage-current characteristic Applications based on the voltage-current characteristic of a thermistor generally involve changes in the environmental conditions or circuit variations which result in changes in the operating point on a given curve in the circuit. Depending on the application, this can be used for current limiting, temperature compensation or temperature measurement. NTC Thermistor Symbol The following symbol is used for a negative temperature coefficient thermistor, according to the IEC standard. NTC thermistor (IEC standard)
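To make the approximations discussed above concrete, here is a short Python sketch that converts a resistance reading to temperature with both the Beta formula and the Steinhart-Hart equation. The parameter values below are purely illustrative (typical textbook-style numbers for two different hypothetical sensors, not taken from any particular datasheet):

```python
import math

# Illustrative parameters, not from any specific datasheet:
R0, T0, BETA = 100_000.0, 298.15, 3950.0                  # Beta formula (100k part)
A, B, C = 1.009249522e-3, 2.378405444e-4, 2.019202697e-7  # Steinhart-Hart (10k part)

def beta_temperature(r, r0=R0, t0=T0, beta=BETA):
    """Invert R(T) = R0 * exp(beta * (1/T - 1/T0)); returns kelvin."""
    return 1.0 / (1.0 / t0 + math.log(r / r0) / beta)

def steinhart_hart(r, a=A, b=B, c=C):
    """1/T = A + B*ln(R) + C*(ln(R))^3; returns kelvin."""
    ln_r = math.log(r)
    return 1.0 / (a + b * ln_r + c * ln_r ** 3)

# At R = R0 the Beta formula returns the reference temperature,
# and both models agree that lower resistance means higher temperature.
print(beta_temperature(R0))      # 298.15 K (25 degrees C)
print(steinhart_hart(10_000.0))  # roughly room temperature, in kelvin
```

In practice the coefficients A, B and C come from the vendor's datasheet or from a three-point calibration, as discussed above.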
Ruth I. Michler Memorial Prize The Ruth I. Michler Memorial Prize is an annual prize in mathematics, awarded by the Association for Women in Mathematics to honor outstanding research by a female mathematician who has recently earned tenure. The prize funds the winner to spend a semester as a visiting faculty member at Cornell University, working with the faculty there and presenting a distinguished lecture on their research.[1][2] It is named after Ruth I. Michler (1967–2000), a German-American mathematician born at Cornell, who died young in a construction accident.[3] The award was first offered in 2007. Its winners and their lectures have included:[1][2][4] • Rebecca Goldin (2007), "The Geometry of Polygons" • Irina Mitrea (2008), "Boundary-Value Problems for Higher-Order Elliptic Operators" • Maria Gordina (2009), "Lie's Third Theorem in Infinite Dimensions" • Patricia Hersh (2010), "Regular CS Complexes, Total Positivity and Bruhat Order" • Anna Mazzucato (2011), "The Analysis of Incompressible Fluids at High Reynolds Numbers" • Ling Long (2012), "Atkin and Swinnerton-Dyer Congruences" • Megumi Harada (2013), "Newton-Okounkov bodies and integrable systems" • Sema Salur (2014), "Manifolds with G2 structure and beyond" • Malabika Pramanik (2015), "Needles, Bushes, Hairbrushes, and Polynomials" • Pallavi Dani (2016), "Large-scale geometry of right-angled Coxeter groups" • Julia Gordon (2017), "Wilkie's theorem and (ineffective) uniform bounds" • Julie Bergner (2018), "2-Segal structures and the Waldhausen S-construction" • Anna Skripka (2019), "Untangling noncommutativity with operator integrals" • Shabnam Akhtari (2021), "Representation of integers by binary forms" • Emily E. Witt (2022), "Local cohomology: An algebraic tool capturing geometric data" See also • List of awards honoring women • List of mathematics awards References 1. Ruth I. Michler Memorial Prizes, Association for Women in Mathematics, retrieved 2019-10-26 2. 
The Ruth I Michler Memorial Prize of the AWM, MacTutor History of Mathematics Archive, retrieved 2019-10-26 3. O'Connor, John J.; Robertson, Edmund F., "Ruth Ingrid Michler", MacTutor History of Mathematics Archive, University of St Andrews 4. Michler Lecture Series, Cornell University, retrieved 2019-10-26. See also Department of Mathematics Michler Lecture Series - Julie Bergner, The University of Virginia. Talk Title: 2-Segal structures and the Waldhausen S-construction, Cornell University, retrieved 2019-10-26
Let $a$ and $b$ be the roots of $x^2 - 4x + 5 = 0.$ Compute \[a^3 + a^4 b^2 + a^2 b^4 + b^3.\] By Vieta's formulas, $a + b = 4$ and $ab = 5.$ Then \begin{align*} a^3 + b^3 &= (a + b)(a^2 - ab + b^2) \\ &= (a + b)(a^2 + 2ab + b^2 - 3ab) \\ &= (a + b)((a + b)^2 - 3ab) \\ &= 4 \cdot (4^2 - 3 \cdot 5) \\ &= 4, \end{align*} and \begin{align*} a^4 b^2 + a^2 b^4 &= a^2 b^2 (a^2 + b^2) \\ &= (ab)^2 ((a + b)^2 - 2ab) \\ &= 5^2 (4^2 - 2 \cdot 5) \\ &= 150, \end{align*} so $a^3 + a^4 b^2 + a^2 b^4 + b^3 = \boxed{154}.$
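As a quick numerical sanity check (not part of the algebraic solution above), the quadratic formula gives the roots of $x^2 - 4x + 5 = 0$ as $2 \pm i$, and the expression can be evaluated directly:

```python
# The roots of x^2 - 4x + 5 = 0 are 2 + i and 2 - i.
a = complex(2, 1)
b = complex(2, -1)

value = a**3 + a**4 * b**2 + a**2 * b**4 + b**3
print(value)  # (154+0j)
```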
Cleve's Corner Cleve Moler is the author of the first MATLAB, one of the founders of MathWorks, and is currently Chief Mathematician at the company. He is the author of two books about MATLAB that are available online. He writes here about MATLAB, scientific computing and interesting mathematics. Splines and Pchips Posted by Cleve Moler, July 16, 2012 MATLAB has two different functions for piecewise cubic interpolation, spline and pchip. Why are there two? How do they compare? plip The PCHIP Family sppchip spline vs. pchip interp1 Here is the data that I will use in this post. x = 1:6 y = [16 18 21 17 15 12] 1 2 3 4 5 6 16 18 21 17 15 12 Here is a plot of the data. set(0,'defaultlinelinewidth',2) plot(x,y,'-o') axis([0 7 7.5 25.5]) title('plip') With line type '-o', the MATLAB plot command plots six 'o's at the six data points and draws straight lines between the points. So I added the title plip because this is a graph of the piecewise linear interpolating polynomial. There is a different linear function between each pair of points. 
Since we want the function to go through the data points, that is interpolate the data, and since two points determine a line, the plip function is unique. A PCHIP, a Piecewise Cubic Hermite Interpolating Polynomial, is any piecewise cubic polynomial that interpolates the given data, AND has specified derivatives at the interpolation points. Just as two points determine a linear function, two points and two given slopes determine a cubic. The data points are known as "knots". We have the y-values at the knots, so in order to get a particular PCHIP, we have to somehow specify the values of the derivative, y', at the knots. Consider these two cubic polynomials in $x$ on the interval $1 \le x \le 2$. These functions are formed by adding cubic terms that vanish at the end points to the linear interpolant. I'll tell you later where the coefficients of the cubics come from. $$ s(x) = 16 + 2(x-1) + \textstyle{\frac{49}{18}}(x-1)^2(x-2) - \textstyle{\frac{89}{18}}(x-1)(x-2)^2 $$ $$ p(x) = 16 + 2(x-1) + \textstyle{\frac{2}{5}}(x-1)^2(x-2) - \textstyle{\frac{1}{2}}(x-1)(x-2)^2 $$ These functions interpolate the same values at the ends. $$ s(1) = 16, \ \ \ s(2) = 18 $$ $$ p(1) = 16, \ \ \ p(2) = 18 $$ But they have different first derivatives at the ends. In particular, $s'(1)$ is negative and $p'(1)$ is positive. $$ s'(1) = - \textstyle{\frac{53}{18}}, \ s'(2) = \textstyle{\frac{85}{18}} $$ $$ p'(1) = \textstyle{\frac{3}{2}}, \ \ \ p'(2) = \textstyle{\frac{12}{5}} $$ Here's a plot of these two cubic polynomials. The magenta cubic, which is $p(x)$, just climbs steadily from its initial value to its final value. On the other hand, the cyan cubic, which is $s(x)$, starts off heading in the wrong direction, then has to hurry to catch up.
x = 1:1/64:2;
s = 16 + 2*(x-1) + (49/18)*(x-1).^2.*(x-2) - (89/18)*(x-1).*(x-2).^2;
p = 16 + 2*(x-1) + (2/5)*(x-1).^2.*(x-2) - (1/2)*(x-1).*(x-2).^2;
axis([0 3 15 19])
box on
line(x,s,'color',[0 3/4 3/4])
line(x,p,'color',[3/4 0 3/4])
line(x(1),s(1),'marker','o','color',[0 0 3/4])
line(x(end),s(end),'marker','o','color',[0 0 3/4])

If we piece together enough cubics like these to produce a piecewise cubic that interpolates many data points, we have a PCHIP. We could even mix colors and still have a PCHIP. Clearly, we have to be specific when it comes to specifying the slopes. One possibility that might occur to you briefly is to use the slopes of the lines connecting the end points of each segment. But this choice just produces zeros for the coefficients of the cubics and leads back to the piecewise linear interpolant. After all, a linear function is a degenerate cubic. This illustrates the fact that the PCHIP family includes many functions. By far, the most famous member of the PCHIP family is the piecewise cubic spline. All PCHIPs are continuous and have a continuous first derivative. A spline is a PCHIP that is exceptionally smooth, in the sense that its second derivative, and consequently its curvature, also varies continuously. The function derives its name from the flexible wood or plastic strip used to draw smooth curves. Starting about 50 years ago, Carl de Boor developed much of the basic theory of splines. He wrote a widely adopted package of Fortran software, and a widely cited book, for computations involving splines. Later, Carl authored the MATLAB Spline Toolbox. Today, the Spline Toolbox is part of the Curve Fitting Toolbox. When Carl began the development of splines, he was with General Motors Research in Michigan. GM was just starting to use numerically controlled machine tools. It is essential that automobile parts have smooth edges and surfaces.
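The endpoint values and slopes quoted for the two cubics can be verified symbolically. A sketch in Python with SymPy (not part of the original post), using exact rational coefficients:

```python
from sympy import Rational, diff, symbols

x = symbols('x')

# The two cubics from the post, with exact rational coefficients.
s = 16 + 2*(x - 1) + Rational(49, 18)*(x - 1)**2*(x - 2) \
    - Rational(89, 18)*(x - 1)*(x - 2)**2
p = 16 + 2*(x - 1) + Rational(2, 5)*(x - 1)**2*(x - 2) \
    - Rational(1, 2)*(x - 1)*(x - 2)**2

# Both interpolate the same end values ...
assert s.subs(x, 1) == 16 and s.subs(x, 2) == 18
assert p.subs(x, 1) == 16 and p.subs(x, 2) == 18

# ... but have different end slopes.
assert diff(s, x).subs(x, 1) == Rational(-53, 18)
assert diff(s, x).subs(x, 2) == Rational(85, 18)
assert diff(p, x).subs(x, 1) == Rational(3, 2)
assert diff(p, x).subs(x, 2) == Rational(12, 5)
```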
If the hood of a car, say, does not have continuously varying curvature, you can see wrinkles in the reflections in the showroom. In the automobile industry, a discontinuous second derivative is known as a "dent". The requirement of a continuous second derivative leads to a set of simultaneous linear equations relating the slopes at the interior knots. The two end points need special treatment, and the default treatment has changed over the years. We now choose the coefficients so that the third derivative does not have a jump at the first and last interior knots. Single cubic pieces interpolate the first three, and the last three, data points. This is known as the "not-a-knot" condition. It adds two more equations to the set of equations at the interior points. If there are $n$ knots, this gives a well-conditioned, almost symmetric, tridiagonal $n$-by-$n$ linear system to solve for the slopes. The system can be solved by the sparse backslash operator in MATLAB, or by a custom, non-pivoting tridiagonal solver. (Other end conditions for splines are available in the Curve Fitting Toolbox.) As you probably realized, the cyan function $s(x)$ introduced above is one piece of the spline interpolating our sample data. Here is a graph of the entire function, produced by interpgui from NCM, Numerical Computing with MATLAB.

x = 1:6;
y = [16 18 21 17 15 12];
interpgui(x,y,3)

I just made up that name, sppchip. It stands for shape preserving piecewise cubic Hermite interpolating polynomial. The actual name of the MATLAB function is just pchip. This function is not as smooth as spline. There may well be jumps in the second derivative. Instead, the function is designed so that it never locally overshoots the data. The slope at each interior point is taken to be a weighted harmonic mean of the slopes of the piecewise linear interpolant. One-sided slope conditions are imposed at the two end points. The pchip slopes can be computed without solving a linear system.
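For readers outside MATLAB, the same not-a-knot construction is available in SciPy's `CubicSpline` (not-a-knot is its default boundary condition); a sketch reproducing the slope $s'(1) = -53/18$ of the cyan cubic on our sample data:

```python
import numpy as np
from scipy.interpolate import CubicSpline

x = np.arange(1, 7)
y = np.array([16, 18, 21, 17, 15, 12])

# bc_type='not-a-knot' is the default, matching MATLAB's spline.
cs = CubicSpline(x, y, bc_type='not-a-knot')

# The spline interpolates the data ...
assert np.allclose(cs(x), y)

# ... and its slope at the left end matches s'(1) = -53/18.
# cs(x, 1) evaluates the first derivative.
assert np.isclose(cs(1.0, 1), -53/18)
```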
pchip was originally developed by Fred Fritsch and his colleagues at Lawrence Livermore Laboratory around 1980. They described it as "visually pleasing". Dave Kahaner, Steve Nash and I included some of Fred's Fortran subroutines in our 1989 book, Numerical Methods and Software. We made pchip part of MATLAB in the early '90s. Here is a comparison of spline and pchip on our data. In this case the spline overshoot on the first subinterval is caused by the not-a-knot end condition. But with more data points, or rapidly varying data points, interior overshoots are possible with spline. interpgui(x,y,3:4) Here are eight subplots comparing spline and pchip on a slightly larger data set. The first two plots show the functions $s(x)$ and $p(x)$. The difference between the functions on the interior intervals is barely noticeable. The next two plots show the first derivatives. You can see that the first derivative of spline, $s'(x)$, is smooth, while the first derivative of pchip, $p'(x)$, is continuous, but shows "kinks". The third pair of plots are the second derivatives. The spline second derivative $s''(x)$ is continuous, while the pchip second derivative $p''(x)$ has jumps at the knots. The final pair are the third derivatives. Because both functions are piecewise cubics, their third derivatives, $s'''(x)$ and $p'''(x)$, are piecewise constant. The fact that $s'''(x)$ takes on the same values in the first two intervals and the last two intervals reflects the "not-a-knot" spline end conditions. splinevspchip pchip is local. The behavior of pchip on a particular subinterval is determined by only four points, the two data points on either side of that interval. pchip is unaware of the data farther away. spline is global. The behavior of spline on a particular subinterval is determined by all of the data, although the sensitivity to data far away is less than to nearby data. Both behaviors have their advantages and disadvantages. Here is the response to a unit impulse. 
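SciPy's `PchipInterpolator` implements the same kind of slope choices described above (weighted harmonic means in the interior, one-sided estimates at the ends); assuming it matches MATLAB's formulas, a sketch recovers the slopes $p'(1) = 3/2$ and $p'(2) = 12/5$ quoted earlier and illustrates the no-overshoot property:

```python
import numpy as np
from scipy.interpolate import PchipInterpolator

x = np.arange(1, 7)
y = np.array([16, 18, 21, 17, 15, 12])

pch = PchipInterpolator(x, y)
dp = pch.derivative()

# One-sided end slope and harmonic-mean interior slope from the post.
assert np.isclose(dp(1.0), 3/2)
assert np.isclose(dp(2.0), 12/5)

# pchip never overshoots: on [2, 3] it stays within [y(2), y(3)] = [18, 21].
t = np.linspace(2, 3, 101)
assert pch(t).min() >= 18 - 1e-12 and pch(t).max() <= 21 + 1e-12
```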
You can see that the support of pchip is confined to the two intervals surrounding the impulse, while the support of spline extends over the entire domain. (There is an elegant set of basis functions for cubic splines known as B-splines that do have compact support.)

y = zeros(1,8);
y(4) = 1;

The interp1 function in MATLAB has several method options. The 'linear', 'spline', and 'pchip' options are the same interpolants we have been discussing here. We decided years ago to make the 'cubic' option the same as 'pchip' because we thought the monotonicity property of pchip was generally more desirable than the smoothness property of spline. The 'v5cubic' option is yet another member of the PCHIP family, which has been retained for compatibility with version 5 of MATLAB. It requires the x's to be equally spaced. The slope of the v5 cubic at point $x_n$ is $(y_{n+1} - y_{n-1})/2$. The resulting piecewise cubic does not have a continuous second derivative and it does not always preserve shape. Because the abscissas are equally spaced, the v5 cubic can be evaluated quickly by a convolution operation. Here is our example data, modified slightly to exaggerate behavior, and interpgui modified to include the 'v5cubic' option of interp1. The v5 cubic is the black curve between spline and pchip.

interpgui_with_v5cubic(x,y,3:5)

An extensive collection of tools for curve and surface fitting, by splines and many other functions, is available in the Curve Fitting Toolbox.

doc curvefit

"NCM", Numerical Computing with MATLAB, has more mathematical details. NCM is available online. Here is the interpolation chapter. Here is interpgui. SIAM publishes a print edition. Here are the script splinevspchip.m and the modified version of interpgui, interpgui_with_v5cubic.m, that I used in this post.
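The v5 cubic's interior slopes $(y_{n+1} - y_{n-1})/2$ are a central difference, which is why they can be produced by a single convolution pass when the x's are equally spaced. A small illustrative sketch (not MATLAB's actual implementation):

```python
import numpy as np

y = np.array([16.0, 18, 21, 17, 15, 12])

# Central-difference slopes (y_{n+1} - y_{n-1})/2 at the interior points,
# computed as a convolution -- valid because the x's are equally spaced.
slopes = np.convolve(y, [0.5, 0, -0.5], mode='valid')

assert np.allclose(slopes, [2.5, -0.5, -3.0, -2.5])
```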
Published with MATLAB® 7.14
Find the sum of the slope and $y$-intercept of the line through the points $(7,8)$ and $(9,0)$. The slope of the line through $(7,8)$ and $(9,0)$ is $\frac{8-0}{7-9}=\frac{8}{-2}=-4$. Thus, the line has equation $y=-4x+b$ for some $b$. Since $(9,0)$ lies on this line, we have $0=-4(9)+b$, so $b=36$. Hence, the equation of the line is $y=-4x+36$, and the desired sum is $-4+36=\boxed{32}$.
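A quick check of the arithmetic, as a sketch in plain Python:

```python
# Line through (7, 8) and (9, 0).
x1, y1, x2, y2 = 7, 8, 9, 0

slope = (y2 - y1) / (x2 - x1)   # -8 / 2 = -4
intercept = y1 - slope * x1     # 8 - (-4)(7) = 36

assert slope == -4 and intercept == 36
assert slope + intercept == 32
```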
The crown-root morphology of central incisors in different skeletal malocclusions assessed with cone-beam computed tomography Xiao-ming Wang1, Ling-zhi Ma2, Jing Wang3 & Hui Xue4 To determine the discrepancy of crown-root morphology of central incisors among different types of skeletal malocclusion using cone-beam computed tomography (CBCT) and to provide guidance for proper torque expression of anterior teeth and prevention of alveolar fenestration and dehiscence. In this retrospective study, a total of 108 CBCT images were obtained (patient ages ranging from 18.0 to 30.0 years, mean age 25.8 years). Patients were grouped according to routine sagittal and vertical skeletal malocclusion classification criteria. The patients in the sagittal groups were all average vertical patterns: Class I comprised 24 patients (14 females and 10 males); Class II comprised 20 patients (13 females and 7 males); and Class III comprised 22 subjects (13 females and 9 males). The patients in the vertical groups were all skeletal Class I malocclusions: low angle comprised 21 patients (12 females and 9 males); average angle comprised 24 patients; and high angle comprised 21 patients (11 females and 10 males). All the CBCT data were imported into Invivo 5.4 software to obtain a middle labio-lingual section of the right central incisors. Auto CAD 2007 software was applied to measure the crown-root angulation (Collum angle), and the angle formed by a tangent to the center of the labial surface of the crown and the long axis of the crown (labial surface angle). One-way analysis of variance (ANOVA) and Scheffe's test were used for statistical comparisons at the P < 0.05 level, and Pearson correlation analysis was applied to investigate the association between the two measurements.
The values of the Collum angle and labial surface angle in the maxillary incisor of Class II and the mandibular incisor of Class III were significantly greater than in other types of sagittal skeletal malocclusions (P < 0.05); no significant difference was detected among vertical skeletal malocclusions. Notably, there was also a significant positive correlation between the two measurements. The maxillary incisor in patients with sagittal skeletal Class II malocclusion and the mandibular incisor with Class III malocclusion present remarkable crown-root angulation and correspondingly considerable labial surface curvature. Equivalent deviation during bracket bonding may cause greater torque expression error and increase the risk of alveolar fenestration and dehiscence. Adequate labial or lingual inclination of the anterior teeth is important to establish an ideal anterior occlusal relationship and satisfying esthetic effect in orthodontics. However, orthodontists cannot always achieve the expected extent of tooth movement in alveolar bone. Over the past two decades researchers have paid plenty of attention to alveolar height and thickness, while tooth morphological variation was frequently ignored (Fig. 1). In 1984, Bryant first analyzed the variability in permanent incisor morphology by establishing three anatomic features and investigated the discrepancy among different malocclusions [1], two of which were adopted by subsequent studies [2]. a, b The inclination of the root and crown in maxillary and mandibular incisors are inconsistent with each other in the surface view, which indicates the crown-root angulation phenomenon One feature was the crown-root angulation (Collum angle, CA) in a labiolingual direction, which was formed by the long axes of the crown and root and might limit the degree to which the roots of the incisor could be torqued lingually relative to the lingual cortical plate of bone.
Later studies suggested that the CA caused abnormal stress distribution in the periodontal ligament during tooth movement [3, 4]. Moreover, researchers found the mean value of CA for Angle Class II division 2 malocclusion was significantly larger than that for Class II division 1 and Class III malocclusions [4,5,6,7,8,9,10]. The above research prompted us to investigate further the diversity among different skeletal malocclusions. The other feature was the labial surface angle (LSA), which was formed by a tangent to the bracket site on the labial surface and the long axis of the crown from a proximal view, and the significant amount of variation in LSA potentially affected the precision of torque expression and axial inclination [2]. Kong drew a tangent to the labial surface of the crown 3.5–5.0 mm gingivally from the incisal edge and measured the LSA of 77 incisors [2]. He demonstrated that the significant variation in LSA was greater than the variations between different types of preadjusted appliances, and the brackets still needed to be custom-made when using the straight-wire approach [2]. Thus, preoperative judgment about the individual LSA was essential for achieving optimal torque expression. Moreover, tooth development proved to be closely affected by environmental and genetic factors, which seemed coincident with the determinants of the facial growth pattern, while little was known about the correlation with different skeletal malocclusions [11, 12]. Previous research was based primarily on cephalometric radiographs, whose disadvantages of magnified distortion and unclear manual tracing of the tooth boundary might sacrifice accuracy. Currently, CBCT is widely used in the clinic with abundant sample sources, clear three-dimensional imaging of tooth and bone structure, and precise measurement via digital software, but its application in tooth morphometry remains rare [13,14,15].
The main purposes of the present study were to investigate the variations in the morphology of maxillary and mandibular central incisors, including the CA and LSA, using CBCT images captured with Invivo 5.4 software and analyzed via AutoCAD. Finally, we discussed the effect on torque expression of this variable anatomic feature among different types of skeletal malocclusions. First, a power analysis performed with G*Power software (version 3.1.9.4, Franz Faul, Universität Kiel, Kiel, Germany) showed that, with a 1:1 ratio between groups and a sample size of 108 cases, the study would have more than 70% power to detect significant differences with a 0.40 effect size at the α = 0.05 significance level. Sample selection and classification The study was carried out on the CBCT scans of three classifications of sagittal skeletal malocclusions selected from the archives of the Department of Stomatology, the Affiliated Suzhou Hospital of Nanjing Medical University. By August 2018, 2855 sets of images were stored in the database of the department. Because our study was a retrospective case-control study using the archive, no ethical approval was required, and all the patients took CBCT for clinical orthodontic needs. The CBCT images of 108 patients (mean age 25.8 years, 18 to 30 years) were selected according to the criteria presented in Table 1. CBCT images were obtained using the GALILEOS (SIRONA, Germany), with a visual range of 150 × 150 mm2, tube voltage of 90 kV, tube current of 7.0 mA, slice thickness of 0.20 mm, exposure time of 20 s, and radiation dose of 0.029 mSv. During scanning, patients were positioned with the interpupillary line and the Frankfurt plane parallel to the ground and the facial midline coinciding with the median reference line of the machine, in central occlusion and without swallowing.
Table 1 Criteria for sample selection Lateral cephalometric radiographs were captured using Invivo 5.4 software and then classified into three groups on the basis of sagittal skeletal malocclusion using Dolphin 11.0 for cephalometric analysis (Fig. 2). The grouping criteria and sample distribution were presented in Table 2 [2, 16, 17]. Measurements to classify sagittal and vertical skeletal malocclusion. A, A-point, deepest bony point on the contour of the premaxilla below ANS; B, B-point, deepest bony point on the contour of the mandible above pogonion; ANB, angle between point A, B and point N; 1. Wits, perpendicular lines are dropped from points A and B onto the occlusal plane, Wits is measured from Ao to Bo; 2. S, sella, center of sella turcica; N, nasion, the most anterior limit of the frontonasal suture on the frontal bone in the facial midline; SN, connection between S and N, stands for anterior cranium base plane; Go, gonion, the most posterior inferior point of mandible angle; Me, menton, most inferior point of the bony chin; MP, connection between Me and Go, stands for mandibular plane; SN-MP, angle between SN and MP; 3. S-Go, the distance between lines parallel to FH plane passing through S and Go, represents the posterior facial height; N-Me, the distance between lines parallel to FH plane passing through N and Me, represents for the anterior facial height; FHI(S-Go/N-Me), facial height index, the ratio of posterior and anterior height, stands for vertical growth pattern of individual Table 2 The distribution of samples Measuring image capture The CBCT images underwent a three-dimensional adjustment with Invivo 5.4 software (Anatomage Dental) to orient the head in natural head position in three planar views. Firstly for the horizontal view, the horizontal line located rightly at the frontal edges of the bilateral ramus, and the vertical line was perpendicular to it and passed through the center of the incisive canal (Fig. 3a). 
Then for the coronal view, the vertical line should be parallel to the mid-sagittal reference line at crista galli (Fig. 3b). Lastly for the sagittal view, the horizontal line connecting the anterior nasal spine to the posterior nasal spine should be parallel to the bottom of the monitor (Fig. 3c). Measuring image capture. The natural position of the head is adjusted in three dimensions. a The horizontal view. b The coronal view. c The sagittal view. A bunch of cutting lines (green) was vertical to incisor labial surface (d) and located at the central coronal view (e). The median sagittal views were established with nine layers (f–n), interval 0.10 mm, and the middle one was the measuring image (j) Then, the median sagittal tomographic images of incisors in labio-lingual direction were adjusted to capture using the Arch Section tab. In detail, the bunch of cutting lines (green) should be vertical to the labial surface and pass through the center in horizontal view (Fig. 3d) and divide incisor equally in coronal view (Fig. 3e). Thus, the median one (Fig. 3j) of the nine images (Fig. 3f–n) in sagittal direction was selected for angular measurement. The thickness of sectional slices was 2.0 mm with the interval set at 0.1 mm. Marker and measurement The measuring images were marked and measured via AutoCAD (Autodesk, San Rafael, CA) as follows (Fig. 4a). "CEJ" represented the labial or lingual cementoenamel junction. Point A was the incisor superior, and point R was the root apex. Point B was labial cementoenamel junctions, point L was lingual cementoenamel junctions, and point O was the midpoint between points B and L. a The Collum angle is formed by the extension of the long axis of the crown and the long axis of the root. b Tangent L passes through upper and lower intersections of labial surface of crown and circle with the T center and radius of 0.5 mm. 
c The measuring example of Collum angle and labial surface angle The straight line "AO" represented the long axis of the crown, and "RO" was the long axis of the root. Point T was the tangent point on the labial surface of the crown, which was the intersection of the perpendicular line of "AO" and the labial surface of the crown, with the foot point V. The tangent line via T was defined approximately by the line passing through points T1 and T2, which were the intersections of a circle with the point T center and 0.5 mm radius on the labial surface of the crown (Fig. 4b). "Collum angle (CA)" was an acute angle between the line RO and the reverse extension line of AO. When line RO located lingual side to the extension line, the CA was defined as a positive value; otherwise, the labial side was negative, and the coincidence was zero. "Labial surface angle (LSA)" was formed by the tangent line and the forward extension line of AO, with point P as the vertex. For example, the CA was − 6.89° and the LSA was 18.59° (Fig. 4c). All statistical analyses were performed with the SPSS software (version 13.0, SPSS, Chicago). According to the Kolmogorov-Smirnov normality test and Levene's variance homogeneity test, all the data were found to be normally distributed with homogeneity of variance among groups. Further statistical comparisons of CA and LSA in different malocclusion groups were undertaken by one-way analysis of variance (ANOVA) and Scheffe's test. Finally, Pearson correlation analysis was applied to investigate the association between CA and LSA in the same incisor ("r" was the Pearson correlation coefficient). The level of statistical significance was set at P < 0.05(*), P < 0.01(**), and P < 0.001(***). Error in measurements To assess the intra-observer and inter-observer error, all the samples were measured by two operators on two occasions at a 2-week interval and the repeated measurements were analyzed with Student's t test for paired samples adopting an α-level of 0.05.
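The statistical workflow described here (one-way ANOVA across groups, then a Pearson correlation between the two angles) can be sketched with SciPy; the data below are made-up placeholders drawn to mimic the reported group means, not the study's actual measurements:

```python
import numpy as np
from scipy.stats import f_oneway, pearsonr

rng = np.random.default_rng(0)

# Hypothetical Collum-angle samples for three sagittal malocclusion groups.
class1 = rng.normal(-1.0, 6.3, 24)
class2 = rng.normal(5.2, 5.0, 20)
class3 = rng.normal(0.4, 5.4, 22)

# One-way ANOVA across the three groups.
f_stat, p_value = f_oneway(class1, class2, class3)
assert 0 <= p_value <= 1

# Pearson correlation between paired CA and LSA measurements.
ca = np.concatenate([class1, class2, class3])
lsa = 14 + 0.5 * ca + rng.normal(0, 2, ca.size)  # correlated by construction
r, p = pearsonr(ca, lsa)
assert -1 <= r <= 1
```

Note that SciPy does not ship Scheffé's post-hoc test; that step would need, for example, statsmodels or a manual implementation.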
The mean values calculated by combining the measurements of both operators were used for inter-group difference analysis. The technical error of measurement (TEM) was assessed with the formula [18], $$ \mathrm{TEM}=\sqrt{\frac{\sum_{i=1}^{n} d_i^2}{2n}} $$ in which $d_i$ was the difference between the first and second measurements on the $i$th sample and $n$ was the total sample number. As a result, all the measurements presented no significant difference according to the t test (P > 0.05). The technical error of measurement was 0.35°. Comparison of CA and LSA among different sagittal skeletal malocclusion groups (Table 3) In the maxilla, according to ANOVA, the mean values of CA in Class I, Class II, and Class III were − 1.02 ± 6.30°, 5.18 ± 4.97°, and 0.43 ± 5.44°, respectively, and LSA were 14.44 ± 4.06°, 17.78 ± 3.74°, and 14.18 ± 4.20°. There were significant differences in both of the two measurements among different types of sagittal skeletal malocclusions (P = 0.002 < 0.01 and P = 0.008 < 0.01). Further Scheffe's test was conducted for multiple comparisons. As a result, Class II patients had greater mean values of CA and LSA than patients in the other groups (I vs II: P = 0.003 < 0.01 and P = 0.028 < 0.05; II vs III: P = 0.030 < 0.05 and P = 0.019 < 0.05). No significant difference was noted between the Class I and Class III groups (P = 0.688 > 0.05 and P = 0.977 > 0.05) (Fig. 5a, b). Table 3 Collum angle/labial surface angle of central incisors among different sagittal skeletal malocclusions (°) The value of CA and LSA in maxillary incisor of Class II (a, b) and mandibular incisor of Class III (c, d) are significantly greater than other groups. There is no statistical difference among different vertical skeletal classifications In the mandible, the mean values of CA in Class I, Class II, and Class III were 0.40 ± 5.80°, 0.82 ± 5.78°, and 5.59 ± 5.64°, and LSA were 11.32 ± 3.91°, 12.18 ± 4.39°, and 15.32 ± 3.05°, respectively.
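The TEM formula can be implemented directly; a minimal sketch in Python (the paired measurement values below are invented for illustration):

```python
import math

def technical_error_of_measurement(first, second):
    """TEM = sqrt(sum of squared paired differences / (2n))."""
    assert len(first) == len(second)
    n = len(first)
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(first, second)) / (2 * n))

# Two hypothetical repeated measurement sessions (degrees).
m1 = [5.1, -1.0, 0.4, 3.2]
m2 = [5.4, -0.7, 0.1, 3.5]
print(round(technical_error_of_measurement(m1, m2), 3))  # prints 0.212
```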
Both measurements were also detected to be significantly different (P = 0.006 < 0.01 and P = 0.002 < 0.01). Furthermore, Class III groups had greater CA and LSA than the other two groups (I vs III: P = 0.013 < 0.05 and P = 0.003 < 0.01; II vs III: P = 0.033 < 0.05 and P = 0.034 < 0.05), while no difference was detected between Class I and Class II (P = 0.970 > 0.05 and P = 0.759 > 0.05) (Fig. 5c, d). The consistency of the significant difference distribution suggested that there might be some correlation between the two measurements within the same jaw. Thus, we further analyzed the association between CA and LSA within the same incisor using the data from all the samples. As a result, the Pearson correlation test indicated that CA and LSA were strongly positively correlated in both maxilla and mandible (upper jaw: r = 0.723, P = 0.000; lower jaw: r = 0.752, P = 0.000) (Fig. 6). Both in the maxilla (a) and mandible (b), the CA and LSA are significantly and positively correlated Comparison of CA and LSA among different vertical skeletal malocclusion groups (Tables 4 and 5) We detected no statistically significant differences in either CA or LSA among different vertical skeletal malocclusion groups (upper jaw: P = 0.915 > 0.05 and P = 0.347 > 0.05; lower jaw: P = 0.609 > 0.05; P = 0.217 > 0.05). Table 4 Collum angle/labial surface angle of central incisors among different vertical skeletal malocclusions (°) Table 5 Pearson correlation analysis indicated the significant positive correlation between CA and LSA (maxillary: r = 0.723, P = 0.000 < 0.001; mandibular: r = 0.752, P = 0.000 < 0.001) The precise expression of anterior torque is essential to obtain normal overjet and overbite and achieve a satisfying esthetic effect and stable occlusal relationship.
The ideal preadjusted torque in straight-wire brackets is hard to accomplish adequately because of the material properties of the wire, slot width, ligature selection, operator experience, and individual tooth and alveolar morphology [19]. Many studies found that the height and thickness of the local alveolar bone predominantly restricted the range of anterior tooth movement [20], while less attention was paid to the limitation caused by tooth morphology. However, some orthodontists demonstrated that the variations in tooth morphology should be taken into careful consideration, and these proved to be more important than the variations between the different types of preadjusted brackets [18]. The research on the influence of variability in incisor morphology on torque expression was first conducted by Bryant, who proposed three anatomic features of the maxillary central incisor [1]. The three features from a proximal view were the crown-root angulation (supplementary angle of the Collum angle) formed by the intersection of the longitudinal axis of the crown and the longitudinal axis of the root, the labial surface angle formed by a tangent to the bracket bonding point on the labial surface of the crown and the long axis of the crown, and the lingual curvature of the crown. The subsequent morphological studies of anterior teeth mainly focused on the first two features [2, 19, 21]. Before the introduction of CBCT, visualization of the Collum angle and labial surface angle mainly depended on the lateral cephalogram, which might provide a magnified image with virtual distortion and controversial conclusions [5, 22,23,24]. The use of high-resolution CBCT enables us to measure the two anatomical features convincingly in three dimensions with quantitative and qualitative evaluating software [25]. Recently, researchers have used CBCT to examine the morphology of the anterior teeth, including the Collum angle and labial surface angle [2, 7].
Nevertheless, none of them investigated the differences among various skeletal malocclusions, even though the values of the Collum angle of maxillary central incisors were found to differ greatly among various Angle malocclusions [1, 5, 26, 27]. For the Collum angle (CA), our observations further confirmed the widespread existence of the crown-root phenomenon, which was consistent with previous lateral cephalography studies [1, 4, 5, 18, 21, 26, 28, 29] (Fig. 7a–c). Generally, tooth morphology is susceptible to genetic and environmental influences during development, and the physiological mineralization of the crown precedes that of the root [12]. Thus, during eruption, forces from the perioral muscles, mastication, and orthodontic appliances jointly change the developmental direction or position [30, 31]. Previous studies had indicated that the CA differs among groups with different types of Angle malocclusion, with notable lingual bending of the long axis of the crown relative to that of the root in the upper incisors of Angle Class II division 2 patients [1, 8, 10]. Hence, we hypothesized that the formation of CA might be associated with the facial growth pattern owing to common environmental and genetic determinants. In addition, we excluded samples of Angle Class II division 2 because of their proven apparent CA in the maxillary central incisor. In the current study, Class II (5.18 ± 4.97°) samples had significantly greater CA compared with Class I (− 1.02 ± 6.30°) and Class III (0.43 ± 5.44°) in the maxilla, while in the mandible, the Class III (5.59 ± 5.64°) samples presented significantly greater CA compared with Class I (0.40 ± 5.80°) and Class II (0.82 ± 5.7°).
Combining with previous viewpoints, we suggested that remarkable CA in the maxillary incisor of skeletal Class II and the mandibular incisor of Class III could cause the root to be closer to the lingual cortical alveolar compared with the other types of skeletal malocclusion, which increased the risk of dehiscence and fenestration, root resorption, and torque limitation in the process of labial inclination [1, 5, 10]. The various Collum angle in central incisor, the long axis of the root can deviate to the labial side (a) or lingual side (c) of the long axis of the crown, or coincidence (b). The schematic diagram indicates that the root bends toward lingual cortical alveolar because of Collum angle (d). The schematic diagram elucidates that the more obvious Collum angle accompanies with the greater labial surface curvature of the crown (e) Labial surface angle (LSA) was another anatomical feature of the tooth, standing for the labial surface curvature of the crown [2]. Fredericks observed a variation of 21° when LSA measured at the point 4.2 mm apart from the incisor edge in the occlusal-gingival direction using 30 extracted incisors [1]. Thus, the individual variety of labial surface curvature led to elusive torque control on preadjusted appliances. Miethke indicated that there was considerable variation of labial surface curvature among teeth in different positions. The curvature of lower incisor was the smallest while the lower first molar was the largest, which was consistent with our results on LSA in maxillary and mandibular incisor (15.37 ± 4.27° vs 12.92 ± 4.14°). The significant discrepancy of LSA caused a wide range of torque 12.3 to 24.9° when detecting it at 4.5 mm apart from the occlusal surface [26]. Kong also found the value of LSA was significantly different at different heights from incisor edge, and the tangent point at a height from 3.5 to 5 mm, each 0.5 mm increase, the torque reduced by 1.5° [2]. 
Our study indicated that LSA values were greater in the maxillary incisors of sagittal skeletal Class II malocclusion and in the mandibular incisors of Class III than in the other facial groups. Hence, when treating the same type of incisor with brackets of the same prefabricated torque placed at the same vertical height from the incisal edge, greater deviation in torque expression may occur in these two groups of patients. Interestingly, our study also detected a significant positive correlation between CA and LSA, meaning that the labial surface curvature is correspondingly greater in cases with remarkable crown-root angulation. The root tip therefore contacts the lingual cortical alveolar bone more easily, making dehiscence and fenestration harder to avoid during labial inclination. Consistent with previous work, we detected no statistical difference in either CA or LSA among the vertical skeletal classifications. Harris found no correlation between CA and the PP-FH, OP-FH, FH-MP, and lower face height ratio measurements representing the vertical growth pattern [5]. However, CA still affects the stress distribution in the periodontal ligament in the vertical direction: as CA increases, the center of tooth rotation gradually approaches the dental cervix, which prevents the tooth from intruding into the alveolar bone [19, 32]. The cause of excessive lingual bending of the incisor remains controversial, but most scholars favor environmental factors. Harris reported that the mandibular incisor erupts earlier and provides restriction and guidance for the eruption of the maxillary incisor when occlusal contact is established. The remarkable CA of the maxillary incisor is usually accompanied by obvious anterior retroclination in Class III patients; in fact, these incisors are excessively labially inclined for compensatory reasons.
Moreover, other studies and the present study found no significant difference compared with Class I, so Harris's conclusion is debatable [5]. Srinivasan further examined the relationship between the position of the lower lip line and the CA, and showed that the CA was positive and increased when the lower lip line ranged from the incisal third to the middle third, whereas the CA was negative and decreased when the lower lip line was located at the crown cervix [8]. McIntyre also favored oral environmental contributors, since the apical third of the root is still mineralizing after eruption and is therefore sensitive to external forces [27]. In contrast to the above views, Ruf and Pancherz reported no morphological difference in the upper incisors between twins, one with Angle Class II division 1 and the other with Angle Class II division 2 despite a higher lower lip line [6], which argues for a determinant role of genetic factors. Summing up these viewpoints, we suggest that when the anterior occlusal relationship is first established, the bite force conducted along the long axis of the incisor is neither sufficient to resist tooth over-eruption nor balanced against the perioral forces from the tongue and lip. As a result, crown-root angulation forms because the eruption direction of the crown changes while the root still mineralizes along its original pattern. Only once the incisor has continued to erupt and reached balance with the perioral muscle forces can the crown-root morphology stabilize. It is therefore important to coordinate oral and maxillofacial muscle function to prevent abnormal tooth morphology at the stage when occlusion is established. The maxillary incisors in sagittal skeletal Class II and the mandibular incisors in Class III present greater crown-root angulation (Fig. 7d) and labial surface curvature (Fig. 7e) than the other types of malocclusion, and there is a significant positive correlation between these two anatomical features.
The above findings indicate that the morphologies of these teeth play vital roles in torque variation, dehiscence, fenestration, and root resorption because of root bending toward the lingual cortical alveolar bone. Thus, when positioning a bracket, the variability of crown-root morphology should be assessed before the operation.

Abbreviations
ANB: angle between points A, N, and B (A, A-point, deepest bony point on the contour of the premaxilla below ANS; B, B-point, deepest bony point on the contour of the mandible above pogonion)
ANOVA: one-way analysis of variance
CA: Collum angle
CBCT: cone-beam computed tomography
CEJ: cementoenamel junction
FHI: facial height index (S-Go/N-Me); S-Go, distance between lines parallel to the Frankfurt plane passing through S and Go (posterior facial height); N-Me, distance between lines parallel to the Frankfurt plane passing through N and Me (anterior facial height); the ratio of posterior to anterior height represents the individual vertical growth pattern
FH-MP: angle formed by the mandibular and Frankfurt planes, representing the vertical growth pattern
LSA: labial surface angle
OP-FH: angle formed by the occlusal and Frankfurt planes, representing the vertical growth pattern
PP-FH: angle formed by the palatal and Frankfurt planes, representing the vertical growth pattern
SN-MP: angle between SN and MP (S, sella, center of sella turcica; N, nasion, most anterior limit of the frontonasal suture on the frontal bone in the facial midline; SN, line between S and N, the anterior cranial base plane; Go, gonion, most posterior inferior point of the mandibular angle; Me, menton, most inferior point of the bony chin; MP, line between Me and Go, the mandibular plane)
SPSS software: Statistical Product and Service Solutions software
TEM: technical error of measurement
Wits: perpendicular lines are dropped from
points A and B onto the occlusal plane; Wits is measured from Ao to Bo

References
1. Bryant RM, Sadowsky PL, Hazelrig JB. Variability in three morphologic features of the permanent maxillary central incisor. Am J Orthod. 1984;86:25–32.
2. Kong WD, Ke JY, Hu XQ, Zhang W, Li SS, Feng Y. Applications of cone-beam computed tomography to assess the effects of labial crown morphologies and Collum angles on torque for maxillary anterior teeth. Am J Orthod Dentofacial Orthop. 2016;150:789–95.
3. Heravi F, Salari S, Tanbakuchi B, Loh S, Amiri M. Effects of crown-root angle on stress distribution in the maxillary central incisors' PDL during application of intrusive and retraction forces: a three-dimensional finite element analysis. Prog Orthod. 2013;14:26.
4. Shen YW, Hsu JT, Wang YH, Huang HL, Fuh LJ. The Collum angle of the maxillary central incisors in patients with different types of malocclusion. J Dent Sci. 2012;7:72–6.
5. Harris EF, Hassankiadeh S, Harris JT. Maxillary incisor crown-root relationships in different angle malocclusions. Am J Orthod Dentofac Orthop. 1993;103:48–53.
6. Ruf S, Pancherz H. Class II division 2 malocclusion: genetics or environment? A case report of monozygotic twins. Angle Orthod. 1999;69:321–4.
7. Ma ESW. Differential CBCT analysis of Collum angles in maxillary and mandibular anterior teeth in patients with different malocclusions. UNLV Theses, Dissertations, Professional Papers, and Capstones; 2016. p. 2880.
8. Srinivasan B, Kailasam V, Chitharanjan A, Ramalingam A. Relationship between crown-root angulation (Collum angle) of maxillary central incisors in Class II, division 2 malocclusion and lower lip line. Orthodontics (Chic). 2013;14:e66–74.
9. Williams A, Woodhouse C. The crown to root angle of maxillary central incisors in different incisal classes. Br J Orthod. 1983;10:159–61.
10. Delivanis H, Kuftinec M. Variation in morphology of the maxillary central incisors found in Class II, division 2 malocclusions. Am J Orthod. 1980;78:438–43.
11. Cobourne MT, Sharpe PT.
Tooth and jaw: molecular mechanisms of patterning in the first branchial arch. Arch Oral Biol. 2003;48:1–14.
12. Li J, Parada C, Chai Y. Cellular and molecular mechanisms of tooth root development. Development. 2017;144:374–84.
13. Oz U, Orhan K, Abe N. Comparison of linear and angular measurements using two-dimensional conventional methods and three-dimensional cone beam CT images reconstructed from a volumetric rendering program in vivo. Dentomaxillofac Radiol. 2014;40:492–500.
14. Lione R, Franchi L, Fanucci E, Laganà G, Cozza P. Three-dimensional densitometric analysis of maxillary sutural changes induced by rapid maxillary expansion. Dentomaxillofac Radiol. 2013;42:79–82.
15. Ferreira JB, Christovam IO, Alencar DS, da Motta AFJ, Mattos CT, Cury-Saramago A. Accuracy and reproducibility of dental measurements on tomographic digital models: a systematic review and meta-analysis. Dentomaxillofac Radiol. 2017;46:20160455.
16. Celikoglu M, Kamak H. Patterns of third-molar agenesis in an orthodontic patient population with different skeletal malocclusions. Angle Orthod. 2012;82:165.
17. Tang N, Zhao ZH, Liao CH, Zhao MY. Morphological characteristics of mandibular symphysis in adult skeletal Class II and Class III malocclusions with abnormal vertical skeletal patterns. West China J Stomatol. 2010;28:395–8.
18. Knösel M, Jung K, Attin T, et al. On the interaction between incisor crown-root morphology and third-order angulation. Angle Orthod. 2009;79:454–61.
19. Papageorgiou SN, Sifakakis I, Keilig L, Patcas R, Affolter S, Eliades T, et al. Torque differences according to tooth morphology and bracket placement: a finite element study. Eur J Orthod. 2017;39:411–18.
20. Sun B, Tang J, Xiao P, Ding Y. Presurgical orthodontic decompensation alters alveolar bone condition around mandibular incisors in adults with skeletal Class III malocclusion. Int J Clin Exp Med. 2015;8:12866–73.
21. Israr J, Bhutta N, Rafique Chatha M. Comparison of Collum angle of maxillary central incisors in Class II div 1 & 2 malocclusions.
Pakistan Oral Dent J. 2016;36:91–94.
22. Edwards JG. A study of the anterior portion of the palate as it relates to orthodontic therapy. Am J Orthod. 1976;69:249–73.
23. Ramirez-Sotelo LR, Almeida S, Ambrosano GM, Boscolo F. Validity and reproducibility of cephalometric measurements performed in full and hemifacial reconstructions derived from cone beam computed tomography. Angle Orthod. 2012;82:827–32.
24. Kanj AH, Bouserhal J, Osman E, El Sayed AAM. The inflection point: a torque reference for lingual bracket positioning on the palatal surface curvature of the maxillary central incisor. Prog Orthod. 2018;19:39.
25. Kapila S, Conley RS, Jr HW. Current status of cone beam computed tomography imaging in orthodontics. Dentomaxillofac Radiol. 2011;40:24–34.
26. Van LM, Degrieck J, De PG, Dermaut L. Anterior tooth morphology and its effect on torque. Eur J Orthod. 2005;27:258.
27. McIntyre GT, Millett DT. Crown-root shape of the permanent maxillary central incisor. Angle Orthod. 2003;73:710.
28. Bauer TJ. Maxillary central incisor crown-root relationships in Class I normal occlusions and Class II division 2 malocclusions. MS thesis. University of Iowa; 2014. https://ir.uiowa.edu/etd/4572/.
29. Feres MFN, Rozolen BS, Alhadlaq A, Alkhadra TA, El-Bialy T. Comparative tomographic study of the maxillary central incisor collum angle between Class I, Class II, division 1 and 2 patients. J Orthod Sci. 2018;7:1–5.
30. Sarrafpour B, Swain M, Li Q, Zoellner H. Tooth eruption results from bone remodelling driven by bite forces sensed by soft tissue dental follicles: a finite element analysis. PLoS One. 2013;8:e58803.
31. Kong X, Cao M, Ye R, Ding Y. Orthodontic force accelerates dentine mineralization during tooth development in juvenile rats. Tohoku J Exp Med. 2010;221:265–70.
32. Pai S, Panda S, Pai V, Anandu M, Vishwanath E, Suhas AS. Effects of labial and lingual retraction and intrusion force on maxillary central incisor with varying Collum angles: a three-dimensional finite elemental analysis.
J Indian Orthod Soc. 2017;51:28.

Acknowledgements
This study was supported by the Science and Technology Department of Guangdong Province, China (No. 2011A030300012), for the case selection, and by the Youth Science and Technology Foundation of Suzhou, Jiangsu Province, China (No. KJXW2016033), for the analysis and paper writing.

Availability of data and materials
Please contact the author for data requests.

Author information
Xiao-ming Wang: State Key Laboratory of Oral Diseases, Department of Cleft Lip and Palate Surgery, West China Hospital of Stomatology, Sichuan University, Chengdu, 610041, China
Ling-zhi Ma: Department of Orthodontics, Stomatological Hospital of Kunming Medical University, Kunming, 650032, China
Jing Wang: Department of Orthodontics, Xi'an JiaoTong University Hospital of Stomatology, Xi'an, 710004, Shaanxi Province, China
Hui Xue: Department of Stomatology, The Affiliated Suzhou Hospital of Nanjing Medical University, Suzhou, 215000, Jiangsu Province, China

Authors' contributions
XMW carried out the statistical analysis and writing. LZM carried out the case collection. JW participated in image measurement. HX conceived of the study, participated in its design and coordination, and helped to draft the manuscript. All authors read and approved the final manuscript. Correspondence to Hui Xue.

Ethics approval and consent to participate
The experiment was independently reviewed and approved by our hospital ethics committee before the experiment; the registration number of the clinical research is K2016051, issued by the Ethics Committee Reviewing Biomedical Research of the Suzhou Municipal Hospital of Nanjing Medical University. All processes were conducted in full accordance with the World Medical Association Declaration of Helsinki. All patients were informed of the study purpose and procedures, and all of them voluntarily signed the treatment consent form.
That is to say, signed consents were obtained from the parents/guardians of all participants involved in our study, and the consent form was approved by the hospital ethics committee.

Consent for publication
The patient consented to the publication of the individual CBCT and intraoral images.

Wang, X., Ma, L., Wang, J. et al. The crown-root morphology of central incisors in different skeletal malocclusions assessed with cone-beam computed tomography. Prog Orthod. 20, 20 (2019). https://doi.org/10.1186/s40510-019-0272-2

Keywords: Crown-root morphology; Skeletal malocclusion; Cone-beam CT
Article | Open | Published: 29 August 2017

Electronic properties of (Sb;Bi)2Te3 colloidal heterostructured nanoplates down to the single particle level

Wasim J. Mir, Alexandre Assouline, Clément Livache, Bertille Martinez, Nicolas Goubet, Xiang Zhen Xu, Gilles Patriarche, Sandrine Ithurria, Hervé Aubin & Emmanuel Lhuillier

Scientific Reports, volume 7, Article number: 9647 (2017)

We investigate the potential of colloidal nanoplates of Sb2Te3 by conducting transport measurements on single particles, with in mind their possible use as a 3D topological insulator material. We develop a synthetic procedure for the growth of plates with large lateral extension and probe their infrared optical and transport properties. These two properties are used as probes for the determination of the bulk carrier density, and both agree on a value in the 2–3 × 10^19 cm^−3 range. Such a value is compatible with the metallic side of the Mott criterion, which is also confirmed by the weak thermal dependence of the conductance. By investigating transport at the single particle level, we demonstrate that the hole mobility in this system is around 40 cm^2 V^−1 s^−1. For the bulk material, mixing n-type Bi2Te3 with p-type Sb2Te3 has been a successful way to control the carrier density. Here we apply this approach to colloidally obtained nanoplates by growing a core-shell Sb2Te3/Bi2Te3 heterostructure and demonstrate a reduction of the carrier density by a factor of 2.5.

Bismuth and antimony chalcogenides (the tetradymite group, with formulas such as (Sb;Bi)2(Se;Te)3) have long attracted great interest for their thermoelectric properties [1–5]. The heavy mass of these materials leads to a large spin-orbit coupling, which results in an inverted band structure. Over the past decade, it is the original electronic structure of bismuth and antimony chalcogenides that has driven most of the interest in these compounds.
Indeed, they appear as model 3D topological insulators [6–9], with conducting surface states and an insulating bulk, as long as the material can be obtained in an intrinsic form. Sb2Te3 is a 0.3 eV band gap semiconductor. This material commonly shows antisite defects [10–12], where Sb atoms replace Te atoms, which tends to result in p-type doping [13]. Controlling the bulk carrier density in topological insulator compounds is a key challenge, since the Fermi level of the material needs to be close to its Dirac point for electronic transport to be dominated by topologically protected surface states. Moreover, the conductance of the material is the sum of the surface and bulk contributions. Because of the narrow band gap of these topological insulator materials and their deviation from stoichiometry, the bulk is generally not as insulating as desired. By reducing the bulk carrier density and the associated conductance, the weight of the surface contribution to transport is expected to increase, making the observation of surface states more likely. The Mott criterion can be used to estimate whether the material will behave as a metal or as an insulator. Metallic behavior is expected to occur if \(a_0 n^{1/3} > 0.25\) [14], where n is the carrier density and \(a_0 = \frac{h^2 \varepsilon_0 \varepsilon_r}{\pi m^* e^2}\) is the effective Bohr radius, with h the Planck constant, \(\varepsilon_0\) the vacuum permittivity, \(\varepsilon_r\) the material dielectric constant, \(m^*\) the effective mass, and e the elementary charge. Due to the large dielectric constant (\(\varepsilon_r > 50\)) of Sb2Te3, the Bohr radius is large, so the Mott criterion is fulfilled even for low carrier densities. Typically, the threshold carrier density is estimated to be ≈2 × 10^16 cm^−3, assuming an effective mass of 0.1 m_0, where m_0 is the electron rest mass. As a result, Sb2Te3 typically behaves as a metal [6].
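The Mott estimate above can be sketched numerically. The snippet below uses the round values quoted in the text (ε_r = 50, m* = 0.1 m_0); with these inputs the threshold lands a couple of orders of magnitude below the quoted ≈2 × 10^16 cm^−3, and the exact figure depends on the choice of ε_r and m*, but either way the measured 10^19 cm^−3 densities sit far on the metallic side.

```python
import math

# Physical constants (SI)
h = 6.626e-34        # Planck constant, J s
eps0 = 8.854e-12     # vacuum permittivity, F/m
m0 = 9.109e-31       # electron rest mass, kg
e = 1.602e-19        # elementary charge, C

eps_r = 50.0         # Sb2Te3 dielectric constant (value quoted in the text)
m_eff = 0.1 * m0     # effective mass assumed in the text

# Effective Bohr radius a0 = h^2 eps0 eps_r / (pi m* e^2)
a0 = h**2 * eps0 * eps_r / (math.pi * m_eff * e**2)

# Mott criterion: metallic if a0 * n^(1/3) > 0.25
n_threshold = (0.25 / a0) ** 3     # m^-3
n_measured = 3e19 * 1e6            # measured density, 3e19 cm^-3 -> m^-3

print(f"a0 = {a0 * 1e9:.1f} nm")
print(f"Mott threshold = {n_threshold * 1e-6:.1e} cm^-3")
print(f"metallic: {a0 * n_measured ** (1/3) > 0.25}")   # -> metallic: True
```

The large effective Bohr radius (tens of nm) is the key point: even dilute carrier gases in Sb2Te3 satisfy the criterion.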
Alloying n-type (Bi2Se3 and Bi2Te3) and p-type (Sb2Te3) materials is a possible way to obtain charge compensation and reduce the overall bulk carrier density. While this type of approach has been extensively studied for bulk and thin-film materials [15–17], almost no work has been dedicated to colloidally synthesized materials. The tetradymites are layered 2D materials [18, 19]. Each layer is 1 nm thick and is composed of 5 atomic planes (a quintuple layer). So far, most of the efforts toward the growth of these materials have focused on physical methods such as molecular beam epitaxy [20], chemical vapor deposition [21], pulsed laser deposition, and exfoliation [22]. Chemical solvothermal methods have been proposed [23, 24]; however, these works were driven by the investigation of thermoelectric properties, and very little work was dedicated to understanding the electronic and spectroscopic properties of (Sb;Bi)2Te3 obtained in colloidal form. In this letter, we develop a colloidal synthesis of Sb2Te3 nanoplates and investigate their transport properties from the thin film down to the single particle level. We demonstrate that the material is indeed p-type. In the last part of the paper, we focus on the control of the carrier density within these plates by growing in solution a heterostructure combining an n-type (Bi-based) and a p-type (Sb-based) layer. We obtain in this way a decrease of the bulk carrier density by a factor of ≈3. This work paves the way for the use of colloidal heterostructures as model 3D topological insulators. The chemical synthesis of Sb2Te3 nanoplates has previously been investigated using solvothermal methods in aqueous [25–28] or polar organic solvents [29–31]. We see two main limitations to these approaches: (i) the risk of oxidation of the material, and (ii) the final thickness of the nanoplates being limited to thick sheets (>50 nm [26, 27]). Hot injection methods in organic solvents are well established and lead to high monodispersity.
The synthesis of Sb2Te3 nanoparticles in organic media has also been proposed [32], based on the thermal decomposition of a single-source precursor containing antimony and tellurium at high temperature [33–35]. In this report, we instead use a bulky antimony precursor, antimony oleate, prepared from antimony acetate in the presence of an excess of oleic acid at a temperature where the acetic acid can be removed under vacuum (85 °C). In a typical synthesis, the temperature is raised to around 200 °C under Ar and the Te precursor (trioctylphosphine complexed with Te) is quickly injected into the flask. The reaction is conducted in a non-coordinating solvent such as octadecene, since we observe that coordinating solvents such as oleylamine lead to the formation of the oxide instead of the telluride. The solution rapidly darkens, and after 1 min a grey metallic appearance is observed. The product is cleaned by addition of a polar solvent followed by centrifugation. The particles can be stored in non-polar solvents such as hexane and toluene, but typically an immediate precipitation of the suspension is observed.

$$\mathrm{Sb(OAc)_3} + \mathrm{OA} \xrightarrow[\text{vacuum}]{80\,^\circ\mathrm{C}} \mathrm{Sb(OA)_3} \;+\; \mathrm{TOPTe} \xrightarrow[\text{Ar}]{200\,^\circ\mathrm{C}} \mathrm{Sb_2Te_3}$$

The obtained nanoplates typically present a hexagonal shape with lateral sizes ranging from 200 nm to 1 µm and thicknesses from a few quintuple layers (QL) up to 50 nm, see Figs 1a and S7. The detailed investigation of the effects of temperature, synthesis duration, and stoichiometry on the final product is discussed in the SI, see Figs S1 to S5. The diffraction peaks of the XRD pattern are fully consistent with the trigonal phase (\(R\bar{3}m\)) of Sb2Te3 (00-015-0874), see Fig. 1b. Energy dispersive X-ray (EDX) analysis (Fig.
S6 and Table S1) confirmed the presence of both antimony and tellurium in the final compound and showed that the material is very close to the Sb2Te3+x stoichiometry, with x = −0.1 ± 0.05, but systematically Te deficient, consistent with previous reports on this material [10–12]. This non-stoichiometry of the compound is responsible for the metallic aspect of the solution and is further confirmed by the reflectance measurement, see Fig. 1c. The IR spectrum in Fig. 1c is poorly structured, which suggests that the absorption results from free carriers.

Fig. 1: (a) TEM image of Sb2Te3 nanoplates. (b) X-ray diffraction pattern of a film of Sb2Te3 compared with the reference. (c) Reflectance spectrum of a film of Sb2Te3 nanoplates and its empirical fit.

To confirm this hypothesis, we can model the reflectivity assuming a Drude model [36, 37] for the free carriers. In this case, the real \((\varepsilon_1)\) and imaginary \((\varepsilon_2)\) parts of the dielectric constant are given by

$$\varepsilon_1(\omega) = \varepsilon_\infty - \frac{\omega_p^2}{\omega^2 + \gamma^2}$$

$$\varepsilon_2(\omega) = \frac{\gamma \omega_p^2}{\omega(\omega^2 + \gamma^2)}$$

where \(\varepsilon_\infty\) is the dielectric constant at high frequency, \(\omega_p\) the plasma frequency, and \(\gamma\) the damping rate. For a semi-infinite medium, the reflectivity is given by

$$R(\omega) = \frac{(n-1)^2 + k^2}{(n+1)^2 + k^2}$$

with n and k respectively the real and imaginary parts of the optical index. The latter can be related to the dielectric constant by

$$n(\omega) = \sqrt{\frac{\varepsilon_1 + \sqrt{\varepsilon_1^2 + \varepsilon_2^2}}{2}}$$

$$k(\omega) = \sqrt{\frac{\sqrt{\varepsilon_1^2 + \varepsilon_2^2} - \varepsilon_1}{2}}$$

We obtain reasonable agreement with the experimental data, see Fig. 1c, assuming \(\omega_p = 2460\,\mathrm{cm}^{-1}\) and \(1/\gamma = 14\,\mathrm{fs}\).
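As a rough sketch of this Drude fit, the snippet below evaluates the model reflectivity over the mid-IR range using the fitted ω_p = 2460 cm^−1 and 1/γ = 14 fs. The high-frequency dielectric constant ε∞ is not quoted in the text, so the value used here is an assumed placeholder; changing it shifts the plasma edge but not the qualitative shape.

```python
import math

c = 3e10                             # speed of light, cm/s
omega_p = 2 * math.pi * c * 2460     # fitted plasma frequency, rad/s
gamma = 1 / 14e-15                   # fitted damping rate, rad/s (1/gamma = 14 fs)
eps_inf = 50.0                       # assumed high-frequency dielectric constant

def reflectivity(nu_cm):
    """Drude reflectivity of a semi-infinite medium at wavenumber nu_cm (cm^-1)."""
    w = 2 * math.pi * c * nu_cm
    e1 = eps_inf - omega_p**2 / (w**2 + gamma**2)
    e2 = gamma * omega_p**2 / (w * (w**2 + gamma**2))
    mod = math.hypot(e1, e2)
    n = math.sqrt((mod + e1) / 2)    # real part of the optical index
    k = math.sqrt((mod - e1) / 2)    # imaginary part of the optical index
    return ((n - 1)**2 + k**2) / ((n + 1)**2 + k**2)

# Sample the model from 50 to 6000 cm^-1, as in a typical FTIR range
Rs = [(nu, reflectivity(nu)) for nu in range(50, 6001, 50)]
```

The free-carrier response shows up as a high reflectivity at low wavenumber that rolls off toward higher frequency, which is the poorly structured shape seen in Fig. 1c.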
From the plasma frequency, we can estimate the carrier density n from the relation

$$\omega_p = \sqrt{\frac{n e^2}{\varepsilon_\infty m^*}}$$

to be \(n = 3.6 \times 10^{19}\,\mathrm{cm}^{-3}\), which is consistent with the metallic side of the Mott criterion.

Transport in Sb2Te3 nanoplates

In the following section, we investigate the transport properties of single Sb2Te3 nanoplates and correlate our observations with the optically determined carrier density. We start with ensemble measurements by conducting transport on nanoplate films. The films are conductive and present an ohmic behavior at room temperature, see Fig. 2a. The temperature dependence of the current shows a small decrease of the conductance as the temperature is reduced, see Fig. 2b. Between room temperature and 77 K, the temperature dependence is nicely fitted by an Arrhenius law with a small activation energy of ≈30 meV. This is the typical behavior of thin films made of poorly coupled metallic grains [38, 39].

Fig. 2: (a) Current as a function of applied bias for a thin film of Sb2Te3 nanoplates. The measurement is made in vacuum. (b) Current as a function of temperature for a thin film of Sb2Te3 nanoplates. (c) SEM image of a single Sb2Te3 nanoplate connected to two Al electrodes. (d) Transfer curve (conductance as a function of gate bias) for a single Sb2Te3 nanoplate.

Ultimately, the goal is to make single-nanoparticle devices to observe the signature of surface states. In the next step, we therefore switch from ensemble to single particle measurements. Connecting a single nanocrystal can be especially difficult [40–42]; however, the large lateral extension of the Sb2Te3 nanoplates makes it possible to connect a single particle using careful e-beam lithography, see Fig. 2c. After wire-bonding of the connections to the single nanoplate, the sample is immediately cooled down to low temperature (2.3 K).
The conductance at zero drain voltage is measured with a lock-in (I_AC = 10 nA) as a function of the gate voltage, Fig. 2d. We observe a p-type behavior, with a rise of the conductance as holes (V_GS < 0) are injected into the nanoplate. The conductance as a function of gate voltage shows reproducible fluctuations which are not simply electrical noise. These fluctuations are most likely related to Coulomb blockade or possibly universal conductance fluctuations [41, 42]. Indeed, while regular Coulomb peaks as a function of gate voltage are usually observed in nano-sized devices weakly coupled to the electrodes, when the device is more strongly coupled to the electrodes, as in the measurement presented in this paper, the Coulomb peaks become fainter oscillations. Furthermore, because the conductance in a nanoplate is not averaged over a macroscopic number of disorder configurations, it fluctuates with the gate voltage through the changing electrostatic potential responsible for electron scattering. From the transfer curve, we can also extract the hole mobility through the relation

$$\mu = \frac{L}{W C_{\Sigma} V_{DS}} \frac{\partial i}{\partial V_{GS}}$$

with L the inter-electrode spacing (≈120 nm), W the width of the channel (≈420 nm), \(C_{\Sigma}\) the sheet capacitance (11.5 nF cm^−2), V_DS the applied bias, and \(\frac{\partial i}{\partial V_{GS}}\) the transconductance. We estimate the in-plane mobility to be in the 30–50 cm^2 V^−1 s^−1 range, which is only one decade below the typical values obtained for molecular beam epitaxy (MBE) grown films [43]. We can then use this mobility value to estimate the transport carrier density \(n_{trans}\). The latter is related to the conductance G through

$$n_{trans} = \frac{L}{e W t \mu} G$$

where t is the nanoplate thickness (≈10 nm). We estimate the value to be 1.8 × 10^19 cm^−3, in good agreement with our estimation based on optical measurements.
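The carrier-density arithmetic above can be sketched with the device geometry quoted in the text. The single-plate conductance G is not quoted, so the 0.4 mS used below is a hypothetical value chosen for illustration; together with a mid-range mobility of 40 cm^2 V^−1 s^−1 it reproduces the reported density.

```python
# Device geometry from the text
L = 120e-9        # inter-electrode spacing, m
W = 420e-9        # channel width, m
t = 10e-9         # nanoplate thickness, m
e = 1.602e-19     # elementary charge, C

def carrier_density(G, mu):
    """Transport carrier density n_trans = L G / (e W t mu), in m^-3."""
    return L * G / (e * W * t * mu)

mu = 40e-4        # 40 cm^2 V^-1 s^-1 converted to m^2 V^-1 s^-1
G = 0.4e-3        # hypothetical device conductance, S (not quoted in the text)

n = carrier_density(G, mu)
print(f"n_trans = {n * 1e-6:.1e} cm^-3")   # -> n_trans = 1.8e+19 cm^-3
```

Note the unit conversion on the mobility: quoting μ in cm^2 V^−1 s^−1 but using SI lengths is a classic source of factor-of-10^4 errors in this estimate.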
Control of carrier density

Transport and optical measurements agree on a bulk hole density in the 2–3 × 10^19 cm^−3 range. We can use this value to determine the position of the Fermi level with respect to the Dirac point, \(E_D - E_F\). The Fermi wave vector is estimated to be

$$k_F = (3\pi^2 n)^{1/3} = 0.9\,\mathrm{nm}^{-1},$$

from which

$$E_D - E_F = \hbar v_F k_F = 295\,\mathrm{meV},$$

using a Fermi velocity [7] \(v_F = 5 \times 10^5\,\mathrm{m\,s}^{-1}\). To reduce this energy shift between the Fermi level and the Dirac point, we then investigate the mixing of the p-type Sb2Te3 nanoplates with the n-type Bi2Te3 material. To do so, we conduct the same reaction as before and replace part of the antimony oleate by bismuth oleate. The reaction leads to the formation of (Sb;Bi)2Te3 heterostructured nanoplates, see Fig. 3a–c. Their lateral extension is reduced as the Bi content rises: typically, 500 nm nanoplates are obtained under Sb-rich conditions, while nanoplates with lateral extension below 200 nm are obtained under Bi-rich conditions. The lamellar character of the material is highlighted by high resolution TEM on nanoplates lying on their side, see Fig. 3d.

Fig. 3: TEM images of (a) Sb2Te3 nanoplates, (b) BiSbTe3 nanoplates, (c) Bi2Te3 nanoplates. (d) High resolution TEM image of a Bi2Te3 nanoplate lying on its edge. (e) Optical carrier density as a function of Bi content in (Sb;Bi)2Te3 heterostructured nanoplates. The error bars have been obtained by repeating the measurement on several samples of a given composition. (f) Current as a function of temperature for a thin film of (Sb70;Bi30)2Te3 nanoplates. (g) Current as a function of applied bias for a thin film of (Sb70;Bi30)2Te3 nanoplates. The measurements are made in vacuum.
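The Fermi-level estimate at the start of this section is a short calculation worth checking numerically. The snippet below uses n = 2.5 × 10^19 cm^−3, a mid-range choice within the quoted 2–3 × 10^19 cm^−3 window, and the Fermi velocity quoted in the text.

```python
import math

hbar = 1.0546e-34   # reduced Planck constant, J s
e = 1.602e-19       # elementary charge, C

n = 2.5e19 * 1e6    # mid-range bulk hole density, cm^-3 -> m^-3
v_F = 5e5           # Fermi velocity used in the text, m/s

# Fermi wave vector for an isotropic 3D carrier gas
k_F = (3 * math.pi**2 * n) ** (1 / 3)       # m^-1
# Dirac-point offset for a linear dispersion E = hbar v_F k
E_meV = hbar * v_F * k_F / e * 1e3          # meV

print(f"k_F = {k_F * 1e-9:.2f} nm^-1")      # ~0.90 nm^-1
print(f"E_D - E_F = {E_meV:.0f} meV")       # ~298 meV, close to the quoted 295 meV
```

The small residual discrepancy (298 vs 295 meV) just reflects where n is picked inside the 2–3 × 10^19 cm^−3 range.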
We synthesized a series of (Sb;Bi)2Te3 nanoplates with various Bi contents and then used the same fitting approach as for the reflectance of the Sb2Te3 film to determine, for each Bi ratio, the plasma frequency and the associated carrier density, see Figs 3e and S11 and Table S3. As the Bi content is increased, the carrier density drops and passes through a minimum around 30% Bi, where it is reduced by a factor of 2.5 compared with pure Sb2Te3 nanoplates. The dependence of the carrier density on the Bi content is not the simple V-shaped curve that might have been expected when switching from a p-type to an n-type material. This results from the combined changes of carrier density and effective mass when the carrier density is estimated from the plasma frequency. We further confirm the reduction of the metallic character by measuring the transport properties of the film, see Fig. 3f,g. (Sb;Bi)2Te3 nanoplate films present a stronger temperature dependence, with a drop of the conductance by a factor of 10 between 300 K and 15 K, while the drop was only 20% for pure Sb2Te3 nanoplates over the same temperature range. The high temperature activation energy extracted from the Arrhenius fit is typically twice as large, 56 meV (compared with 30 meV for Sb2Te3). To unveil the exact nature of the formed (Sb;Bi)2Te3 nanoplates, we used scanning transmission electron microscopy coupled with X-ray energy dispersive spectroscopy, combining nm-scale resolution with chemical composition, see Figs 4, S12 and S13. The (Sb;Bi)2Te3 nanoplates do not actually form a homogeneous alloy, but rather a core-shell structure: the core is made of Bi2Te3, while the shell is made of Sb2Te3. This suggests a higher reactivity of bismuth toward tellurium compared with antimony, and is consistent with our observation that, under similar growth conditions, smaller nanoplates of Bi2Te3 are formed.
Bi is more reactive toward Te, which favors the nucleation step and leads to the formation of many small seeds. Sb, being less reactive than Bi, then reacts with the leftover Te and grows a shell on the Bi2Te3 nanoplates, which behave as nucleation centers. The doping control demonstrated here thus differs from the approach developed for bulk or thin-film material in that charge compensation occurs at the atomic scale within the heterostructure.

Fig. 4: (a) HAADF STEM image of a (Bi,Sb)2Te3 nanoplate. The composition maps of Bi, Te, and Sb for the same area are shown as separate images in parts (b–d), respectively.

In this paper, we investigated the optical and transport properties of Sb2Te3 nanoplates with in mind their possible use as nanoscale topological insulator material. Both reflectance and transport measurements agree on the metallic character of these objects, with a carrier density in the 2–3 × 10^19 cm^−3 range. We then demonstrated the feasibility of conducting transport at the single particle level and determined the mobility to be between 30 and 50 cm^2 V^−1 s^−1. Finally, we demonstrated that building a core-shell (Sb;Bi)2Te3 heterostructure can be reliably used to tune the carrier density by a factor of 2.5, down to the low 10^19 cm^−3 range.

Experimental Section

Chemicals: antimony acetate (Sb(OAc)3, 99.99% metal basis, Aldrich), bismuth acetate (Bi(OAc)3, 99.999% metal basis, Aldrich), selenium powder (99.99%, Strem Chemicals), tellurium powder (99.997% trace metals basis, Alfa Aesar), Na2S nonahydrate (99.99%, Aldrich), oleic acid (90% technical grade, Aldrich), trioctylphosphine (TOP 380, 98%, Cytec), octadecene (ODE, 90% technical grade, Aldrich), hexane (95% RPE-ACS, Carlo Erba), ethanol (anhydrous, 99.9%, Carlo Erba), N-methylformamide (NMFA, 99%, Alfa Aesar).

Preparation of Sb(OA)3: In a 100 mL three-neck flask, 1 g (3.35 mmol) of Sb(OAc)3 and 40 mL of oleic acid are loaded and put under vacuum at 85 °C for 30 min.
The final solution is clear and yellowish and is used as a stock solution. Preparation of Bi(OA)3 In a 100 mL three-neck flask, 0.5 g (1.3 mmol) of Bi(OAc)3 and 20 mL of oleic acid are loaded and put under vacuum at 85 °C for 30 min. The final mixture is used as a stock solution. Preparation of 1 M TOPTe Trioctylphosphine complexed with tellurium is obtained by mixing 2.54 g of Te powder with 20 mL of TOP in a 50 mL three-neck flask. The solution is then degassed under vacuum for 30 min at 80 °C. The mixture is further heated under Ar at 270 °C until the powder is fully dissolved. At this temperature the solution is orange, and it becomes yellow once cooled. The stock solution is kept in the glove box. Synthesis of Sb2Te3 In a 25 mL three-neck flask, 4 mL of the antimony oleate in ODE (0.33 mmol Sb) are diluted with 10 mL of additional ODE. The flask is degassed under vacuum at 85 °C for 30 min. Then the atmosphere is switched to Ar and the temperature is raised to 200 °C. 0.5 mL of 1 M TOPTe is quickly injected and the solution rapidly turns metallic grey. The heating is continued for 5 min before the heating mantle is removed, and an air flow on the outside of the flask is used to cool the solution. The nanoparticles are precipitated by addition of ethanol and centrifuged for 3 min. The clear supernatant is discarded and the pellet is redispersed in hexane. The cleaning procedure is repeated two additional times. Synthesis of Bi2Te3 4 mL of the bismuth oleate solution (0.25 mmol Bi) and 10 mL of ODE are added to a 25 mL three-neck flask. The flask is degassed under vacuum at 85 °C for 30 min. The atmosphere is then switched to Ar and the temperature raised to 200 °C. 0.4 mL of TOPTe (1 M) is quickly injected and the solution rapidly turns metallic grey. The heating is continued for 5 min before the reaction is cooled down. The nanoparticles are precipitated by addition of ethanol and centrifuged for 3 min. The cleaning procedure is repeated two additional times.
Material characterization Transmission electron microscopy (TEM) images were captured on JEOL 2010 and FEI Titan Themis microscopes. For X-ray diffraction (XRD), the nanoparticles were drop-cast on a Si substrate from a hexane solution. Data were collected on a Philips X'Pert diffractometer equipped with a Cu Kα source (0.154 nm). Infrared spectra were measured on a Bruker Vertex 70 FTIR used in an ATR configuration with a ~700 °C globar source and a DTGS detector. The spectra were averaged 32 times with a resolution of 4 cm−1. Energy-dispersive X-ray analysis was conducted on an Oxford probe in a FEI Magellan scanning electron microscope at 10 kV and 100 pA. Transport measurement For ensemble transport measurements, we prepared, using standard lithography methods, gold electrodes on Si/SiO2 wafers (400 nm of oxide). The electrodes are interdigitated and include 25 pairs. Each electrode is 2.5 mm long with a 20 µm spacing. Thin films of nanoplates on these interdigitated gold electrodes are subjected to ligand exchange with S2− ions by dipping the film of nanoplates into a solution of Na2S in N-methylformamide [44]. The film is then rinsed in ethanol and dried. Measurements are made with a Keithley 2400 source-meter, using a probe station operated in air at room temperature. For single-particle measurements, the solution of Sb2Te3 nanoparticles is first drop-cast on a wafer. On this wafer, the level of aggregation is high and prevents single-particle connection. To obtain isolated single nanoplates, this film is transferred onto a Si/SiO2 wafer (300 nm of oxide) using a PDMS stamp. The film is then dipped into a 1% Na2S solution in N-methyl formamide for 45 s. Two electrodes are deposited using a standard e-beam lithography approach. Just before the metal deposition, the surface is cleaned using an Ar ion beam. Finally, 5 nm of titanium and 80 nm of aluminum are evaporated using an e-beam evaporator.
Publisher's note: Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations. References Poudel, B. et al. High-thermoelectric performance of nanostructured bismuth antimony telluride bulk alloys. Science 320, 634–638 (2008). Talapin, D. V., Lee, J. S., Kovalenko, M. V. & Shevchenko, E. V. Prospects of colloidal nanocrystals for electronic and optoelectronic applications. Chem. Rev. 110, 389–458 (2010). Venkatasubramanian, R., Colpitts, T., Watko, E., Lamvik, M. & El-Masry, N. MOCVD of Bi2Te3, Sb2Te3 and their superlattice structures for thin-film thermoelectric applications. J. Cryst. Growth 170, 817–821 (1997). Jeon, H. W., Ha, H. P., Hyun, D. B. & Shim, J. D. Electrical and thermoelectrical properties of undoped Bi2Te3-Sb2Te3 and Bi2Te3-Sb2Te3-Sb2Se3 single crystals. J. Phys. Chem. Solids 52, 579–585 (1991). Chatterjee, A. & Biswas, K. Solution-based synthesis of layered intergrowth compounds of the homologous PbmBi2nTe3n+m series as nanosheets. Angew. Chem. Int. Ed. 54, 5623 (2015). Ando, Y. Topological insulator materials. J. Phys. Soc. Jpn. 82, 102001 (2013). Zhang, H. et al. Topological insulators in Bi2Se3, Bi2Te3 and Sb2Te3 with a single Dirac cone on the surface. Nat. Phys. 5, 438 (2009). Hasan, M. Z. & Kane, C. L. Topological insulators. Rev. Mod. Phys. 82, 3045–3067 (2010). Wei, Z., Rui, Y., Hai-Jun, Z., Xi, D. & Zhong, F. First-principles studies of the three-dimensional strong topological insulators Bi2Te3, Bi2Se3 and Sb2Te3. New J. Phys. 12, 065013 (2010). Miller, G. R. & Li, C. Y. Evidence for the existence of antistructure defects in bismuth telluride by density measurements. J. Phys. Chem. Solids 26, 173 (1965). Horak, J., Cermak, K. & Koudelka, L. Energy formation of antisite defects in doped Sb2Te3 and Bi2Te3 crystals. J. Phys. Chem. Solids 47, 805 (1986). Drasar, C., Lostak, P. & Uher, C. Doping and defect structure of tetradymite-type crystals. J. Electron. Mater.
39, 2162 (2010). Horak, J., Drasar, C., Novotny, R., Karamazov, S. & Lostak, P. Non-stoichiometry of the crystal lattice of antimony telluride. Phys. Status Solidi A 149, 549 (1995). Mott, N. F. Metal-insulator transition. Rev. Mod. Phys. 40, 677–683 (1968). Yang, F. et al. Top gating of epitaxial (Bi1-xSbx)2Te3 topological insulator thin films. Appl. Phys. Lett. 104, 161614 (2014). He, X. et al. Highly tunable electron transport in epitaxial topological insulator (Bi1- xSbx)2Te3 thin films. Appl. Phys. Lett. 101, 123111 (2012). Hong, S. S., Cha, J. J., Kong, D. & Cui, Y. Ultra-low carrier concentration and surface-dominant transport in antimony-doped Bi2Se3 topological insulator nanoribbons. Nat. Commun. 3, 757 (2012). Nasilowski, M., Mahler, B., Lhuillier, E., Ithurria, S. & Dubertret, B. Two-dimensional colloidal nanocrystals. Chem. Rev. 116, 10934–10982 (2016). Saha, S., Banik, A. & Biswas, K. Few-layer nanosheets of n-Type SnSe2. Chem. Eur. J. 22, 15634–15638 (2016). Vidal, F. et al. Photon energy dependence of circular dichroism in angle-resolved photoemission spectroscopy of Bi2Se3 Dirac states. Phys. Rev. B 88, 241410 (2013). Lee, J., Brittman, S., Yu, D. & Park, H. Vapor-liquid-solid and vapor-solid growth of phase-change Sb2Te3 nanowires and Sb2Te3/GeTe nanowire heterostructures. J. Am. Chem. Soc. 130, 6252–6258 (2008). Teweldebrhan, D., Goyal, V. & Balandin, A. A. Exfoliation and characterization of bismuth telluride atomic quintuples and quasi-two-dimensional crystals. Nano Lett. 10, 1209–1218 (2010). Konstantatos, G., Levina, L., Tang, J. & Sargent, E. H. Sensitive solution-processed Bi2S3 nanocrystalline photodetectors. Nano Lett. 8, 4002 (2008). Scheele, M. et al. Synthesis and thermoelectric characterization of Bi2Te3 nanoparticles. Adv. Funct. Mater. 19, 3476 (2009). Wang, W., Poudel, B., Yang, J., Wang, D. Z. & Ren, Z. F. High-yield synthesis of single-crystalline antimony telluride hexagonal nanoplates using a solvothermal approach. J. Am. Chem. 
Soc. 127, 13792–13793 (2005). Shi, W., Zhou, L., Song, S., Yang, J. & Zhang, H. Hydrothermal synthesis and thermoelectric transport properties of impurity-free antimony telluride hexagonal nanoplates. Adv. Mater. 20, 1892 (2008). Shi, W., Yu, J., Wang, H. & Zhang, H. Hydrothermal synthesis of single-crystalline antimony telluride nanobelts. J. Am. Chem. Soc. 128, 16490–16491 (2006). Zhou, N. et al. Size-controlled synthesis and transport properties of Sb2Te3 nanoplates. RSC Adv. 4, 2427 (2014). Zhou, B., Ji, Y., Yang, Y. F., Li, X. H. & Zhu, J. J. Rapid microwave-assisted synthesis of single-crystalline Sb2Te3 hexagonal nanoplates. Cryst. Growth Des. 8, 4394–4397 (2008). Yang, H. Q. et al. Facile surfactant-assisted reflux method for the synthesis of single-crystalline Sb2Te3 nanostructures with enhanced thermoelectric performance. ACS Appl. Mater. Interfaces 7, 14263–14271 (2015). Fei, F. et al. Solvothermal synthesis of lateral heterojunction Sb2Te3/Bi2Te3 nanoplates. Nano Lett. 15, 5905–5911 (2015). Zhao, Y. & Burda, C. Chemical synthesis of Bi(0.5)Sb(1.5)Te3 nanocrystals and their surface oxidation properties. ACS Appl. Mater. Interfaces 1, 1259–1263 (2009). Gupta, G. & Kim, J. Facile synthesis of hexagonal Sb2Te3 nanoplates using Ph2SbTeR (R = Et, Ph) single source precursors. Dalton Trans. 42, 8209 (2013). Garje, S. S. et al. A new route to antimony telluride nanoplates from a single-source precursor. J. Am. Chem. Soc. 128, 3120–3121 (2006). Schulz, S. et al. Synthesis of hexagonal Sb2Te3 nanoplates by thermal decomposition of the single-source precursor (Et2Sb)2Te3. Chem. Mater. 24, 2228–2234 (2012). Stepanov, N. P., Kalashnikov, A. A. & Ulashkevich, Yu. V. Optical functions of Bi2Te3-Sb2Te3 solid solutions in the range of plasmon excitation and interband transitions. Opt. Spectrosc. 109, 893–898 (2010). Lucovsky, G., White, R. M., Benda, J. A. & Revelli, J. F. Infrared-reflectance spectra of layered Group-IV and Group-VI transition-metal dichalcogenides.
Phys. Rev. B 7, 3859 (1973). Moreira, H. et al. Electron cotunneling transport in gold nanocrystal arrays. Phys. Rev. Lett. 107, 176803 (2011). Tran, T. B. et al. Multiple cotunneling in large quantum dot arrays. Phys. Rev. Lett. 95, 076806 (2005). Kuemmeth, F., Bolotin, K. I., Shi, S. F. & Ralph, D. C. Measurement of discrete energy-level spectra in individual chemically-synthesized gold nanoparticles. Nano Lett. 8, 4506 (2008). Wang, H. et al. Effects of electron-phonon interactions on the electron tunneling spectrum of PbS quantum dots. Phys. Rev. B 92, 041403 (2015). Wang, H. et al. Transport in a single self-doped nanocrystal. ACS Nano 11, 1222–1229 (2017). Kim, Y. et al. Structural and thermoelectric transport properties of thin films grown by molecular beam epitaxy. J. Appl. Phys. 91, 715 (2002). Lhuillier, E., Liu, H., Guyot-Sionnest, P. & Heng, L. A mirage study of CdSe colloidal quantum dot films, Urbach tail, and surface states. J. Chem. Phys. 137, 15704 (2012). We thank Emmanuelle Lacaze for AFM imaging. We thank Yves Borenstein and Paola Aktinson for fruitful discussions. We acknowledge the use of cleanroom facilities from the consortium "Salles Blanches Paris Centre - SBPC". We thank the Agence Nationale de la Recherche for funding through the grants Nanodose and H2DH. This work has been supported by the Region Ile-de-France in the framework of DIM Nano-K via the grant dopQD. This work was supported by French state funds managed by the ANR within the Investissements d'Avenir programme under reference ANR-11-IDEX-0004-02, and more specifically within the framework of the Cluster of Excellence MATISSE. Wasim J. Mir acknowledges CEFIPRA for a Raman-Charpak fellowship. Wasim J. Mir and Alexandre Assouline contributed equally to this work. Sorbonne Universités, UPMC Univ. Paris 06, CNRS-UMR 7588, Institut des NanoSciences de Paris, 4 place Jussieu, 75005, Paris, France Wasim J.
Mir, Clément Livache, Bertille Martinez, Nicolas Goubet & Emmanuel Lhuillier Department of Chemistry, Indian Institute of Science Education and Research (IISER), Pune, 411008, India Laboratoire de Physique et d'Étude des Matériaux, PSL Research University, CNRS UMR 8213, Sorbonne Universités UPMC Univ Paris 06, ESPCI ParisTech, 10 rue Vauquelin, 75005, Paris, France Alexandre Assouline, Xiang Zhen Xu, Sandrine Ithurria & Hervé Aubin Laboratoire de Photonique et de Nanostructures (CNRS-LPN), Route de Nozay, 91460, Marcoussis, France Gilles Patriarche W.M., S.I., N.G. and E.L. synthesized the material; W.M., C.L. and B.M. prepared the electrodes and conducted the ensemble transport measurements. A.A. and H.A. performed the single-particle measurements. X.-Z.X. and G.P. did the TEM imaging. All authors analysed the data and E.L. wrote the manuscript. Correspondence to Emmanuel Lhuillier. Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder.
To view a copy of this license, visit http://creativecommons.org/licenses/by/4.0/.
Regional patterns and trends of hearing loss in England: evidence from the English Longitudinal Study of Ageing (ELSA) and implications for health policy Dialechti Tsimpida (ORCID: orcid.org/0000-0002-3709-5651), Evangelos Kontopantelis, Darren M. Ashcroft & Maria Panagioti Hearing loss (HL) is a significant public health concern globally and is estimated to affect over nine million people in England. The aim of this research was to explore the regional patterns and trends of HL in a representative longitudinal prospective cohort study of the English population aged 50 and over. We used the full dataset (74,699 person-years) of self-reported hearing data from all eight Waves of the English Longitudinal Study of Ageing (ELSA) (2002–2017). We examined the geographical identifiers of the participants at the Government Office Region (GOR) level and the geographically based Index of Multiple Deprivation (IMD). The primary outcome measure was self-reported HL; it consisted of a merged category of people who rated their hearing as fair or poor on a five-point Likert scale (excellent, very good, good, fair or poor) or responded positively when asked whether they find it difficult to follow a conversation if there is background noise (e.g. noise from a TV, a radio or children playing). A marked elevation in HL prevalence (10.2%) independent of the age of the participants was observed in England in 2002–2017. The mean HL prevalence increased from 38.50 (95%CI 37.37–39.14) in Wave 1 to 48.66 (95%CI 47.11–49.54) in Wave 8. We identified three critical patterns of findings concerning regional trends: the highest HL prevalence among samples with equal means of age was observed in GORs with the highest prevalence of participants in the most deprived (IMD) quintile, in routine or manual occupations and misusing alcohol.
The adjusted HL predictions at the means (APMs) showed marked regional variability and hearing health inequalities between Northern and Southern England that were previously unknown. A sociospatial approach is crucial for planning sustainable models of hearing care based on actual needs and reducing hearing health inequalities. The Clinical Commissioning Groups (CCGs) currently responsible for the NHS audiology services in England should not consider HL an inevitable accompaniment of older age; instead, they should incorporate socio-economic factors and modifiable lifestyle behaviours for HL within their spatial patterning in England. Hearing loss (HL) is a significant public health concern that costs the UK economy £25 billion a year in productivity and unemployment [1], an amount that equates to one-fifth of the total annual health spending in England in 2018/19 [2]. HL affects over nine million people in England, and it is estimated that, by 2035, the number of people with HL will rise to around 13 million. The above estimates, along with the local hearing needs in England, are calculated by population projections based on the study of Davis [3], who collected and analysed audiological data in the 1980s. This study remains the primary source of local estimates of HL prevalence [4]; recently, these estimates have also been visualised in the form of a hearing map, offering a rough guide to the prevalence of HL among adults across the UK [5]. Despite its importance to the history of hearing care in the UK, Davis's study had some significant limitations. First, the English samples were solely derived from the cities of Nottingham and Southampton, which are very unlikely to be representative of the whole population of England [3]. The role of place in health is well-established [6, 7], and research has shown that it affects health outcomes [6]. 
Second, scientific thinking in HL research was formed in previous decades around the concepts of older age and the male sex being the main leading causes of HL in adults, with little or no consideration for modifiable risk factors for hearing acuity. However, recent findings have suggested that socio-economic factors and modifiable lifestyle behaviours are associated with the likelihood of HL as firmly as well-established demographic factors such as age and sex [8]. Thus, the study of Davis did not consider in its estimations the effects of place and socio-economic factors such as high occupational noise exposure from manual occupations [9] and differences in regions with strong and weak manufacturing industries [10]. The Clinical Commissioning Groups (CCGs) are currently responsible for the NHS audiology services in England, including the provision of hearing aids [11]. However, the lack of robust hearing data makes it difficult to plan efficient, effective and sustainable models of hearing care based on patient needs [10]. Exploratory spatial data analysis of hearing data from a representative population sample in England would reveal regional patterns and trends of HL, shedding light on potential socio-economic inequalities in hearing health. This updated analysis of HL prevalence could inform the health policy strategies of the NHS England and Department of Health, particularly in respect of the new governmental programme, 'Action Plan on Hearing Loss' [1]. The aim of this study was, therefore, to explore regional patterns and trends of HL in a representative longitudinal prospective cohort study of the English population aged 50 and over. Study population The study utilised data from the English Longitudinal Study of Ageing (ELSA). The ELSA is a longitudinal prospective cohort study that collects multidisciplinary data from a nationally representative sample of community-dwelling middle-aged and older (aged 50 and above) adults in England [12]. 
The study started in 2002 and is collecting responses every 2 years on participants' health, social, wellbeing and economic circumstances. The current sample contains data from eight Waves, covering the period 2002–2017 [13]. As the ELSA follows a longitudinal design, the sample is comprised of a sequence of observations on the same individuals across Waves and the refreshment samples (Cohorts 3, 4, 6 and 7) [13]. Proxy interviews were carried out in case an ELSA panel member refused to further participate [14]. In our analyses, we used the full dataset (74,699 person-years) of self-reported hearing data from all eight Waves of the ELSA. The ELSA follows the sampling strategy of the Health Survey for England (HSE), which ensures that every address on the small users' Postcode Address File (PAF) in England has an equal chance of inclusion. Field household contact rates of over 96% were achieved. The study excluded cases not belonging to the target population through 'terminating events', such as deaths, institutional moves and moves out of England since taking part in the HSE [15]. Hearing acuity Self-rated hearing data was collected from participants across all Waves. According to the study's documentation, self-reported HL was defined as declarations of fair or poor hearing on a five-point Likert scale (excellent, very good, good, fair or poor) or 'Yes' responses to the question concerning whether or not the participants find it difficult to follow a conversation if there is background noise (e.g. noise from a TV, a radio or children playing) [13, 16]. Geographical variables The geographically related information of the ELSA dataset was in the form of identifiers such as the Government Office Region (GOR) [17], and indices that are used as measure of poverty of different geographical areas, such as the Index of Multiple Deprivation (IMD). 
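The binary outcome definition above can be expressed as a small helper function; this is a minimal sketch in which the argument names are illustrative rather than the actual ELSA variable names:

```python
def self_reported_hl(rating, difficulty_in_noise):
    """Binary hearing-loss outcome used in the study: 'fair' or 'poor'
    on the five-point self-rating scale, OR a 'Yes' to the question on
    difficulty following a conversation against background noise.
    (Argument names are illustrative, not ELSA variable names.)"""
    return rating in ("fair", "poor") or bool(difficulty_in_noise)
```

A participant who rates their hearing as 'good' but reports difficulty in background noise is therefore still classified as having HL.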
The geographical variables were provided to the first author under a Special License and Secure Access agreement (UK Data Service Project Number: 121175). Each respondent's geography is determined by their residence postcode at the time of the survey interview date. Different versions of the IMD were provided for the eight Waves of the ELSA: IMD 2004 [18] for Waves 1–3, IMD 2007 [19] for Wave 4, IMD 2010 [20] for Waves 5–7 and IMD 2015 [21] for Wave 8. The IMD was provided in quintiles (the first quintile being the least deprived, the fifth being the most deprived). The nine GORs represent the highest tier of sub-national division in England (North East, North West, Yorkshire and the Humber, East Midlands, West Midlands, East of England, London, South East, South West). Covariates For covariates, we examined non-modifiable factors (age, sex), partly modifiable indicators of socio-economic position (SEP) (education, occupation, income, wealth) and alcohol consumption as a fully modifiable lifestyle risk factor for HL. Age was assessed both as a discrete (as only certain values could be taken) and categorical variable in three groups (50–64, 65–74, 75–89). We used this categorisation to allow for a comparison with Benova et al. [22], who examined the association of SEP with self-reported hearing difficulties in Wave 2 of the ELSA. We considered five categories regarding highest educational attainment: no qualifications, foreign or other, O level Certificate of Secondary Education, A level (Level 3 Qualification of the National Qualifications Framework) and a degree or higher education. Tertiles of self-reported occupation were based on the National Statistics Socio-economic Classification (NS-SEC): routine and manual occupations; intermediate; managerial and professional. The relative financial position of the participants was captured by quintiles of net household income (the first quintile being the lowest, the fifth being the highest). 
Wealth was examined in quintiles of the net total non-pension wealth reported at the household unit level (the first quintile being the highest, the fifth being the lowest). Alcohol consumption was selected as the only lifestyle factor that was consistently recorded in all Waves. We constructed a continuous variable to represent the sum of units of alcohol that each participant consumed during the last 7 days. This variable was dichotomised into those that consumed more than 14 units of alcohol in the last 7 days and those that did not, using the Chief Medical Officer's Drinking Guidelines [23]. Categorical variables are presented as absolute (n) and relative (%) frequencies, while continuous variables are presented through their mean and standard deviation. We used the full dataset from the eight Waves (74,699 person-years) to strengthen the argument that there is a correlation between spatial variables and HL over time. A small number of cases (one in Wave 0 and eight in Wave 2) in the geographical identifiers had missing values because the address was located within Wales (which uses its own deprivation index). Due to the low proportion of missingness in the variables, records with missing data were excluded from analyses (3.2% of all records in listwise deletion). We used Bartlett's test for homogeneity of variances to test that age variances were equal for all samples. Following this, we applied one-way analysis of variance (ANOVA) to compare the means of age among GOR samples in all Waves. We also computed adjusted predictions at the means (APMs) and the marginal effects at the means (MEMs) [24] for the HL prevalence in each Wave of the ELSA, with age, sex, education, occupation, income, wealth, IMD and alcohol consumption as the factor variables. We used local spatial analysis statistical tools for analysing spatial distributions, patterns, processes and relationships in the geographical data. 
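The comparison of mean age across regions described above can be sketched in plain Python. The study used Stata, so this is only an illustrative implementation of the one-way ANOVA F statistic, assuming each region's ages are supplied as a list:

```python
from statistics import mean

def one_way_anova_f(groups):
    """One-way ANOVA F statistic across k groups (here, the mean age
    compared across the nine GOR samples of a given Wave)."""
    k = len(groups)
    n = sum(len(g) for g in groups)
    grand = mean(x for g in groups for x in g)
    # Between-group sum of squares: spread of group means around the grand mean.
    ss_between = sum(len(g) * (mean(g) - grand) ** 2 for g in groups)
    # Within-group sum of squares: spread of observations inside each group.
    ss_within = 0.0
    for g in groups:
        m = mean(g)
        ss_within += sum((x - m) ** 2 for x in g)
    # F = between-group mean square / within-group mean square
    return (ss_between / (k - 1)) / (ss_within / (n - k))
```

A large F (small p-value) would indicate that at least one region's mean age differs; in the study the null was not rejected in several Waves, supporting comparisons of HL prevalence across regions with similar age profiles.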
We used the Spatial Join tool to aggregate the number of cases of self-reported HL to total responses of hearing acuity in each polygon (GOR) in order to visualise the prevalence rates of HL per GOR. We used the Natural Breaks (Jenks) classification to optimise the arrangement of the sets of HL values into 'natural' classes, a method also known as the goodness of variance fit (GVF). Furthermore, we used the Hot Spot Analysis (Getis-Ord Gi*) as a mapping cluster tool to identify the locations of statistically significant Hot Spots and Cold Spots. The Getis-Ord Gi* is an inferential statistic for the conceptualisation of spatial relationships, used when one is looking for unexpected spatial spikes of high values. In essence, this tool works by looking at each feature within the context of neighbouring features and assessing whether high or low values cluster spatially. Due to the small scale of the analysis, we chose this local spatial statistic tool so that the value of each feature could be included in its own analysis, along with the neighbouring features. The Getis-Ord local statistic is given as: $$ {G}_i^{\ast }=\frac{\sum_{j=1}^n{w}_{i,j}{x}_j-\overline{X}{\sum}_{j=1}^n{w}_{i,j}}{S\sqrt{\frac{\left[n{\sum}_{j=1}^n{w}_{i,j}^2-{\left({\sum}_{j=1}^n{w}_{i,j}\right)}^2\right]}{n-1}}} $$ Here, \( {x}_j \) is the attribute value for feature \( j \), \( {w}_{i,j} \) is the spatial weight between features \( i \) and \( j \), \( n \) is equal to the total number of features and: $$ \overline{X}=\frac{\sum_{j=1}^n{x}_j}{n} $$ $$ S=\sqrt{\frac{\sum_{j=1}^n{x}_j^2}{n}-{\left(\overline{X}\right)}^2} $$ The \( {G}_i^{\ast } \) statistic is a z-score, so no further calculations are required. The spatial relationship was defined according to the 'Contiguity Edges Corners' method, which was selected in order to allow all neighbouring polygon features that share a boundary or node to influence the target polygon feature's computations.
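As a concrete illustration of the statistic above, here is a minimal pure-Python implementation. The study used ArcGIS's Hot Spot Analysis tool; this toy version assumes a small, explicit weight matrix in which contiguity is binary and each feature is included in its own neighbourhood, as in the Gi* convention:

```python
import math

def getis_ord_gi_star(values, weights):
    """Getis-Ord Gi* z-scores, following the formula in the text.

    values  : attribute value x_j for each polygon (e.g. HL prevalence
              per GOR)
    weights : weights[i][j] = spatial weight w_ij; for a binary
              contiguity scheme this is 1 for neighbours sharing a
              boundary or node, 0 otherwise, with w_ii = 1 so each
              feature enters its own neighbourhood.
    """
    n = len(values)
    x_bar = sum(values) / n
    s = math.sqrt(sum(x * x for x in values) / n - x_bar ** 2)
    z_scores = []
    for i in range(n):
        w = weights[i]
        sum_w = sum(w)
        sum_w2 = sum(wj * wj for wj in w)
        numerator = sum(wj * xj for wj, xj in zip(w, values)) - x_bar * sum_w
        denominator = s * math.sqrt((n * sum_w2 - sum_w ** 2) / (n - 1))
        z_scores.append(numerator / denominator)
    return z_scores
```

Large positive z-scores flag Hot Spots (clusters of high values), large negative z-scores flag Cold Spots, matching the regional clusters reported below.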
Confidence levels of 90, 95 and 99% were considered in the calculations of Getis-Ord Gi*. Data were analysed using Stata version 14 [25] and ESRI ArcGIS Desktop 10.7.1 [26]. The results of the one-way ANOVA indicated that the null hypothesis was not rejected in Waves 2, 6, 7 and 8 (p > 0.05), meaning that there was no evidence of a difference in mean age among the GORs' samples in these Waves [27]. In addition, the mean age did not differ significantly across Waves for all samples (p = 0.996). Using Bartlett's test, we found that the variances of age among GORs were equal in Waves 3, 5, 6, 7 and 8 and across all Waves. A table presenting the one-way ANOVA test results – including sums of squares, mean squares, degrees of freedom and the F-values and p-values for mean age across the nine GORs in eight Waves of the ELSA – is provided in Additional File 1. Table 1 shows the participants' non-modifiable demographic factors and HL prevalence in England in eight Waves of the ELSA. We observed considerable variation in the prevalence rate of HL among GORs (normalised per GOR population), which reached 12.3 percentage points. In Wave 5, the prevalence of HL was 39.55% in the South East (95%CI 37.12–42.04) versus 51.85% in the North East (95%CI 47.66–56.02). Table 1 Participants' non-modifiable demographic factors and hearing loss prevalence in England in 8 Waves of English Longitudinal Study of Ageing (ELSA)
The rates reached their highest in Wave 7 (2014–15), with 50.12% of the participants self-reporting HL (95%CI 45.26–54.98) and 39.12% residing in an area in the most deprived IMD quintile (95%CI 36.62–41.67). Table 2 Participants' socioeconomic and lifestyle factors and hearing loss prevalence in England in 8 Waves of English Longitudinal Study of Ageing (ELSA) Moreover, the highest prevalence of HL was reported in the GORs with the highest prevalence of participants belonging to the group of routine or manual occupations. In Waves 1–5, participants from the North East had both the highest rates of routine or manual occupations and the highest prevalence rates of HL among all GORs. Finally, we observed an increasing trend over time in total alcohol misuse (alcohol consumption above the low-risk level guidelines) in all Waves; the prevalence of alcohol misuse increased in 2002–2017, going from an average of 10.17% in Wave 1 to 33.98% in Wave 8. The South West had one of the highest prevalence rates of alcohol misuse, in parallel with one of the highest prevalence rates of self-reported HL. It is worth mentioning that its sample was of a higher SEP in all Waves (with respect to education, occupation, income, wealth and IMD).
Figure 2 depicts the Hot Spot and Cold Spot analyses in England, based on the Getis-Ord Gi* statistic; the analyses identified statistically significant spatial clusters of high values (Hot Spots) and low values (Cold Spots) in all Waves of the ELSA. We observed some statistically significant spatial clusters of HL prevalence covering specific GORs in England as all Hot and Cold Spots were found in the northern and southern parts of England, respectively. In essence, we observed spatial clustering of high (Hot) or low (Cold) values that were more pronounced than one would expect in a random distribution of these same values. In Waves 1–6, the z-score value in the North East GOR was positive, which means that the spatial distribution of high values in this part of England was more spatially clustered than would be expected if the underlying spatial processes were truly random. On the other hand, during the same period the z-score value in the South East GOR was negative, which means that the spatial distribution of low values in the dataset was more spatially clustered than would be expected if the underlying spatial processes were truly random. Map of England by Government Office Regions showing the spatial clusters of hearing loss prevalence according to Hot Spot and Cold Spot analyses a using the Getis-Ord Gi* statistic in eight Waves of the English Longitudinal Study of Ageing (ELSA). a The Hot Spots and Cold Spots indicate unexpected spatial spikes of high or low values, respectively, showing that the distribution of these values in the dataset is more spatially clustered than would be expected if underlying spatial processes were truly random. This work by Dialechti Tsimpida is licensed under a Creative Commons Attribution 4.0 International License. Figure 3 shows the predicted probabilities of HL prevalence in each region and Wave of the ELSA, holding all other variables in the model at their means. 
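The Gi* value ArcGIS reports for each region is a z-score, which is why positive values flag Hot Spots and negative values Cold Spots. A minimal sketch of the statistic is below; the four-region prevalence vector and the binary contiguity weights are hypothetical (the paper's actual neighbourhood definition is not stated in this excerpt).

```python
from math import sqrt

def getis_ord_gi_star(values, weights_row):
    """Getis-Ord Gi* z-score for one region.

    `weights_row` is that region's row of a binary spatial weight matrix
    (1 for the region itself and its neighbours, 0 otherwise). Large
    positive values indicate Hot Spots, large negative values Cold Spots.
    """
    n = len(values)
    x_bar = sum(values) / n
    s = sqrt(sum(v * v for v in values) / n - x_bar ** 2)
    w_sum = sum(weights_row)
    w_sq_sum = sum(w * w for w in weights_row)
    numerator = sum(w * v for w, v in zip(weights_row, values)) - x_bar * w_sum
    denominator = s * sqrt((n * w_sq_sum - w_sum ** 2) / (n - 1))
    return numerator / denominator

# Hypothetical HL prevalence for four regions.
prevalence = [10.0, 10.0, 50.0, 50.0]
z_hot = getis_ord_gi_star(prevalence, [0, 0, 1, 1])   # high-value cluster
z_cold = getis_ord_gi_star(prevalence, [1, 1, 0, 0])  # low-value cluster
```

Comparing the z-score against the normal quantiles for 90/95/99% confidence reproduces the three significance bands used in the maps.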
The results tell us that if we had two otherwise-average individuals in each Wave, the probability of them having HL would vary significantly among regions. For example, in Wave 1, one's probability of having HL in Yorkshire and the Humber would be 10.2 percentage points higher than it would be for an otherwise-comparable participant in London (Yorkshire and the Humber APM = .437, London APM = .335, MEM = .437–.335 = .102) (please also see Additional File 1). The predicted probability of having HL demonstrated an increasing trend over time in all regions. The maximum increase of predicted HL probability among older adults of significantly equal age in the 15-year period was in the South West, which had a 45% increase (Wave 1: 37.3 [34.4–40.2], Wave 8: 54.1 [48.9–59.2]). Predicted probabilities and 95% Confidence Intervals of hearing loss (HL) prevalence at Regions of England in eight Waves of the English Longitudinal Study of Ageing (ELSA) a, b. a The x-axis refers to ELSA Wave (Wave 1: 2002–3, Wave 2: 2004–5, Wave 3: 2006–7, Wave 4: 2008–9, Wave 5: 2010–11, Wave 6: 2012–13, Wave 7: 2014–15, Wave 8: 2016–17), and the y-axis refers to prevalence rates of HL per GOR in the specified 2-year period. b The factor variables (age, sex, education, occupation, income, wealth, IMD and alcohol consumption) were held at their means for each ELSA Wave. In this study, we examined the regional patterns and trends of HL prevalence in England in the ELSA over 15 years (2002–2017). We found that among samples with equal means of age, there was a 15-year increasing trend in HL prevalence in all five classes. The mean HL prevalence increased from 38.50 (95%CI 37.37–39.14) in Wave 1 to 48.66 (95%CI 47.11–49.54) in Wave 8. 
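The APM/MEM arithmetic quoted above (e.g. .437 − .335 = .102) comes from evaluating a fitted logistic model at the covariate means, following Williams's margins approach. A sketch with made-up coefficients is below; the actual model estimates live in Additional File 1, so everything numeric here is an assumption for illustration.

```python
from math import exp

def adjusted_prediction_at_means(intercept, region_coef, region_dummy,
                                 coefs, covariate_means):
    """APM: predicted probability of HL with every covariate at its mean."""
    z = intercept + region_coef * region_dummy
    z += sum(b * m for b, m in zip(coefs, covariate_means))
    return 1 / (1 + exp(-z))  # logistic (inverse-logit) link

# Made-up logit coefficients: intercept, one region dummy, and two
# covariates (age, IMD quintile) held at hypothetical sample means.
coefs, means = [0.05, 0.10], [65.0, 3.0]
apm_region = adjusted_prediction_at_means(-3.5, 0.45, 1, coefs, means)
apm_base = adjusted_prediction_at_means(-3.5, 0.45, 0, coefs, means)
# Marginal effect at the means (MEM) of living in the dummy-coded region.
mem = apm_region - apm_base
```

The MEM is reported in probability units, which is why the Yorkshire/London gap is best read as percentage points rather than a relative percentage.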
We identified three critical patterns of findings concerning regional trends: the highest HL prevalence among samples with equal means of age was observed in GORs with the highest prevalence of participants (a) in the most deprived (IMD) quintile (fifth), (b) in routine or manual occupations and (c) that misused alcohol, irrespective of SEP. The APMs for HL showed marked regional variability and evidence of a North–South divide. Comparison with previous literature Previous research has utilised geographical indices representing social and material disadvantages for identifying health inequalities [7]. Our study provided evidence for the existence of sociospatial inequalities in HL, adding to our previous work that challenged the existing conceptualisation of HL as an inevitable accompaniment of growing old [8]. Globally, there is a dramatic increase in HL cases, going from 42 million people in 1985 to about 360 million in 2011 and over 466 million in 2019 [28]. Our study presented a similar increase pattern but also showed that the increase in HL prevalence is not related to the ageing of the population, as widely believed [29, 30], but could potentially be due to social and lifestyle changes in the population [31]. Supporting our assumption, a previous study found a decline in HL prevalence among US adults aged 20–69 from the 2011–2012 cycle of the US National Health and Nutrition Examination Survey when compared to participants from the previous decade [32]. The explanation given by the authors for the declining prevalence was a reduction in exposure to occupational noise and the beneficial lifestyle changes of the participants, though that population study is not comparable to the ELSA cohort. Our study revealed a previously unknown North–South divide in hearing health inequalities. The North-South gap is not surprising, as there is a significant history of socio-economic and health disparities between Northern and Southern England [33, 34]. 
The higher rates of unemployment and no qualifications in the North than in the South are in line with previous research in England [35]. We also found that alcohol misuse was high in areas with a high prevalence of HL, such as the South West, which over time developed one of the highest prevalence rates of alcohol misuse despite its higher socio-economic status compared to other GORs. This finding supports a previous study on the ELSA that found that alcohol intake above the low-risk-level guidelines [23] was significantly associated with HL among older adults in England, along with socio-economic factors [8]. However, the findings from this study indicate that the relationship between SEP and drinking habits is rather complicated; the last statistical release on adult drinking habits in Great Britain showed that those in managerial and professional occupations drink alcohol in higher proportions compared to those in routine and manual occupations. In addition, similarly to our study, it was found that the South East GOR, when compared to other GORs in England, had a higher proportion of adults drinking alcohol the week before the interview [36]. This is the first study to investigate the geographical patterns and trends of HL in a representative cohort of older adults and among adults in general. The findings provide evidence that HL has increased over time, but the increasing trend in HL prevalence is not age-related, as widely believed. We found wide variation in HL prevalence in representative samples from different regions in England that had similar age profiles, and the rate of increase in HL ranged from 3.2 to 45%. Thus, the strengths of this study are that HL is highlighted as an increasingly important public health problem in England and a spatial dimension is added to the evidence for the association of socio-economic and lifestyle determinants of HL among samples of older adults. However, there are also important limitations. 
First, the unit of our analyses (in GORs) had a low geographic resolution, which introduces uncertainty in the observed relations and may fail to reveal geographic details that we could notice with smaller geographic units. Moreover, it was not possible to perform geographically weighted regression analyses; a minimum of 30 input features is required (instead of nine GORs) to explore the relationships between the areas' socio-economic characteristics and HL prevalence. Furthermore, the ELSA's size is regarded as too small to conduct geographic analysis on a larger scale, as numerous participants would be required in each unit. Future research should build on this analysis using small area statistics (such as Lower Layer Super Output Areas) and investigate more localised patterns and determinants of place-to-place HL differences in England [35]. Such research would help to quantify potential 'area effects' on hearing health outcomes, allowing for generalisable results of spatial associations with HL rates. Moreover, the research could help to separate the role of proxies of areas (such as area deprivation) to individual-level determinants of HL (such as lifestyle behavioural choices), as individual choices are rooted in the broader social and economic structural contexts [31]. We were aware that the self-reported measures of HL in the ELSA might underestimate the real HL outcomes; for this reason, we conducted additional work to examine the validity of self-reported data through comparisons with the findings of objective HL measures available only in Wave 7 of the ELSA. We found that the self-reported measures correctly classified seven in every ten people with objectively assessed HL [16]. However, for the scope of our analyses, we assumed the available hearing measure as a suitable indicator of HL. Another limitation is that the ELSA concentrates on individuals living in private households, so individuals living in institutions (e.g. 
residential and nursing homes) are not included in the samples [15]. Furthermore, ELSA does not capture the type of HL; future analyses examining types of HL would add important value. Finally, the domains of IMD are not provided with the ELSA geography file, thereby not allowing further exploration. There was a small number of respondents moving to a different area between Waves, which resulted in an associated change in the IMD quintile [14]. However, a similar number of respondents experienced an increase or a decline in their IMD quintile, and the total numbers of movers did not exceed 1% for any Wave [14]; thus, we concluded that this would be unlikely to affect the validity of our findings. Research and policy implications According to the Global Burden of Disease Study, HL is the third leading cause of years lived with disability in England [37], and accurate prevalence estimates are needed to inform the strategic planning of hearing health policy and health services. To date, prevalence estimates of HL in the UK are still based on the Medical Research Council National Hearing Study [3]. In addition, NHS England has recently published the NHS Hearing Loss data tool [38], which provides estimates of the number of people with HL between 2015 and 2035 in order to help organisations plan services on local authority (LA) and CCG levels. However, according to our study, the above tool is inappropriate for estimating the number of people with HL; this study showed that in a representative cohort, there were important differences across different regions in England, which contradicts the Hearing in Adults study that did not find differences across the only four British cities that it was based on (Cardiff, Glasgow, Nottingham and Southampton) [3]. HL has affected a markedly larger proportion of the UK population in 2002–2017. 
The high levels of spatial clustering for hearing-related outcomes have significant implications for the planning of health services, including the availability of access to hearing aids. The high-risk regions in England must be expansively recognised based on their spatiotemporal HL profiles [39]. This kind of spatial evidence could provide commissioners with robust data based on actual needs, rather than inaccurate estimates of HL prevalence. Such prior knowledge could potentially have altered the North Staffordshire CCG's decision in 2015 to end the routine free provision of hearing aids for people with mild or moderate HL in their area of duty [40], where according to our analyses, the burden of HL is greatest. This study revealed, therefore, the potential risks from the paucity of robust epidemiological hearing data, which are needed now as much as ever to increase understanding of the impact of social, financial and personal health advantages on HL across the life course [1]. The findings from the time-series analyses in this manuscript might encourage HL preventive strategies, including interventions to promote 'healthier lifestyles' and targeted interventions in areas where there are high levels of deprivation clustering. Future research should also explore spatiotemporal diffusion patterns in the ELSA's international sister studies to acquire a global perspective of socio-spatial inequalities in hearing health. We have identified elevated social and geographical patterning of trends in HL; different levels of exposure to socio-economic and lifestyle factors lead to geographical hearing health variation among English populations of significantly equal age. The socio-economic, lifestyle and regional patterns and trends in HL support the argument that the increase of HL is not 'age-related', as widely believed, and HL, therefore, might be a highly preventable lifestyle-related condition. 
These findings also point to the need for a stronger health policy response. According to the inextricable link of health and geography, the regional variation in hearing health outcomes should be examined for health policy decisions according to spatial needs. The audiological services may need to be redesigned to take socio-economic and lifestyle risk factors for HL into account in order to prevent the further exacerbation of inequalities in regions with spatial hearing health inequality. The English Longitudinal Study of Ageing dataset is publicly available via the UK Data Service (http://www.ukdataservice.ac.uk). The geographical variables were provided to the first author under a Special License and Secure Access agreement (UK Data Service Project Number: 121175), and so are not publicly available. Statistical code is available from the corresponding author upon reasonable request at [email protected]. APMs: Adjusted Predictions at the Means CCGs: Clinical Commissioning Groups ELSA: English Longitudinal Study of Ageing GOR: Government Office Regions HSE: Health Survey for England IMD: Index of Multiple Deprivation NS-SEC: National Statistics socio-economic classification PAF: Postcode Address File SEP: Socioeconomic position Hill S, Holton K, Regan C. Action plan on hearing loss. London UK: NHS Engl Dep Heal; 2015. Service P, Schemes P. Budget 2018. 2019; March 2016:2018–9. https://www.kingsfund.org.uk/publications/budget-2018-what-it-means-health-and-social-care. Davis A. Hearing in adults: the prevalence and distribution of hearing impairment and reported hearing disability in the MRC Institute of Hearing Research's National Study of Hearing. London: Whurr Publishers; 1995. Akeroyd MA, Foreman K, Holman JA. Estimates of the number of adults in England, Wales, and Scotland with a hearing loss. Int J Audiol. 2014;53:60–1. National Community Hearing Association. Metadata: the hearing map. 2016. https://the-ncha.com/resources/hearing-map/las-uk/. Curtis S, Jones IR. 
Is there a place for geography in the analysis of health inequality? Sociol Heal Illn. 1998;20:645–72. Cabrera-Barona P, Blaschke T, Gaona G. Deprivation, healthcare accessibility and satisfaction: geographical context and scale implications. Appl Spat Anal Policy. 2018;11:313–32. Tsimpida D, Kontopantelis E, Ashcroft D, Panagioti M. Socioeconomic and lifestyle factors associated with hearing loss in older adults: a crosssectional study of the English Longitudinal Study of Ageing (ELSA). BMJ open. 2019;9(9):e031030. Lie A, Skogstad M, Johannessen HA, Tynes T, Mehlum IS, Nordby K-C, et al. Occupational noise exposure and hearing: a systematic review. Int Arch Occup Environ Health. 2016;89:351–72. England NHS. Commissioning Services for People with Hearing Loss : A Framew Clin Comm groups; 2016. p. 1–76. NICE. Hearing loss in adults: assessment and management. 2018. Steptoe A, Breeze E, Banks J, Nazroo J. Cohort profile: the English longitudinal study of ageing. Int J Epidemiol. 2013;42:1640–8. Zaninotto P, Steptoe A. English longitudinal study of ageing. In: Encyclopedia of Gerontology and Population Aging; 2019. p. 1–7. https://www.elsa-project.ac.uk/study-documentation. Banks J, Nazroo J, Steptoe A. Wave 8: the dynamics of ageing. 2018. http://www.elsa-project.ac.uk/publicationDetails/id/6367. Marmot BJ, Blundell R, Lessof C, Nazroo J. Health, wealth and lifestyles of the older population in England: ELSA 2002; 2003. Tsimpida D, Kontopantelis E, Ashcroft DM, Panagioti M. Comparison of Self-reported Measures of Hearing With an Objective Audiometric Measure in Adults in the English Longitudinal Study of Ageing. Jama Netw Open. 2020;3:e2015009. Office for National Statistics. "Mid Year Population Estimates 2019." https://www.ons.gov.uk/peoplepopulationandcommunity/populationandmigration/populationestimates/bulletins/annualmidyearpopulationestimates/latest. 
Noble M, Wright G, Dibben C, Smith GA, McLennan D, Anttila C, Barnes H, Mokhtar C, Noble S, Avenell D, Gardner J. The English indices of deprivation 2004 (revised). Report to the Office of the Deputy Prime Minister. London: Neighbourhood Renewal Unit. 2004. Noble M, Wilkinson ME, Barnes MH. The English indices of deprivation 2007; 2008. McLennan D, Barnes H, Noble M, Davies J, Garatt E, Dibben C. The English Indices of Deprivation 2010: Technical Report. Department for Communities and Local Government. 2011. Smith T, Noble M, Noble S, Wright G, McLennan D, Plunkett E. The English indices of deprivation 2015. London: Department for Communities and Local Government; 2015. Benova L, Grundy E, Ploubidis GB. Socioeconomic position and health-seeking behavior for hearing loss among older adults in England. J Gerontol B Psychol Sci Soc Sci. 2015;70:443–52. Department of Health. UK Chief Medical Officers' Low Risk Drinking Guidelines. 2016; August:11. https://www.gov.uk/government/uploads/system/uploads/attachment_data/file/545937/UK_CMOs__report.pdf. Williams R. Using the margins command to estimate and interpret adjusted predictions and marginal effects. Stata J. 2012;12:308–31. StataCorp LP. Stata statistical software: release 14 [computer program]. StataCorp LP. 2015. Esri R. ArcGIS desktop: release 10. CA: Environ Syst Res Institute; 2011. Kim TK. Understanding one-way ANOVA using conceptual figures. Korean J Anesthesiol. 2017;70:22. Olusanya BO, Neumann KJ, Saunders JE. The global burden of disabling hearing impairment: a call to action. Bull World Health Organ. 2014;92:367–73. Akeroyd MA, Browning GG, Davis AC, Haggard MP. Hearing in adults: a digital reprint of the Main report from the MRC National Study of hearing. Trends Hear. 2019;23:2331216519887614. International Organization for Standardization. Acoustics—Statistical distribution of hearing thresholds related to age and gender (ISO 7029:2017). 2017. 
https://www.iso.org/standard/42916.html. Marmot M. Health equity in England: the Marmot review 10 years on. BMJ. 2020;368. Hoffman HJ, Dobie RA, Losonczy KG, Themann CL, Flamme GA. Declining prevalence of hearing loss in US adults aged 20 to 69 years. JAMA Otolaryngol Neck Surg. 2017;143:274. https://doi.org/10.1001/jamaoto.2016.3527. Buchan IE, Kontopantelis E, Sperrin M, Chandola T, Doran T. North-south disparities in English mortality 1965–2015: longitudinal population study. J Epidemiol Community Health. 2017;71:928–36. Doran T, Drever F, Whitehead M. Is there a north-south divide in social class inequalities in health in Great Britain? Cross sectional study using data from the 2001 census. J Epidemiol Community Health. 2004;58:869. Lloyd CD. Spatial scale and small area population statistics for England and Wales. Int J Geogr Inf Sci. 2016;30:1187–206. https://doi.org/10.1080/13658816.2015.1111377. National Statistics. Adult drinking habits in Great Britain: 2017. Off Natl Stat; 2018. May:1–16. https://www.ons.gov.uk/peoplepopulationandcommunity/healthandsocialcare/drugusealcoholandsmoking/bulletins/opinionsandlifestylesurveyadultdrinkinghabitsingreatbritain/2005to2016. Vos T, Abajobir AA, Abate KH, Abbafati C, Abbas KM, Abd-Allah F, et al. Global, regional, and national incidence, prevalence, and years lived with disability for 328 diseases and injuries for 195 countries, 1990–2016: a systematic analysis for the global burden of disease study 2016. Lancet. 2017;390:1211–59. NHS England. Hearing loss data tool. 2019. https://www.england.nhs.uk/publication/joint-strategic-needs-assessment-toolkit/2016. Kontopantelis E, Mamas MA, van Marwijk H, Ryan AM, Buchan IE, Ashcroft DM, et al. Geographical epidemiology of health and overall deprivation in England, its changes and persistence from 2004 to 2015: a longitudinal spatial population study. J Epidemiol Community Health. 2018;72:140–7. The Audiology Community. 
An open letter from the Audiology community to North Staffordshire CCG. 2014. https://www.hsj.co.uk/download?ac=1288524. The preliminary findings of this work were presented at The British Society of Audiology (BSA) Annual e-Conference 2020. This research was supported by the NIHR Manchester Biomedical Research Centre (personal award reference to DT: NIHR-INF-0551). The views expressed are those of the authors and not necessarily those of the BRC, the NIHR or the Department of Health. The NIHR Manchester Biomedical Research Centre had no role in the design and conduct of the study; collection, management, analysis, and interpretation of the data; preparation, review, or approval of the manuscript; and decision to submit the manuscript for publication. Centre for Primary Care and Health Services Research, Institute for Health Policy and Organisation (IHPO), School of Health Sciences, Faculty of Biology, Medicine and Health, 5th floor Williamson Building, The University of Manchester, Oxford Road, Manchester, M139PL, UK Dialechti Tsimpida Institute for Health Policy and Organisation (IHPO), School of Health Sciences, Faculty of Biology, Medicine and Health, The University of Manchester, Manchester, UK Evangelos Kontopantelis NIHR Greater Manchester Patient Safety Translational Research Centre, School of Health Sciences, Faculty of Biology, Medicine and Health, The University of Manchester, Manchester, UK Darren M. Ashcroft & Maria Panagioti DT was responsible for the conceptualisation, and all authors were responsible for developing the design of the study. DT was responsible for conducting the analyses and mapping, interpreting the results and drafting the manuscript. DT, EK, DMA and MP critically revised the manuscript. All authors have read and approved the final manuscript. Correspondence to Dialechti Tsimpida. 
Ethical approval for all the ELSA waves was granted from the National Research and Ethics Committee (MREC/01/2/91), and written informed consent was obtained from all participants. Details of the ELSA study design, sample and data collection are available at the ELSA's project website [https://www.elsa-project.ac.uk/]. Additional file 1: Table 1 and Table 2. One-Way ANOVA results of means of age at Regions of England in eight Waves of the English Longitudinal Study of Ageing (ELSA) and Predicted probabilities and 95% Confidence Intervals of hearing loss prevalence at Regions of England in eight Waves of the English Longitudinal Study of Ageing (ELSA). Tsimpida, D., Kontopantelis, E., Ashcroft, D.M. et al. Regional patterns and trends of hearing loss in England: evidence from the English longitudinal study of ageing (ELSA) and implications for health policy. BMC Geriatr 20, 536 (2020). https://doi.org/10.1186/s12877-020-01945-6 ELSA, inequalities Social epidemiology Health geography
Maoberry (Antidesma bunius) ameliorates oxidative stress and inflammation in cardiac tissues of rats fed a high-fat diet Arunwan Udomkasemsab1, Chattraya Ngamlerst2, Poom Adisakwattana3, Amornrat Aroonnual1, Rungsunn Tungtrongchitr1 and Pattaneeya Prangthip1 BMC Complementary and Alternative Medicine 2018 18:344 Received: 9 August 2018 Background Chronic consumption of fat-rich diets is associated with an increased risk of cardiovascular diseases (CVD). Preventing or reducing the progression of cardiac tissue deterioration could be beneficial in CVD. This study aimed to examine the effects of supplementation with maoberry (Antidesma bunius), an antioxidant-rich tropical fruit, on oxidative stress and inflammation in cardiac tissues of rats fed a high-fat diet (HFD). The male rats orally received HFD with maoberry extract doses of 0.38, 0.76 or 1.52 g/kg or simvastatin (10 mg/kg) for 12 weeks. At the end of the experimental period, the rats were fasted and euthanized, and their hearts were harvested. Maoberry extract significantly reduced oxidative stress (malondialdehyde levels) and enhanced antioxidant capacity (ferric-reducing activities) in cardiac tissues of the rats. Maoberry extract also remarkably ameliorated the expression of genes involved in inflammation, such as tumor necrosis factor alpha (TNF-α), interleukin-6 (IL-6), vascular cell adhesion molecule-1 (VCAM-1) and monocyte chemoattractant protein-1 (MCP-1), as well as endothelial nitric oxide synthase (eNOS). Our findings suggest that maoberry extract has remarkable effects in preventing the progression of cardiac tissue deterioration, at least through lowering oxidative stress and inflammation. Cardiac tissue Maoberry Cardiovascular diseases (CVD) leading to heart failure are the main cause of mortality worldwide. Recent studies have indicated that a systemic inflammatory process can lead to the malfunctioning of the cardiac endothelium. 
Various mechanisms such as oxidative stress and inflammation are involved in cardiac pathogenesis. Therefore, the prevention or reduction of cardiac tissue deterioration to prevent CVD progression is of considerable interest [1, 2]. Hypercholesterolemia is considered to be the hallmark of early CVD and is usually observed before vascular lesions appear. Previous studies have reported increased rates of reactive oxygen species (ROS) production in patients with hyperlipidaemia [3]. This may be attributed to fat-rich diets that increase the expression of nicotinamide adenine dinucleotide phosphate (NADPH) oxidase genes [4]; NADPH oxidase is the key enzyme responsible for ROS production [5] and contributes to increased ROS formation in cells [6]. Overproduced ROS can react with nitric oxide (NO) to form peroxynitrite (ONOO−), a reactive short-lived peroxide, resulting in the inactivation of endothelial NO synthase (eNOS) [7], which lowers NO production and impairs vasodilatation, both of which can cause endothelial dysfunction [8–10]. Furthermore, oxidative stress contributes to CVD pathogenesis through inflammatory reactions. The lipid oxidation process originates from uncontrolled ROS overproduction, causing necrotic cell death, a major driver of inflammation [11]. In response to inflammation, monocytes and macrophages secrete pro-inflammatory cytokines such as the tumor necrosis factor alpha (TNF-α) and interleukin-6 (IL-6), stimulate the vasculatures to produce inflammatory mediators such as vascular cell adhesion molecule-1 (VCAM-1) and release chemokines such as monocyte chemoattractant protein-1 (MCP-1). All of these regulate the migration and infiltration of monocytes/macrophages into vascular inflammatory sites [2, 12]. The abundance of ROS causes low density lipoprotein (LDL) oxidation, activating inflammatory cytokines, mediators and chemokines from infiltrating and resident macrophages. 
Both ROS and inflammation play a critical role in CVD occurrence and development [13]. Previous studies examined the beneficial effects of polyphenols, present in natural extracts of fruits, vegetables, soy, cocoa, tea and wine, and reported that these compounds can have biological effects on hearts [10, 14, 15]. The consumption of polyphenols as supplements appears to have various health benefits. Polyphenols present in red grape juice and red wine can markedly reduce NADPH oxidase activity and gene expression in human peripheral blood neutrophils and endothelial cells [15]. Patients with diabetes who daily consumed pomegranate as a polyphenol supplement for 4 weeks showed a reduction in free radical-induced lipid peroxidation, notably through radical scavenger receptor activities [14]. Moreover, polyphenol supplementation has been associated with improvements in the regulation of pro-inflammatory molecules such as TNF-α, IL-6, VCAM-1 and MCP-1 [16–18]. Maoberry (Antidesma bunius) is a wild plant naturally growing throughout north-eastern Thailand. Maoberry fruits are very popular and are used in commercial health products in Thailand. The fruits are considered to possess the ability to cure several ailments such as parched tongue, lack of appetite, indigestion, high blood pressure and diabetes [19, 20], possibly because of high polyphenol levels and antioxidant activity. Antidesma spp. contains several compounds such as phenolics, flavonoids, ascorbic acid and total proanthocyanidins [20–22]. Antidesma spp. also possesses strong antioxidant activity against oxidative damage, as shown with a variety of assays [21, 22]. Previously, we reported the benefits of maoberry extract on atherogenic risk factors, including lipid profiles, inflammation and oxidative stress, in blood [23]. To further understand the beneficial effects of maoberry extract on cardiovascular disease, we examined the effects of the maoberry extract on cardiac tissues in a hypercholesterolaemic animal model. 
Oxidative stress and the specific molecules involved in endothelial damage were investigated. The results may provide additional data and encourage the application of natural extracts as an alternative for preventing damage in cardiac tissue. Maoberry preparation Dark-purple maoberry cultivars (Antidesma bunius spp.) were purchased from a local orchard area of Khok Si Suphan District (Geocode 4715), Sakon Nakhon province, North-eastern Thailand, during July and August 2016. The identity of the maoberry cultivars was confirmed by Assistant Prof. Dr. Pornprapha Chunthanom, Faculty of Natural Resources, Rajamangala University of Technology Isan, Sakon Nakhon campus, Sakon Nakhon, Thailand. After identification, the fruits were washed, homogenized and concentrated via rotary evaporation at 45 °C to a 40% concentration (v/v). The extracted juice was then aliquoted and preserved in opaque tubes at − 20 °C until used. A voucher specimen of maoberry has been deposited in the Faculty of Natural Resources, Rajamangala University of Technology Isan, Sakon Nakhon campus. Animals and experimental settings Five-week-old male Sprague Dawley rats, weighing 160–180 g, were obtained from the National Laboratory Animal Center at the Salaya Campus, Mahidol University. Seventy-eight rats were housed according to the rules and regulations of the Animal Care Ethical Committee of the Laboratory Animal Science Center, Faculty of Tropical Medicine, Mahidol University (Approval no. FTM-ACUC 011/2018). All procedures were conducted according to the Guide for the Care and Use of Laboratory Animals published by the US National Institutes of Health. Two rats were housed per plastic cage in a controlled room (temperature, 25 °C ± 2 °C; relative humidity, 55% ± 10%; a 12-h light–dark cycle). After acclimatization for 1 week with free access to standard diet and drinking water, the rats were randomly separated into two groups with the same average weight. 
One group (n = 12; ND group) was fed a standard diet (3.90 kcal/g) containing 20.3% protein, 5% fat and 66% carbohydrate for 16 weeks, whereas the other group (n = 60; HFD group) was fed a high-fat diet (HFD; 5.40 kcal/g) containing 20.2% protein, 58.3% fat and 21.5% carbohydrate. Lard was the main fat component in the HFD. After 4 weeks of feeding, the HFD rats were randomly divided into the following five subgroups (12 rats each) on the basis of the types of treatments they received (no differences in mean weight): HFD with distilled water (1.52 ml/kg, HF subgroup); HFD with maoberry extract (0.38 g/kg, ML subgroup; 0.76 g/kg, MM subgroup; or 1.52 g/kg, MH subgroup); and HFD with 10 mg/kg simvastatin (STAT subgroup). Treatments were administered by oral gavage every other day for 12 weeks. At the end of the experimental period, the rats were fasted for 12 h, following which they were euthanized by CO2 inhalation. The hearts were harvested and weighed. Cross-sections obtained from the middle of the heart were fixed on filter paper using 10% formalin buffer for at least 2 days. The remaining parts were snap frozen in liquid nitrogen and stored at − 80 °C. Each frozen heart part was exsanguinated by perfusion with cold normal saline (0.9%) and was placed in 100 mg/ml phosphate-buffered saline containing heparin. After cardiac tissues were completely lysed, they were homogenized on ice using a sonicator (Sonic Inc., Stratford, CT, USA) and were centrifuged at 10,000×g for 5 min at 4 °C. The supernatants were collected and stored at − 80 °C for further analysis. Determination of protein in cardiac tissues The Bradford method (BioRad, Hercules, CA, USA) is rapid and accurate for estimating protein concentration. Following the manufacturer's protocol with slight modifications, the dye reagent was prepared by diluting one part of dye reagent concentrate with four parts of distilled, de-ionized water. Bovine serum albumin (BSA) was used as the standard. The linear range of the assay for BSA is 0–2 mg/ml. 
Next, 160 μl of each standard and sample was pipetted into a 96-well plate, and 40 μl of the diluted dye reagent was added to each well. The sample and reagent were thoroughly mixed using a microplate mixer. The final solution was incubated at room temperature for at least 5 min. Absorbance was measured at 450 and 595 nm. The equation Y = 3.663X + 0.6762 (r2 = 0.9919) was used, and the results were recorded in g/L. Determination of total phenolic contents (TPC) Total phenolic contents were determined using the Folin–Ciocalteu colorimetric method according to Baba and Malik (2015) [24] with slight modifications. In brief, 10 μl of cardiac tissue sample was blended with 150 μl of distilled water, mixed with 25 μl of Folin–Ciocalteu reagent and incubated for 3 min. Subsequently, 100 μl of 20% (w/v) sodium carbonate was added to each sample. The absorbance was measured at 650 nm after 1-h incubation in the dark at room temperature. A calibration curve was generated using gallic acid (10–100 μg/ml) solutions and the equation Y = 0.0035X + 0.046 (r2 = 0.9999). The results were recorded as milligram gallic acid equivalence (mg GAE)/g protein. The solutions were assayed in duplicate. Determination of total flavonoid contents (TFC) Total flavonoid contents were determined using the aluminium chloride colorimetric method [24] with some modifications. In brief, 1.5 μl of cardiac tissue sample was combined with 30 μl of methanol, thoroughly mixed with 120 μl of distilled water and treated with 9 μl of 5% NaNO2 solution. After 5-min incubation, 9 μl of 10% AlCl3 solution was added, and the mixtures were allowed to stand for 6 min. Next, 60 μl of 1 mol/L NaOH solution was added, and the final mixture was incubated for 15 min. Absorbance was measured at 410 nm. The calibration curve was generated using solutions of quercetin (100–1000 μg/ml) and the equation Y = 8 × 10−5X − 0.0025 (r2 = 0.9961). The results were presented in milligram quercetin equivalence (mg QE)/g protein.
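All of the assays above share the same arithmetic: a linear calibration curve Y = mX + b is inverted to convert a measured absorbance into an analyte concentration, which is then normalized by the tissue protein content. A minimal Python sketch of that calculation follows; the curve coefficients are those quoted in the text, while the absorbance readings are hypothetical illustrative values.

```python
def curve_concentration(absorbance, slope, intercept):
    """Invert a linear calibration curve Y = slope*X + intercept,
    solving for X (the analyte concentration) given Y (absorbance)."""
    return (absorbance - intercept) / slope

def per_g_protein(concentration, protein_g_per_l):
    """Express an analyte concentration relative to tissue protein,
    as done for the TPC, TFC, TBARS and FRAP results."""
    return concentration / protein_g_per_l

# Protein via the Bradford curve quoted above: Y = 3.663X + 0.6762 (g/L).
protein = curve_concentration(1.409, 3.663, 0.6762)   # hypothetical A595 reading
# Gallic acid via the TPC curve: Y = 0.0035X + 0.046 (ug/ml).
gallic = curve_concentration(0.396, 0.0035, 0.046)    # hypothetical A650 reading
# Normalized result, in gallic acid equivalents per unit protein.
tpc = per_g_protein(gallic, protein)
```

The same two functions apply to the TFC, MDA and FRAP curves by substituting the respective slope and intercept.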
The solutions were assayed in duplicate. Oxidative stress in the heart The thiobarbituric acid reactive substance (TBARS) assay was used to determine the level of malondialdehyde (MDA) as a marker of ROS-driven lipid peroxidation. In brief, 100 μl of cardiac supernatant was thoroughly mixed with 350 μl of stock solution that contained 100 μl of 10% sodium dodecyl sulphate lysis solution and 150 μl of thiobarbituric acid (TBA) reagent (25 ml of 10% acetic acid per 130 mg TBA) and was heated for 60 min at 95 °C. The absorbance of the supernatant was measured at 532 nm. MDA (#10009202, Cayman Chemical Company, Ann Arbor, MI, USA) was used as the standard. The standard curve (0–0.06 μmol/L of MDA) was derived using the following equation: Y = 0.0162X + 0.0097 (r2 = 0.9973). The results were presented as nmol/g protein. Ferric reducing antioxidant power (FRAP) assay Ferric-reducing activities in the heart were determined using a modified FRAP assay [25]. In brief, the FRAP reagent was prepared in the dark using 300 mM sodium acetate buffer (pH 3.6), 10 mM 2,4,6-tri(2-pyridyl)-s-triazine solution in 40 mM HCl and 20 mM FeCl3 solution in a ratio of 10:1:1. The fresh working solution was warmed at 37 °C before use. Ten microliters of cardiac tissue supernatant was allowed to react with 300 μl of the FRAP reagent. After incubation at 37 °C for 4 min, the absorbance of the reaction mixture was measured at 593 nm. A calibration curve was generated using standard solutions of trolox (0–1000 μmol/L) and the equation Y = 0.0011X − 0.0057 (r2 = 0.9917). The results were presented as μmol trolox equivalence (TE)/g protein. Histopathological analysis of cardiac tissues Formalin-fixed cross-sections obtained from the middle of the heart were dehydrated and embedded in a paraffin–polyisobutylene mixture (Leica Biosystems, Harbourfront Centre, Singapore); 4-μm-thick sections were cut and prepared for haematoxylin and eosin (H&E) staining.
The cardiac morphology was assessed using an Olympus BX-53 light microscope with a camera attachment (Tokyo, Japan) and the CellSens computer-based image analysis software (Olympus, Bangkok, Thailand). Real-time polymerase chain reaction The molecular mechanism involved in the atherogenic protection of maoberry extract was determined by analyzing the mRNA levels of endothelial nitric oxide synthase (eNOS), tumor necrosis factor-alpha (TNF-α), interleukin-6 (IL-6), scavenger receptor CD36, vascular cell adhesion molecule-1 (VCAM-1) and monocyte chemoattractant protein-1 (MCP-1). Total RNA was isolated from frozen cardiac specimens using Trizol reagent (Invitrogen, Carlsbad, CA, USA, Cat. No. 15596–026) following the manufacturer's instructions. A NanoDrop spectrophotometer (Thermo Fisher Scientific, Wilmington, DE, USA) was used to determine the concentration and purity of the extracted RNA. Total RNA (2 μg) was reverse transcribed into cDNA using oligo(dT)18 primers (Thermo Fisher Scientific Inc., MA, USA) and reverse transcriptase enzyme (Thermo Fisher Scientific Inc., MA, USA). Polymerase chain reaction (PCR) using cDNA templates was performed in 10-μl reaction mixtures that contained 0.3 μl of each specific primer (10 μM), 5 μl of LightCycler® 480 SYBR Green I Master (Cat. No. 04707516001) and 3.4 μl of RNase-free water. The reactions were run using the LightCycler® 480 Real-Time PCR detection system (Roche, Indianapolis, IN, USA) under the following conditions: pre-incubation at 95 °C for 5 min, followed by 45 amplification cycles of 95 °C for 10 s, 58 °C for 10 s and 72 °C for 10 s. All primers (Table 1) were synthesized by Pacific Science Co., Ltd. (Bangkok, Thailand). The housekeeping gene glyceraldehyde-3-phosphate dehydrogenase (GAPDH) was used as the internal control to normalize for differences in quantity and quality between the RNA samples.
The fold differences in the expression of different mRNAs were calculated relative to the ND group using the 2−ΔΔCt method as follows [26]: $$ \Delta \Delta \mathrm{Ct}=\Delta \mathrm{Ct}\left(\mathrm{treatment}\ \mathrm{group}\right)-\Delta \mathrm{Ct}\left(\mathrm{normal}\ \mathrm{group}\right) $$ $$ \Delta \mathrm{Ct}=\mathrm{Ct}\left(\mathrm{target}\ \mathrm{gene}\right)-\mathrm{Ct}\left(\mathrm{reference}\ \mathrm{gene}\right) $$

Table 1. The primer details used for real-time PCR analysis.

Gene | Forward sequence (5′ to 3′) | Reverse sequence (5′ to 3′) | GenBank accession number
– | GGA TTC TGG CAA GAC CGA TTA C | GGT GAG GAC TTG TCC AAA CAC T | NM_021838.2
Vcam-1 | GGA GCC TGT CAG TTT TGA GAA TG | TTG GGG AAA GAG TAG ATG TCC AC | –
TNF-α | ACT GAA CTT CGG GGT GAT TG | GCT TGG TGG TTT GCT ACG AC | –
– | ATATGTTCTCAGGGAGATCTTGGAA | GTGCATCATCGCTGTTCATACA | –
– | AGGAAGTGGCAAAGAATAGCAG | ACAGACAGTGAAGGCTCAAAGA | –
Mcp-1 | GGC CTG TTG TTC ACA GTT GCT | TCT CAC TTG GTT CTG GTC CAG T | –
– | GCA AGT TCA ACG GCA CAG | GCC AGT AGA CTC CAC GAC AT | –

The statistical analysis software SPSS (version 18.0; SPSS Inc., IBM, Chicago, IL, USA) was used to perform one-way analysis of variance and post-hoc tests for multiple comparisons. A p value of < 0.05 was considered significant. All values were expressed as mean ± standard deviation of two determinations. GraphPad Prism version 5.0 (La Jolla, CA, USA) was used for curve fitting. Total phenolic and flavonoid contents in cardiac tissues No mortality, illness or alterations in appearance or behavior were observed in any of the rats, either during or after the 12 weeks of maoberry extract administration.
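The relative-expression arithmetic defined by the two equations above can be sketched in Python as follows; the Ct values are hypothetical, and GAPDH serves as the reference gene as stated in the text.

```python
def delta_ct(ct_target, ct_reference):
    # ΔCt = Ct(target gene) − Ct(reference gene)
    return ct_target - ct_reference

def fold_change(dct_treatment, dct_normal):
    # ΔΔCt = ΔCt(treatment group) − ΔCt(normal group);
    # relative expression = 2^(−ΔΔCt)
    return 2 ** -(dct_treatment - dct_normal)

# Hypothetical Ct values for a target gene vs GAPDH
# in an HFD (treatment) rat and an ND (normal) rat.
dct_hf = delta_ct(24.0, 18.0)   # ΔCt = 6.0 in the treatment group
dct_nd = delta_ct(26.0, 18.0)   # ΔCt = 8.0 in the normal group
print(fold_change(dct_hf, dct_nd))  # 2^−(6−8) = 4.0-fold upregulation
```

A ΔΔCt of zero gives a fold change of 1 (no change relative to the ND group); positive ΔΔCt values correspond to downregulation and negative values to upregulation.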
A significant increase in both total phenols (p < 0.01 in the ML and MM subgroups and p < 0.001 in the MH subgroup) and total flavonoids (p < 0.01 in the MM subgroup and p < 0.001 in the ML and MH subgroups) was observed in cardiac tissues of rats fed HFD with different concentrations of maoberry extract compared with rats fed HFD with distilled water (Fig. 1a and b). Total phenolic content (a), total flavonoid content (b), malondialdehyde level (c) and ferric reducing antioxidant power (FRAP) level (d) in cardiac tissue. Data are expressed as mean ± SD, n = 12. ND: standard diet; HF: high-fat diet; ML, MM, MH: high-fat diet with Mao Luang crude extract at 0.38, 0.76 or 1.52 g/kg, respectively; STAT: high-fat diet with simvastatin 10 mg/kg. *p < 0.05, **p < 0.01, ***p < 0.001 represent significant differences compared with the ND group. #p < 0.05, ##p < 0.01, ###p < 0.001 represent significant differences compared with the HF group. Oxidative stress in cardiac tissues Oxidative stress was evaluated on the basis of the production of MDA (a biomarker of lipid peroxidation) using the TBARS assay. After 12 weeks of treatment, MDA levels were significantly higher in the HF group than in the ND group (p < 0.001), confirming the establishment of oxidative stress in myocytes associated with hypercholesterolemia. Furthermore, MDA levels were significantly lower in all rats fed HFD with maoberry extract (p < 0.05 in the ML and MH subgroups and p < 0.01 in the MM subgroup) or simvastatin (p < 0.01) than in rats fed HFD with distilled water (Fig. 1c). Antioxidant capacity in cardiac tissue The FRAP level markedly decreased in the HF group compared with that in the ND group (p < 0.01).
In contrast, the FRAP levels significantly increased (p < 0.01 in the MM subgroup and p < 0.001 in the ML and MH subgroups) in rats fed HFD with maoberry extract or simvastatin (p < 0.05) compared with rats fed HFD with distilled water (Fig. 1d). Histopathological changes in cardiac tissue Microscopic images of H&E-stained cardiac tissues of rats in the ND group (Figs. 2a and 3a) revealed benign, regularly arranged cardiac muscle with no abnormal pathological findings. In contrast, histological alterations were observed in cardiac tissues of rats in the HF group. Pathological alterations such as myocyte hypertrophy, an increase in fat droplets and accumulation of mononuclear cells associated with inflammation were evident in the HF group compared with the ND group (Figs. 2b and 3b). Light microscopic images of H&E-stained transverse sections of cardiac tissues of rats after 12 weeks of maoberry treatment. For comparison, the images show different fat droplets in the different treatment groups (magnification × 100). a ND: rats fed a standard diet, b HF: rats fed a high-fat diet with distilled water, c ML, d MM and e MH: rats fed a high-fat diet with maoberry extract at 0.38, 0.76 and 1.52 g/kg, respectively, f STAT: rats fed a high-fat diet with 10 mg/kg simvastatin. Arrows represent fat droplets. Panels enlarge the area of pathological alterations (magnification × 400). Pathological changes in transverse sections of cardiac tissues of rats after 12 weeks of treatment, showing amounts of mononuclear cell infiltration in the different treatments. a ND: a standard diet, b HF: a high-fat diet with distilled water, c ML, d MM and e MH: a high-fat diet with maoberry extract at 0.38, 0.76 and 1.52 g/kg, respectively, f STAT: a high-fat diet with 10 mg/kg simvastatin. H&E staining (magnification × 100). Arrows represent mononuclear cell infiltration.
Panels enlarge the area of pathological alterations (magnification × 400). After 12 weeks of treatment, mild myocytic hypertrophy and a decrease in the number of fat droplets (Fig. 2c-e) and mononuclear cells (Fig. 3c-e) were clearly observed in maoberry-treated rats, particularly those of the MH group, compared with the HF group. Real-time PCR analysis of cardiac tissues To understand the effects of maoberry extract on HFD-induced inflammation and vascular dysfunction, mRNA levels were examined by real-time PCR analysis. The mRNA levels of inflammatory cytokines such as TNF-α and IL-6 were significantly elevated in the HF group (p < 0.01 and p < 0.001, respectively), whereas the levels were markedly decreased in rats fed HFD with different concentrations of maoberry extract or simvastatin (Fig. 4a and b). TNF-α, IL-6, VCAM-1, MCP-1, CD36 and eNOS mRNA levels from cardiac tissues of each group by real-time PCR detection. The expression of TNF-α, IL-6, VCAM-1, MCP-1, CD36 and eNOS mRNA in the HF group was significantly higher than in the ND group. The expression of IL-6, VCAM-1, MCP-1 and eNOS mRNA in maoberry-treated groups was significantly lower than in the HF group. Each bar represents the relative mean ± SD, n = 12 each. *p < 0.05, **p < 0.01 and ***p < 0.001 represent significant differences compared with the ND group. #p < 0.05, ##p < 0.01 and ###p < 0.001 represent significant differences compared with the HF group. ND: a standard diet; HF: a high-fat diet; ML, MM, MH: a high-fat diet with maoberry extract at 0.38, 0.76 or 1.52 g/kg, respectively; STAT: a high-fat diet with simvastatin 10 mg/kg. Moreover, mRNA levels of VCAM-1, MCP-1, CD36 and eNOS were significantly higher in cardiac tissues of the HF group (p < 0.001, except eNOS, which was p < 0.01) than in those of the ND group (Fig. 4c-f).
Supplementing with different concentrations of maoberry extract or simvastatin significantly reduced the upregulation of VCAM-1, MCP-1 and eNOS mRNA levels compared with supplementing HFD with distilled water. However, significant downregulation of CD36 gene expression was observed only in the STAT group. According to our previous studies [23], maoberry extract at doses of 0.38, 0.76 or 1.52 g/kg for 12 weeks may have substantial beneficial effects on risk factors of cardiovascular disease such as atherogenic indices, inflammation and oxidative stress in peripheral blood and spleen histopathology. The maoberry doses were calculated according to the American Heart Association recommendation to consume four servings of fruit per day for long-term benefits to health and heart [27]. Four portions of fresh maoberries correspond to 0.38 g per kg body weight; two and four times this dose are 0.76 and 1.52 g/kg body weight, respectively. Previously, we examined the nutritive values of maoberry extract and its antioxidant activities based on oxygen radical absorbance capacity (ORAC) and ferric reducing antioxidant power (FRAP) assays [23]. Gallic acid, epicatechin, catechin and cyanidin-3-O-glucoside were analysed by HPLC and reported as the major polyphenolic components in fourteen maoberry cultivars from northeast Thailand [28]. Catechin, procyanidin B1 and procyanidin B2, analysed by HPLC, are likewise the major flavonoid compounds in fifteen cultivars from northeast Thailand [29]. These polyphenols and flavonoids are well known antioxidants and may contribute to the antioxidant activity of maoberry extract. We previously reported that maoberry extract could reduce the number of immune cells associated with inflammation and the platelet population connected to vascular blockage [23]. There are many findings on the major active compounds from maoberry (Antidesma bunius) in relation to oxidative stress and inflammation.
Cyanidin-3-glucoside has been shown to inhibit free radicals and the expression of important inflammatory mediators, including nuclear factor-kappa B (NF-κB), in human vascular endothelial cells in a dose-dependent manner [30]. Catechin has been shown to significantly reduce lipid peroxidation (MDA) in drug-induced cardiotoxicity in rats; significant decreases in NF-κB, tumor necrosis factor-alpha and inducible nitric oxide synthase were also present in rats treated with catechin [31]. Gallic acid, available in maoberry, was reported to be one of the most active dietary antioxidants in humans [32]. A study from the Medical University of Vienna, Austria, showed that administration of gallic acid in amounts corresponding to daily consumption could reduce oxidative DNA damage, oxidized LDL and C-reactive protein in the plasma of patients with type 2 diabetes mellitus [33]. The current study further demonstrates that maoberry extract contributes to preventing deterioration in cardiac tissue by improving the oxidative stress status and downregulating the expression of inflammatory cytokines and chemokines. Oxidative stress, which occurs because of an imbalance between excess free radical formation and low antioxidant defense, causes nonspecific damage to the structure and functions of cells [34]. In the present study, high-fat diet (HFD) consumption appeared to increase lipid peroxidation in cardiac tissues, demonstrated by increased MDA levels in the tissues. MDA is a by-product of lipid peroxidation and is accepted as a vital marker of oxidative stress [35]. In previous studies in which animals were fed HFD, supplementation with lard induced oxidative stress, which originates from the upregulated expression of NADPH oxidase (up to threefold) [4, 8, 34]. After maoberry extract administration, numerous polyphenols in the extract pass through the stomach and are hydrolyzed and absorbed in the small intestine. Some polyphenols enter the blood circulation and reach the organs [36].
In this study, although polyphenol levels in tissues depend on the amount of uptake and secretion by specific tissues, significantly higher levels of total flavonoids and total phenols were observed in the cardiac tissue of rats consuming maoberry extract. This indicates that cardiac tissue can take up polyphenols from maoberry extract. Furthermore, the results of the FRAP assay demonstrated that maoberry extract reduces ferric iron (Fe3+) to ferrous iron (Fe2+) in cardiac tissue. Altogether, these findings confirm that cardiac tissues can absorb polyphenolic substances, which are responsible for radical scavenging and reducing activities [37]. Similarly, polyphenols from other sources such as pomegranate [14], tea, cacao and red grape juice [15] are potential antioxidants that exert their effects by radical scavenging, which inhibits lipid peroxidation, reduces Fe3+ and downregulates NADPH oxidase activity [12, 15, 38]. Moreover, oxidative stress-induced LDL oxidation increases the expression levels of scavenger receptors and pro-inflammatory cytokines, mainly secreted by macrophages, and upregulates the expression of inflammatory adhesion molecules on endothelial cells [12, 34, 39]. In the current study, along with histopathological alterations, significantly elevated mRNA levels of TNF-α, IL-6, VCAM-1, CD36 and MCP-1 were observed in cardiac tissues of the HF group. Thus, it appears that the progression of endothelial dysfunction is associated with inflammation in the endothelium [40]. Inflammation was reduced after 12 weeks of treatment with maoberry extract: a significant decrease in mRNA expression levels of pro-inflammatory cytokines and chemokines, such as TNF-α, IL-6 and MCP-1, and adhesion molecules, such as VCAM-1, was observed. These results are buttressed by the apparent reduction in the number of lymphocytic cells in histological cardiac tissues of rats fed HFD with maoberry extract.
In agreement with previous studies, the protective effects of polyphenols in maoberry extract may prevent endothelial dysfunction and inflammation via the downregulation of mRNA expression levels of TNF-α, IL-6, VCAM-1 and MCP-1 [18, 34, 39] by suppressing the activation of the transcription factor NF-κB in endothelial cells [41, 42]. In contrast, LDL oxidation enhances the expression of scavenger receptors such as CD36 in macrophages, which take up the excess modified lipid molecules, thus leading to the formation of foam cells, a key player in early inflammation-induced atherogenesis [43]. In the present study, we also found apparently high mRNA levels of CD36 in the HF group compared with those in the ND group, probably owing to the upregulation of structurally defined oxidized molecules, which serve as high-affinity ligands for CD36 [43]. Although no significant difference was noted between the Mao Luang-treated groups and the HF group, mRNA expression levels of CD36 demonstrated a tendency to decline. Therefore, the beneficial effects of maoberry extract may be related to reduced CD36 expression and function and suppressed oxLDL modification [39, 43]. Moreover, polyphenols in maoberry extract may suppress or downregulate the expression of other scavenger receptors that recognize oxidation-specific oxLDL, including SRA-1, SRA-2, MARCO, SR-B1, LOX-1 and PSOX [12]. Endothelial dysfunction is characterized by the impairment of endothelium-dependent relaxation owing to decreased vascular NO bioavailability caused by oxidative stress. eNOS is the major enzyme responsible for NO production. Lower eNOS levels, or the lack of a substrate or cofactor such as L-arginine or tetrahydrobiopterin (BH4), which is affected by ROS, can lead to uncoupled eNOS formation, resulting in decreased NO levels and increased ONOO− production. The overproduction of ONOO− can enhance BH4 oxidation, leading to BH4 deficiency and thus generating a vicious circle.
Furthermore, eNOS exposure to ONOO− leads to the uncoupling of eNOS and is a crucial mechanism that contributes to vascular dysfunction [40, 44]. In the current study, we observed higher mRNA levels of eNOS in the HF group than in the ND group. After 12 weeks of treatment with maoberry extract, a reduction in the upregulated mRNA levels of eNOS was observed, particularly in the ML (p < 0.05) and MM (p < 0.01) subgroups, compared with the HF group. This is in agreement with the results of a study on human umbilical vein endothelial cells, wherein genistein [45], a soy polyphenol, improved vascular reactivity by increasing eNOS expression; genistein acutely stimulates eNOS synthesis in vascular endothelial cells. In contrast, Lund et al. [46] revealed that soy isoflavone did not affect eNOS expression in hyperlipidemic rabbits. These findings suggest that eNOS can be regulated by genomic and nongenomic factors [45]. Generally, regulatory systems allow living cells to change biochemical processes or gene expression programs automatically in response to alterations in the intracellular and/or extracellular environment. Therefore, in this study, maoberry extract may have modulated eNOS expression toward the normal range. Furthermore, during inflammation, TNF family members alone can reduce the eNOS mRNA half-life from 48 h to 3 h [47]. Consequently, decreased mRNA levels of eNOS may indicate positive autoregulation related to inflammation, resulting in increased mRNA transcription and translation of eNOS [48], as found in the HF group. In contrast, maoberry extract may function as a free radical scavenger, allowing eNOS and other substrates or cofactors to take effective action in NO production; negative feedback may control eNOS transcription and translation, causing the decrease in eNOS mRNA expression [48].
Although the molecular mechanism remains unclear and complicated, the present study may be the first to indicate that maoberry extract can prevent cardiac tissue deterioration. The current study reports for the first time that oral administration of maoberry extract had significant beneficial effects in cardiac tissues by reducing oxidative stress and enhancing antioxidant activity, thereby preventing the progression of cardiac tissue deterioration involving inflammation. In accordance with the outcomes of this study, maoberry extract ameliorates oxidative stress and inflammation in cardiac tissues of rats fed a high-fat diet at least through: (1) acting as a radical scavenger and reducing Fe3+; (2) reducing MDA levels associated with lipid peroxidation inside tissues and enhancing cardiac antioxidative capacity; (3) counteracting the upregulation of inflammatory cytokines (TNF-α, IL-6, VCAM-1 and MCP-1) to reduce inflammation; and (4) downregulating eNOS mRNA expression associated with positive and negative autoregulation. Although the precise molecular mechanisms by which maoberry extract protects against cardiac tissue deterioration remain unclear, this study demonstrates that maoberry extract treatment may be a suitable alternative for preventing the progression of cardiac tissue damage.
BSA: Bovine serum albumin; eNOS: Endothelial NO synthase; FRAP: Ferric reducing antioxidant power; GAPDH: Glyceraldehyde-3-phosphate dehydrogenase; H&E: Haematoxylin and eosin; HF: High-fat diet with distilled water; HFD: High-fat diet; LDL: Low-density lipoprotein; MCP-1: Monocyte chemoattractant protein-1; MDA: Malondialdehyde; MH: High-fat diet with maoberry extract 1.52 g/kg; ML: High-fat diet with maoberry extract 0.38 g/kg; MM: High-fat diet with maoberry extract 0.76 g/kg; NADPH: Nicotinamide adenine dinucleotide phosphate; ND: Normal diet; ONOO−: Peroxynitrite; STAT: High-fat diet with simvastatin; TBA: Thiobarbituric acid; TBARS: Thiobarbituric acid reactive substance; TE: Trolox equivalence; TFC: Total flavonoid contents; VCAM-1: Vascular cell adhesion molecule-1. The authors thank the Faculty of Tropical Medicine for providing equipment and facilities. This research was supported by the Faculty of Tropical Medicine in the study design, collection, analysis and writing of the manuscript. The research was also supported by the Thailand Research Fund (TRF) under Grant No. MRG6180101 in the study design, analysis and interpretation of data. The datasets used and/or analysed during the current study are available from the corresponding author on reasonable request. AU contributed to the experimentation, analysis and drafting of the manuscript. CN contributed to the experimentation. PA, AA and RT contributed to designing the experimentation. PP takes the entire responsibility for the manuscript. All authors read and approved the final manuscript. PP did her PhD (Nutrition) in the Institute of Nutrition, Mahidol University. She is working as Assistant Professor at the Department of Tropical Nutrition and Food Science, Mahidol University, Bangkok, Thailand. This study made use of rats under the rules and regulations of the Animal Care Ethical Committee of the Laboratory Animal Science Center, Faculty of Tropical Medicine, Mahidol University (Approval no. FTM-ACUC 011/2018).
Department of Tropical Nutrition and Food Science, Faculty of Tropical Medicine, Mahidol University, 420/6 Ratchawithi Road, Ratchathewi, Bangkok, Thailand
Institute of Nutrition, Mahidol University, Nakhon Pathom, Thailand
Department of Helminthology, Faculty of Tropical Medicine, Mahidol University, Bangkok, Thailand

Ikonomidis I, Michalakeas CA, Parissis J, Paraskevaidis I, Ntai K, Papadakis I, et al. Inflammatory markers in coronary artery disease. BioFactors (Oxford, England). 2012;38:320–8.
Vilahur G, Padró T, Casaní L, Mendieta G, López JA, Streitenberger S, et al. Polyphenol-enriched diet prevents coronary endothelial dysfunction by activating the Akt/eNOS pathway. Revista Española de Cardiología (English Edition). 2015;68:216–25.
Lü J-M, Lin PH, Yao Q, Chen C. Chemical and molecular mechanisms of antioxidants: experimental approaches and model systems. J Cell Mol Med. 2010;14:840–60.
Mai A, Li J-M. NADPH oxidase activation and oxidative stress in high-fat diet-induced hypertension and metabolic disorders. Heart. 2014;100:A1.
Castaneda OA, Lee S-C, Ho C-T, Huang T-C. Macrophages in oxidative stress and models to evaluate the antioxidant function of dietary natural compounds. J Food Drug Anal. 2017;25:111–8.
Ho E, Karimi Galougahi K, Liu C-C, Bhindi R, Figtree GA. Biological markers of oxidative stress: applications to cardiovascular research and practice. Redox Biol. 2013;1:483–91.
Radi R. Peroxynitrite, a stealthy biological oxidant. J Biol Chem. 2013;288:26464–72.
Jiang F, Lim HK, Morris MJ, Prior L, Velkoska E, Wu X, et al. Systemic upregulation of NADPH oxidase in diet-induced obesity in rats. Redox Rep. 2011;16:223–9.
Kurin E, Fakhrudin N, Nagy M. eNOS promoter activation by red wine polyphenols: an interaction study. Acta Facultatis Pharmaceuticae Universitatis Comenianae. 2013:27.
Storniolo CE, Roselló-Catafau J, Pintó X, Mitjavila MT, Moreno JJ. Polyphenol fraction of extra virgin olive oil protects against endothelial dysfunction induced by high glucose and free fatty acids through modulation of nitric oxide and endothelin-1. Redox Biol. 2014;2:971–7.
Leiva E, Wehinger S, Guzmán L, Orrego R. Role of oxidized LDL in atherosclerosis. In: Kumar SA, editor. Hypercholesterolemia. InTech; 2015. p. 55–78.
Hansson GK, Hermansson A. The immune system in atherosclerosis. Nat Immunol. 2011;12:204–12.
Cai L, Wang Z, Ji A, Meyer JM, van der Westhuyzen DR. Scavenger receptor CD36 expression contributes to adipose tissue inflammation and cell death in diet-induced obesity. PLoS One. 2012;7:e36785.
Basu A, Newman ED, Bryant AL, Lyons TJ, Betts NM. Pomegranate polyphenols lower lipid peroxidation in adults with type 2 diabetes but have no effects in healthy volunteers: a pilot study. J Nutr Metab. 2013;2013:7.
Dávalos A, de la Peña G, Sánchez-Martín CC, Teresa Guerra M, Bartolomé B, Lasunción MA. Effects of red grape juice polyphenols in NADPH oxidase subunit expression in human neutrophils and mononuclear blood cells. Br J Nutr. 2009;102:1125–35.
Karlsen A, Paur I, Bøhn S, Sakhi A, Borge G, Serafini M, et al. Bilberry juice modulates plasma concentration of NF-κB related inflammatory markers in subjects at increased risk of CVD. Eur J Nutr. 2010;49:345–55.
Sayegh M, Miglio C, Ray S. Potential cardiovascular implications of sea buckthorn berry consumption in humans. Int J Food Sci Nutr. 2014;65:521–8.
Wang M, Jiang L, Monticone RE, Lakatta EG. Proinflammation: the key to arterial aging. Trends Endocrinol Metab. 2014;25:72–9.
Lim TK. Antidesma bunius. In: Edible medicinal and non-medicinal plants. New York: Springer; 2012. p. 220–4.
Samappito S, Butkhup L. An analysis on flavonoids, phenolics and organic acids contents in brewed red wines of both non-skin contact and skin contact fermentation techniques of Maoberry ripe fruits (Antidesma bunius) harvested from Phupan Valley in Northeast Thailand. Pak J Biol Sci. 2008;11:1654–61.
Poontawee W, Natakankitkul S, Wongmekiat O. Enhancing phenolic contents and antioxidant potentials of Antidesma thwaitesianum by supercritical carbon dioxide extraction. J Anal Methods Chem. 2015;2015:7.
Sripakdee T, Sriwicha A, Jansam N, Mahachai R, Chanthai S. Determination of total phenolics and ascorbic acid related to an antioxidant activity and thermal stability of the Mao fruit juice. Int Food Res J. 2015;22:618.
Udomkasemsab A, Ngamlerst C, Kwanbunjun K, Krasae T, Amnuaysookkasem K, Chunthanom P, Prangthip P. Maoberry (Antidesma bunius) improves glucose metabolism, triglyceride and splenic lesions in high fat diet-induced hypercholesterolemic rats. J Med Food. 2018:1–9.
Baba SA, Malik SA. Determination of total phenolic and flavonoid content, antimicrobial and antioxidant activity of a root extract of Arisaema jacquemontii Blume. J Taibah Univ Sci. 2015;9:449–54.
Ayub AM, Inaotombi D, Varij N, Victoria CK, Lalsanglura R. Antioxidant activity of fruits available in Aizawl market of Mizoram, India. Int J Pharm Biol Sci. 2010;1:76–81.
Livak KJ, Schmittgen TD. Analysis of relative gene expression data using real-time quantitative PCR and the 2(−Delta Delta C(T)) method. Methods (San Diego, Calif). 2001;25:402–8.
The American Heart Association. Suggested servings from each food group. http://www.heart.org/HEARTORG/HealthyLiving/HealthyEating/HealthyDietGoals/Suggested-Servings-from-Each-Food-Group_UCM_318186_Article.jsp#.W0gl9tUzbX6 (Accessed 6 July 2018).
Jorjong S, Butkhup L, Samappito S. Phytochemicals and antioxidant capacities of Mao-Luang (Antidesma bunius L.) cultivars from Northeastern Thailand. Food Chem. 2015;181:248–55.
Butkhup L, Samappito S. An analysis on flavonoids contents in Mao Luang fruits of fifteen cultivars (Antidesma bunius), grown in Northeast Thailand. Pak J Biol Sci. 2008;11:996–1002.
Sivasinprasasn S, Pantan R, Thummayot S, Tocharus J, Suksamrarn A, Tocharus C. Cyanidin-3-glucoside attenuates angiotensin II-induced oxidative stress and inflammation in vascular endothelial cells. Chem Biol Interact. 2016;260:67–74.
El-Aziz TAA, Mohamed RH, Pasha HF, Abdel-Aziz HR. Catechin protects against oxidative stress and inflammatory-mediated cardiotoxicity in adriamycin-treated rats. Clin Exp Med. 2012;12:233–40.
Ferk F, Chakraborty A, Jager W, Kundi M, Bichler J, Misik M, et al. Potent protection of gallic acid against DNA oxidation: results of human and animal experiments. Mutat Res. 2011;715:61–71.
Ferk F, Kundi M, Brath H, Szekeres T, Al-Serori H, Mišík M, et al. Gallic acid improves health-associated biochemical parameters and prevents oxidative damage of DNA in type 2 diabetes patients: results of a placebo-controlled pilot study. Mol Nutr Food Res. 2018;62:1700482.
Li CM, Guo YQ, Dong XL, Li H, Wang B, Wu JH, et al. Ethanolic extract of rhizome of Ligusticum chuanxiong Hort. (chuanxiong) enhances endothelium-dependent vascular reactivity in ovariectomized rats fed with high-fat diet. Food Funct. 2014;5:2475–85.
Alam MN, Bristi NJ, Rafiquzzaman M. Review on in vivo and in vitro methods evaluation of antioxidant activity. Saudi Pharm J. 2013;21:143–52.
Marin L, Miguelez EM, Villar CJ, Lombo F. Bioavailability of dietary polyphenols and gut microbiota metabolism: antimicrobial properties. Biomed Res Int. 2015;2015:18.
Makowczynska J, Grzegorczyk KI, Wysokinska H. Antioxidant activity of tissue culture-raised Ballota nigra L. plants grown ex vitro. Acta Pol Pharm. 2015;72:769–75.
Fernando CD, Soysa P. Total phenolic, flavonoid contents, in-vitro antioxidant activities and hepatoprotective effect of aqueous leaf extract of Atalantia ceylanica. BMC Complement Altern Med. 2014;14:395.
Daleprane JB, Freitas Vda S, Pacheco A, Rudnicki M, Faine LA, Dorr FA, et al. Anti-atherogenic and anti-angiogenic activities of polyphenols from propolis. J Nutr Biochem. 2012;23:557–66.
Silva BR, Pernomian L, Bendhack LM. Contribution of oxidative stress to endothelial dysfunction in hypertension. Front Physiol. 2012;3:441.
Chandler D, Woldu A, Rahmadi A, Shanmugam K, Steiner N, Wright E, et al. Effects of plant-derived polyphenols on TNF-α and nitric oxide production induced by advanced glycation endproducts. Mol Nutr Food Res. 2010;54:S141–S50.
Tangney C, Rasmussen HE. Polyphenols, inflammation, and cardiovascular disease. Curr Atheroscler Rep. 2013;15:324.
Cho S. CD36 as a therapeutic target for endothelial dysfunction in stroke. Curr Pharm Des. 2012;18:3721–30.
Tobias S, Habermeier A, Siuda D, Reifenberg G, Xia N, Closs EI, et al. Dexamethasone, tetrahydrobiopterin and uncoupling of endothelial nitric oxide synthase. J Geriatr Cardiol. 2015;12:528–39.
Cho HY, Park CM, Kim MJ, Chinzorig R, Cho CW, Song YS. Comparative effect of genistein and daidzein on the expression of MCP-1, eNOS, and cell adhesion molecules in TNF-α-stimulated HUVECs. Nutr Res Pract. 2011;5:381–8.
Lund CO, Mortensen A, Nilas L, Breinholt VM, Larsen J-J, Ottesen B. Estrogen and phytoestrogens: effect on eNOS expression and in vitro vasodilation in cerebral arteries in ovariectomized Watanabe heritable hyperlipidemic rabbits. Eur J Obstet Gynecol Reprod Biol. 2007;130:84–92.
Tai SC, Robb GB, Marsden PA. Endothelial nitric oxide synthase: a new paradigm for gene regulation in the injured blood vessel. Arterioscler Thromb Vasc Biol. 2004;24:405–12.
Mitrophanov AY, Groisman EA. Positive feedback in cellular control systems. BioEssays. 2008;30:542–55.
CommonCrawl
\begin{document} \runningauthor{Bartels, Stensbo-Smidt, Moreno-Mu\~noz, Boomsma, Frellsen, Hauberg} \twocolumn[ \aistatstitle{Adaptive Cholesky Gaussian Processes} \aistatsauthor{Simon Bartels \And Kristoffer Stensbo-Smidt \And Pablo Moreno-Mu\~noz} \aistatsaddress{University of Copenhagen \And Technical University of Denmark \And Technical University of Denmark} \aistatsauthor{Wouter Boomsma \And Jes Frellsen \And S\o{}ren Hauberg} \aistatsaddress{University of Copenhagen \And Technical University of Denmark \And Technical University of Denmark} ] \tikzset{external/force remake=false} \tikzexternaldisable \setlength{\figwidth}{.5\textwidth} \setlength{\figheight}{.25\textheight} \definecolor{color1}{rgb}{1,0.6,0.2} \definecolor{color0}{rgb}{0,0.4717,0.4604} \renewcommand{\boldsymbol}{\boldsymbol} \begin{abstract} We present a method to approximate Gaussian process regression models for large datasets by considering only a subset of the data. Our approach is novel in that the size of the subset is selected on the fly during exact inference with little computational overhead. From an empirical observation that the log-marginal likelihood often exhibits a linear trend once a sufficient subset of a dataset has been observed, we conclude that many large datasets contain redundant information that only slightly affects the posterior. Based on this, we provide probabilistic bounds on the full model evidence that can identify such subsets. Remarkably, these bounds are largely composed of terms that appear in intermediate steps of the standard Cholesky decomposition, allowing us to modify the algorithm to adaptively stop the decomposition once enough data have been observed. \end{abstract} \section{Introduction} \begin{figure} \caption{ The figure shows the log-marginal likelihood as a function of the size of the training set for five random permutations of the \texttt{pm25}{} dataset. 
The different colors correspond to different Gaussian process models, using the squared exponential kernel with length scale $\ell$. Depending on the model (but little on the permutation), the log-likelihood starts to exhibit a linear trend after processing a certain number of inputs. More examples can be found in \cref{app:evidence}. } \label{fig:evidence} \end{figure} The key computational challenge in Gaussian process regression is to evaluate the log-marginal likelihood of the $N$ observed data points, which is known to have cubic complexity \parencite{rasmussenwilliams}. It has been observed \parencite{Chalupka2013comparison} that the random-subset-of-data approximation can be a hard-to-beat baseline for approximate Gaussian process inference. However, the question of how to choose the size of the subset is non-trivial to answer. Here we make an attempt. We first make an empirical observation when studying the behavior of the log-marginal likelihood with an increasing number of observations. \cref{fig:evidence} shows this progression for a variety of models. We elaborate on this figure in \cref{subsec:intuition}, but for now note that after a certain number of observations, determined by model and dataset, the log-marginal likelihood starts to progress with a linear trend. This suggests that we can leverage this near-linearity to estimate the log-marginal likelihood of the full dataset after having seen only a subset of the data. However, as the point of linearity differs between models and datasets, this point cannot be set in advance but must be estimated on the fly. In this paper, we investigate three main questions, namely 1) how to detect the near-linear trend when processing datapoints sequentially, 2) when it is safe to assume that this trend will continue, and 3) how to implement an efficient stopping strategy, that is, one that adds little overhead to the exact computation.
We approach these questions from a (frequentist) probabilistic numerics perspective \parencite{hennigRSPA2015}. By treating the dataset as a collection of independent and identically distributed random variables, we provide expected upper and lower bounds on the log-marginal likelihood, which become tight when the above-mentioned linear trend arises. These bounds can be evaluated with little computational overhead by leveraging intermediate computations performed by the Cholesky decomposition that is commonly used for evaluating the log-marginal likelihood. We refer to our method as \emph{Adaptive Cholesky Gaussian Process} (ACGP\xspace{}). Our approach has a complexity of $\mathcal{O}(M^3)$, where $M$ is the size of the processed subset, adding an overhead of $\mathcal{O}(M)$ to the Cholesky decomposition. The main difference from previous work is that our algorithm does \emph{not necessarily} look at the whole dataset, which makes it particularly useful in settings where the dataset is so large that even linear-time approximations are not tractable. When a dataset contains a large amount of redundant data, ACGP\xspace allows the inference procedure to stop early, saving precious compute---especially when the kernel function is expensive to evaluate. \section{Background} \label{sec:background} We use a \textsc{python}-inspired index notation, abbreviating for example $[y_1, \ldots, y_{n-1}]^{\top}$ as $\bm{y}_{:n}$; observe that the indexing starts at 1. With $\operatorname{Diag}$ we denote the operator that sets all off-diagonal entries of a matrix to $0$. \subsection{Gaussian Process Regression} We start by briefly reviewing Gaussian process (GP) regression models and how they are trained (see \textcite[Chapters 2 and 5.4]{rasmussenwilliams}). We consider the training dataset $\mathcal{D} = \{\bm{x}_n, y_n\}^N_{n=1}$ with inputs $\bm{x}_n \in \mathbb{R}^{D}$ and outputs $y_n \in \mathbb{R}$.
The inputs are collected in the matrix $\bm{X} = [\bm{x}_1, \bm{x}_2, \ldots, \bm{x}_N]^{\top} \in \mathbb{R}^{N\times D}$. A GP $f \sim \mathcal{GP}(m(\bm{x}), k(\bm{x}, \bm{x}'))$ is a collection of random variables defined in terms of a mean function, $m(\bm{x})$, and a covariance function or \emph{kernel}, $k(\bm{x}, \bm{x}') = \operatorname{cov}(f(\bm{x}), f(\bm{x}'))$, such that any finite collection of these random variables has a joint Gaussian distribution. Hence, the prior over $\bm{f}\colonequals f(\bm{X})$ is $\mathcal{N}(\bm{f}; m(\boldsymbol X), \mK_\text{ff})$, where we have used the shorthand notation $\mK_\text{ff} = k(\bm{X}, \bm{X})$. Without loss of generality, we assume a zero-mean prior, $m(\cdot)\colonequals 0$. We will consider the observations $\bm{y}$ as being noise-corrupted versions of the function values $\bm{f}$, and we shall parameterize this corruption through the likelihood function $p(\bm{y} \operatorname{|} \bm{f})$, which for regression tasks is typically assumed to be Gaussian, $p(\bm{y}\operatorname{|}\bm{f}) = \mathcal{N}(\bm{y}; \bm{f}, \sigma^2\I)$. For such a model, the posterior over function values at test inputs $\boldsymbol X_*$ can be computed in closed form: $p(\bm{f}_*\operatorname{|}\bm{y})= \mathcal{N}(\bm{m}_* , \boldsymbol S_*)$, where \begin{align*} \bm{m}_* &= k(\boldsymbol X_*, \boldsymbol X)\bm{K}^{-1}\bm{y} \quad \text{ and } \\ \boldsymbol S_* &= k(\boldsymbol X_*, \boldsymbol X_*) - k(\boldsymbol X_*, \boldsymbol X)\bm{K}^{-1}k(\boldsymbol X, \boldsymbol X_*) \end{align*} with $\bm{K}\colonequals \mK_\text{ff} + \sigma^2\I$. By marginalizing over the function values, we obtain the marginal likelihood, $p(\bm{y})= \int p(\bm{y}\operatorname{|}\bm{f})p(\bm{f}) d\bm{f}$, the de facto metric for comparing the performance of models in the Bayesian framework. While this integral is not tractable in general, it does have a closed-form solution for Gaussian process regression.
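The posterior equations above translate directly into a few lines of numpy. The following is an illustrative sketch only (the RBF kernel, the noise level $\sigma^2 = 0.1$, and the toy data are arbitrary choices, not part of the paper):

```python
import numpy as np

def rbf(A, B, ell=1.0):
    # squared exponential kernel k(a, b) = exp(-||a - b||^2 / (2 ell^2))
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / ell ** 2)

def gp_posterior(X, y, Xs, sigma2=0.1):
    # m_* = k(X_*, X) K^{-1} y,  S_* = k(X_*, X_*) - k(X_*, X) K^{-1} k(X, X_*)
    K = rbf(X, X) + sigma2 * np.eye(len(X))   # K = K_ff + sigma^2 I
    Ks = rbf(Xs, X)
    m = Ks @ np.linalg.solve(K, y)
    S = rbf(Xs, Xs) - Ks @ np.linalg.solve(K, Ks.T)
    return m, S

rng = np.random.default_rng(0)
X, Xs = rng.normal(size=(20, 2)), rng.normal(size=(5, 2))
y = np.sin(X[:, 0])
m, S = gp_posterior(X, y, Xs)
# conditioning on data can only shrink the variance below the prior k(x, x) = 1
assert np.all(np.diag(S) <= 1.0 + 1e-9)
```

In practice one would use a Cholesky factorization rather than `np.linalg.solve`, which is exactly the point of the next subsection.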
Given the GP prior, $p(\bm{f}) = \mathcal{N}(\mathbf{0}, \mK_\text{ff})$, and the Gaussian likelihood, the log-marginal likelihood takes the closed form \begin{align} \label{eq:log_marginal} \log p(\bm{y}) = -\frac{1}{2}\left(\log \det{2\pi\bm{K}} + \bm{y}^{\top}\bm{K}^{-1}\bm{y} \right)\, . \end{align} Evaluating this expression costs $\mathcal{O}(N^3)$ operations. \subsection{Background on the Cholesky decomposition} \label{sec:cholesky} Inverting covariance matrices such as $\bm{K}$ directly is slow and numerically unstable. Therefore, in practice, one typically leverages the Cholesky decomposition of the covariance matrices to compute the inverses. The Cholesky decomposition of a symmetric and positive definite matrix $\boldsymbol K$ is the unique lower\footnote{Equivalently, one can define $\boldsymbol L$ to be upper triangular such that $\boldsymbol K=\boldsymbol L^{\top} \boldsymbol L$.} triangular matrix $\boldsymbol L$ such that $\boldsymbol K=\boldsymbol L\boldsymbol L^{\top}$ \parencite[Theorem 4.2.7]{golub2013matrix4}. The advantage of having such a decomposition is that solving linear systems with triangular matrices amounts to forward or backward substitution. There are different ways to compute $\boldsymbol L$. The Cholesky of a $1\times 1$ matrix is the square root of the scalar. For larger matrices, \begin{align} \label{eq:cholesky} \operatorname{chol}[\boldsymbol K]= \begin{bmatrix}\operatorname{chol}[\boldsymbol K_{:s,:s}] & \boldsymbol 0\\ \boldsymbol T & \operatorname{chol}\left[\boldsymbol K_{s:,s:}-\boldsymbol T\boldsymbol T^{\top}\right] \end{bmatrix}, \end{align} where $\boldsymbol T\colonequals \boldsymbol K_{s:,:s}{\operatorname{chol}[\boldsymbol K_{:s,:s}]}^{-\top}$ and $s$ is any integer between $1$ and the size of $\boldsymbol K$.
Hence, extending a given Cholesky to a larger matrix requires three steps: \begin{enumerate} \setlength\itemsep{0em} \item solve the linear system for $\boldsymbol T$, \item apply the downdate $\boldsymbol K_{s:,s:}-\boldsymbol T\boldsymbol T^{\top}$ and \item compute the Cholesky of the downdated matrix. \end{enumerate} An important observation is that $\boldsymbol K_{s:,s:}-\boldsymbol T\boldsymbol T^{\top}$ is the posterior covariance matrix $\boldsymbol S_*+\sigma^2\boldsymbol I$ when considering $\boldsymbol X_{s:}$ as test points. We will make use of this observation in \cref{sec:bound_realization}. The log-determinant of $\boldsymbol K$ can be obtained from the Cholesky using $\log\det{\boldsymbol K}=2\sum_{n=1}^{N}\log \boldsymbol L_{nn}$. A similar recursive relationship exists between the quadratic form $\boldsymbol y^{\top} \inv{\boldsymbol K}\boldsymbol y$ and $\inv{\boldsymbol L}\boldsymbol y$ (see appendix, \cref{eq:recursive_LES}).
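The three steps above can be sketched in numpy as follows (a toy illustration of the recursion in \cref{eq:cholesky}, not the paper's implementation; for brevity, Step 1 uses a general-purpose solver where a dedicated triangular solve would be used in practice):

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.normal(size=(8, 8))
K = A @ A.T + 8 * np.eye(8)   # a symmetric positive definite matrix
s = 5

L_top = np.linalg.cholesky(K[:s, :s])        # chol[K_{:s,:s}], assumed known
# Step 1: T = K_{s:,:s} chol[K_{:s,:s}]^{-T}  (a triangular solve in practice)
T = np.linalg.solve(L_top, K[:s, s:]).T
# Step 2: downdate the trailing block
D = K[s:, s:] - T @ T.T
# Step 3: Cholesky of the downdated matrix
L_bot = np.linalg.cholesky(D)

L = np.block([[L_top, np.zeros((s, K.shape[0] - s))], [T, L_bot]])
assert np.allclose(L, np.linalg.cholesky(K))
# log det K = 2 * sum_n log L_nn
assert np.allclose(2 * np.log(np.diag(L)).sum(), np.linalg.slogdet(K)[1])
```

The two assertions confirm that the blocked recursion reproduces the full decomposition and that the log-determinant follows from the diagonal of $\boldsymbol L$.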
One may also consider separately the approximation of the quadratic form via linear solvers such as conjugate gradients \parencite{hestenes1952methods,Cutajar2016Preconditioning} and the approximation of the log-determinant \parencite{Fitzsimons2017BayesianDeterminant,Fitzsimons2017EntropicDeterminant,Dong2017scalable}. Another line of research is scaling the hardware \parencite{Nguyen2019distributedGP}. All of the approaches referenced above have computational complexity at least $\mathcal{O}(N)$ (with the exception of \textcite{hensman2013gaussian}, which uses mini-batching). However, the size of a dataset is seldom a deliberately chosen value; more often, it is simply where the sampling procedure happened to stop. The dependence on the dataset size implies that more data requires a larger computational budget, even though the additional data might not be helpful. This is the main motivation for our work: to derive an approximation algorithm whose computational complexity does not depend on redundant data. The work closest in spirit to the present paper is by \textcite{artemev2021cglb}, who also propose lower and upper bounds on the quadratic form and the log-determinant. There are a number of differences, however. Their bounds rely on the method of conjugate gradients, whereas we work directly with the Cholesky decomposition. Furthermore, while their bounds are deterministic, ours are probabilistic, which can make them tighter in certain cases, as they do not need to hold for all worst-case scenarios. This is also the main difference from the work of \textcite{hensman2013gaussian}. Their bounds allow for mini-batching, but these are inherently deterministic when applied with full batch size. \section{Methodology} \label{sec:methods} In the following, we will sketch our method. Our main goal is to convey the idea and intuition. To this end, we use suggestive notation. We refer the reader to the appendix for a more thorough and formal treatment.
\subsection{Intuition on the linear extrapolation} \label{subsec:intuition} The marginal likelihood is typically presented as a joint distribution, but, using the product rule of probability, one can also view it from a cumulative perspective as the sum of log-conditionals: \begin{align} \label{eq:log_marginal_alternative} \log p(\bm{y}) &= \sum_{n=1}^N \log p(y_n \operatorname{|} \bm{y}_{:n})\ . \end{align} With this equation in hand, the phenomenon in \cref{fig:evidence} becomes much clearer. The figure shows the value of \cref{eq:log_marginal_alternative} for an increasing number of observations $n$. When the plot exhibits a linear trend, it is because the summands $\log p(y_{n}\operatorname{|} \bm{y}_{:n})$ become approximately constant, implying that the model is not gaining additional knowledge. In other words, new outputs are approximately conditionally independent given the output observations seen so far. The key problem addressed in this paper is how to estimate the full marginal likelihood, $p(\bm{y})$, from only a subset of $M$ observations. The cumulative view of the log-marginal likelihood in \cref{eq:log_marginal_alternative} is our starting point. In particular, we will provide probabilistic bounds, which are functions of seen observations, on the estimate of the full marginal likelihood. These bounds will allow us to decide, on the fly, when we have seen enough observations to accurately estimate the full marginal likelihood. \subsection{Stopping strategy} Suppose that we have processed $M$ data points with $N-M$ data points yet to be seen.
We can then decompose \cref{eq:log_marginal_alternative} into a sum of terms, which have already been computed, and a remaining sum \begin{align*} \log p(\boldsymbol y) &= \underbrace{\sum_{n=1}^M \log p(y_{n}\mid \boldsymbol y_{:n})}_{\log p(\boldsymbol y_{\mathcal{A}}): \text{ processed}} + \underbrace{\sum_{n=M+1}^{N} \log p(y_{n}\mid \boldsymbol y_{:n})}_{\log p(\boldsymbol y_{\mathcal{B}} \mid \boldsymbol y_{\mathcal{A}}): \text{ remaining}}. \end{align*} Recall that we consider the $\boldsymbol x_i, y_i$ as independent and identically distributed random variables. Hence, we could estimate $\log p(\boldsymbol y_{\mathcal{B}} \mid \boldsymbol y_{\mathcal{A}})$ as $(N-M)\log p(\boldsymbol y_{\mathcal{A}})/M$. Yet this estimator is biased, since $(\boldsymbol x_{M+1}, y_{M+1}), \dots, (\boldsymbol x_N, y_N)$ interact non-linearly through the kernel function. Instead, we will derive lower and upper bounds, $\mathcal{L}$ and $\mathcal{U}$, that hold in expectation. To obtain unbiased estimates, we use the last $m$ processed points, such that, conditioned on the points up to $s\colonequals M-m$, the expected value of $\log p(\boldsymbol y)$ can be bounded from above and below: \begin{align*} \mathbb{E}[\mathcal{L}\operatorname{|} \bm{X}_{:s}, \bm{y}_{:s}] \leq \mathbb{E}[\log p(\boldsymbol y)\operatorname{|} \bm{X}_{:s}, \bm{y}_{:s}] \leq \mathbb{E}[\mathcal{U} \operatorname{|} \bm{X}_{:s}, \bm{y}_{:s}], \end{align*} and the observations from $s$ to $M$ can be used to estimate $\mathcal{L}$ and $\mathcal{U}$. \cref{fig:method_sketch} shows a sketch of our approach. \begin{figure} \caption{ Illustration of how ACGP\xspace proceeds during estimation of the full $\log p(\bm{y})$. The Cholesky decomposition works by processing data in blocks of size $m$ (see \cref{eq:cholesky}), so ACGP\xspace computes the log-marginal likelihood in blocks of size $m$ as well.
In the illustration, $s$ datapoints have been fully processed, meaning we have the exact $\log p(\bm{y}_{:s})$ for those. As the next $m$ data are being processed, we can compute the bounds on the \emph{full} $\log p(\bm{y})$ (\latin{i.e.}\xspace, including the unprocessed data) after step 2 of the Cholesky decomposition. If the stopping conditions in \cref{eq:stop_cond_r} are met, we return the linear extrapolation as an estimate of $\log p(\bm{y})$. \cref{thm:main,thm:guaranteed_precision} describe the conditions under which this estimate achieves the desired error with high probability. } \label{fig:method_sketch} \end{figure} We can then detect when the upper and lower bounds are sufficiently close to each other, and stop the computation early once the approximation is sufficiently good. More precisely, given a desired relative error $r$, we stop when \begin{align} \label{eq:stop_cond_r} \frac{\mathcal{U}-\mathcal{L}}{2\min(|\mathcal{U}|, |\mathcal{L}|)}<r \quad\text{and}\quad \operatorname{sign}(\mathcal{U})=\operatorname{sign}(\mathcal{L})\, . \end{align} If the bounds hold, then the estimator $(\mathcal{L}+\mathcal{U})/2$ achieves the desired relative error (\cref{lemma:rel_err_bound} in the appendix). This is in contrast to other approximations, where one specifies a computational budget rather than a desired accuracy. \subsection{Bounds on the log-marginal likelihood} \label{sec:bounds} From \cref{eq:log_marginal}, we see that the log-marginal likelihood requires computing a log-determinant of the kernel matrix and a quadratic term. In the following, we present upper and lower bounds for both the log-determinant ($\mathcal{U}_\text{D}$ and $\mathcal{L}_\text{D}$, respectively) and the quadratic term ($\mathcal{U}_\text{Q}$ and $\mathcal{L}_\text{Q}$). We will need the posterior equations for the observations, \latin{i.e.}\xspace, $p(y_n \operatorname{|} \boldsymbol y_{:n})$, and we will need them as functions of test inputs $\boldsymbol x_*$ and $\boldsymbol x_{*}'$.
To this end, define \begin{align*} \bm{m}_*^{(n)}(\boldsymbol x_*)&\colonequals k(\boldsymbol x_*, \boldsymbol X_{:n})\inv{\boldsymbol K_{:n, :n}}\boldsymbol y_{:n} \\ \intertext{and} \begin{split} \bm{\Sigma}_*^{(n)}(\boldsymbol x_*, \boldsymbol x_*')&\colonequals k(\boldsymbol x_*, \boldsymbol x_*')+\sigma^2\delta_{\boldsymbol x_*,\boldsymbol x_*'}\\ &\phantom{\colonequals\ } -k(\boldsymbol x_*, \boldsymbol X_{:n})\inv{\boldsymbol K_{:n,:n}}k(\boldsymbol X_{:n}, \boldsymbol x_*'), \end{split} \end{align*} such that $p(y_n \operatorname{|} \boldsymbol y_{:n})=\mathcal{N}(y_n; \bm{m}_*^{(n)}(\boldsymbol x_n), \bm{\Sigma}_*^{(n)}(\boldsymbol x_n, \boldsymbol x_n))$, which allows us to rewrite \cref{eq:log_marginal_alternative} as \begin{align} \label{eq:log_marginal_elementwise} \begin{split} -2\log p(\boldsymbol y)-N\log 2\pi &= \sum_{n=1}^N \log \bm{\Sigma}_*^{(n-1)}(\boldsymbol x_{n}, \boldsymbol x_{n})\\ &\quad +\sum_{n=1}^N \frac{(y_n-\bm{m}_*^{(n-1)}(\boldsymbol x_{n}))^2}{\bm{\Sigma}_*^{(n-1)}(\boldsymbol x_n, \boldsymbol x_n)}\, . \end{split} \end{align} This reveals that the log-determinant can be written as a sum of log posterior variances, and that the quadratic form can be expressed as a sum of normalized square errors. Other key ingredients for our bounds are estimates of the average posterior variance and the average covariance. Therefore, define the shorthands \begin{align*} \boldsymbol V&\colonequals \operatorname{Diag} \left[ \boldsymbol \Sigma_*^{(s)}(\boldsymbol X_{s:M}, \boldsymbol X_{s:M}) \right] \\ \intertext{and} \boldsymbol C&\colonequals \sum_{i=1}^{\frac{m}{2}} \boldsymbol \Sigma_*^{(s)}(\boldsymbol x_{s+2i}, \boldsymbol x_{s+2i-1})\boldsymbol e_{2i}\boldsymbol e_{2i}^{\top}\ , \end{align*} where $\boldsymbol e_j \in \mathbb{R}^{m}$ is the $j$-th standard basis vector. The matrix $\boldsymbol V$ is simply the diagonal of the posterior covariance matrix $\boldsymbol\Sigma_*$.
The matrix $\boldsymbol C$ consists of every \emph{second} entry of the first off-diagonal of $\boldsymbol \Sigma_*$. These elements are placed on the diagonal, with every second element being $0$. The reason for taking every second element is of a theoretical nature; see \cref{remark:estimated_correlation} in the appendix. \subsubsection{Bounds on the log-determinant} Both bounds, lower and upper, use that $\log\det{\boldsymbol K}=\log\det{\boldsymbol K_{:s,:s}}+\log\det{\boldsymbol \Sigma_*^{(s)}(\boldsymbol X_{s:}, \boldsymbol X_{s:})}$, which follows from the matrix-determinant lemma. The first term is available from the already processed datapoints. It is the second addend that needs to be estimated, which we approach from the perspective of \cref{eq:log_marginal_elementwise}. It is well-established that, for a fixed input, more observations decrease the posterior variance, and that this decrease cannot cross the threshold $\sigma^2$ \parencite[Question 2.9.4]{rasmussenwilliams}. This remains true when taking the expectation over the input. Hence, the average of the posterior variances for inputs $\boldsymbol X_{s:M}$ is with high probability an overestimate of the average posterior variance for inputs with higher index. This motivates our upper bound on the log-determinant: \begin{align} \mathcal{U}_\text{D} &= \log\det{\boldsymbol K_{:s,:s}} + (N-s)\mu_D, \label{eq:bound_ud} \\\mu_D&\colonequals \frac{1}{m} \sum_{i=1}^m \log\left(\boldsymbol V_{ii}\right). \notag \eqcomment{average log posterior variance} \end{align} To arrive at the lower bound on the log-determinant, we need an expression for how fast the average posterior variance could decrease, which is governed by the covariance between inputs. The variable $\rho_D$ measures the average covariance, and we show in \cref{thm:log_det_lower_bound} in the appendix that this overestimates the decrease per step with high probability.
Since the decrease cannot exceed $\sigma^2$, we introduce $\psi_D$ to denote the step at which this threshold would be crossed. \begin{align} \begin{split} \mathcal{L}_\text{D} &= \log\det{\boldsymbol K_{:s,:s}} +(N-\psi_D)\log \sigma^2\\ &\quad +(\psi_D-s)\left(\mu_D-\frac{\psi_D-s-1}{2}\rho_D\right)\label{eq:bound_ld} \end{split} \\\rho_D&\colonequals \frac{2}{m\sigma^4}\sum_{i=1}^{\frac{m}{2}}\boldsymbol C^2_{2i,2i} \notag \eqcomment{average square covariance} \\\psi_D&\colonequals \min\left(N, s+\left\lfloor\frac{\tilde{\mu}_D-\log\sigma^2}{\tilde{\rho}_D}+\frac{1}{2}\right\rfloor\right)\label{eq:variance_cutoff_step} \leqcomment{step at which $\mu_D$ would reach $\log\sigma^2$} \end{align} where variables with a tilde refer to a preceding estimate, that is, exchanging the indices $M$ for $M-m$ and $s$ for $s-m$. Both bounds collapse to the exact solution when $s=N$. The bounds are close when the average covariance between inputs, $\rho_D$, is small. This occurs, for example, when the average variance is close to $\sigma^2$, since the variance is an upper bound on the covariance. Another case where $\rho_D$ is small is when points are not correlated to begin with. \subsubsection{Bounds on the quadratic term} Denote by $\boldsymbol r_{*}\colonequals \boldsymbol y_{s:}-\boldsymbol m_*^{(s)}(\boldsymbol X_{s:})$ the prediction errors (the residuals) when considering the first $s$ points as training set and the remaining inputs as test set. Analogous to the bounds on the log-determinant, one can show with the matrix inversion lemma that $\boldsymbol y^{\top}\inv{\boldsymbol K}\boldsymbol y=\boldsymbol y_{:s}^{\top}\inv{\boldsymbol K_{:s, :s}}\boldsymbol y_{:s}+\boldsymbol r_{*}^{\top}\inv{(\boldsymbol \Sigma_*^{(s)}(\boldsymbol X_{s:}, \boldsymbol X_{s:}))}\boldsymbol r_{*}$. Again, the first term will turn out to be already computed.
With a slight abuse of notation let $\boldsymbol r_{*}\colonequals \boldsymbol y_{s:M}-\boldsymbol m_*^{(s)}(\boldsymbol X_{s:M})$, that is, we consider only the first $m$ entries. Our lower bound arises from another well-known lower bound: $\boldsymbol a^{\top}\inv{\boldsymbol A}\boldsymbol a\geq 2\boldsymbol a^{\top}\boldsymbol b-\boldsymbol b^{\top}\boldsymbol A\boldsymbol b$ for all $\boldsymbol b$ (see for example \textcite{kim2018scalableStructureDiscovery,artemev2021cglb}). We write $\boldsymbol a^{\top} \inv{\boldsymbol A}\boldsymbol a$ as $\boldsymbol a^{\top}\operatorname{Diag}[\boldsymbol a]\inv{\left(\operatorname{Diag}[\boldsymbol a]\boldsymbol A\operatorname{Diag}[\boldsymbol a]\right)}\operatorname{Diag}[\boldsymbol a]\boldsymbol a$ and choose $\boldsymbol b\colonequals\inv{\operatorname{Diag}[\boldsymbol A]}\boldsymbol 1$. The result, after some cancellations, is the following probabilistic lower bound on the quadratic term: \begin{align} \mathcal{L}_\text{Q} &= \boldsymbol y_{:s}^{\top}\inv{\boldsymbol K_{:s,:s}}\boldsymbol y_{:s} + (N-s)\left(\mu_Q-\max(0, \rho_Q)\right) \label{eq:bound_lq} \\\mu_Q&\colonequals \frac{1}{m}\boldsymbol r_{*}^{\top}\inv{\boldsymbol V} \boldsymbol r_{*} \notag \eqcomment{average calibrated square error}\\ \begin{split} \rho_Q&\colonequals \frac{N-s-1}{2m} \\ &\phantom{\colonequals\ }\cdot\sum_{j=\frac{s+2}{2}}^{\frac{M}{2}}\frac{\boldsymbol r_{*,2j}\boldsymbol r_{*,2j-1}\bm{\Sigma}_*^{(s)}(\boldsymbol x_{2j}, \boldsymbol x_{2j-1})}{\bm{\Sigma}_*^{(s)}(\boldsymbol x_{2j}, \boldsymbol x_{2j})\bm{\Sigma}_*^{(s)}(\boldsymbol x_{2j-1}, \boldsymbol x_{2j-1})} \notag \end{split} \leqcomment{calibrated error correlation} \end{align} Our upper bound arises from the element-wise perspective of \cref{eq:log_marginal_elementwise}. We assume that the expected mean square error $(y_n-\bm{m}_*^{(n-1)}(\boldsymbol x_n))^2$ decreases with more observations. 
However, though mean square error and variance both decrease, their expected ratio may increase or decrease depending on the choice of kernel, the dataset, and the number of processed points. Using the average error calibration with a correction for the decreasing variance, we arrive at our upper bound on the quadratic term: \begin{align} \mathcal{U}_\text{Q} &= \boldsymbol y_{:s}^{\top}\inv{\boldsymbol K_{:s,:s}}\boldsymbol y_{:s} + (N-s)\left(\mu_Q + \rho_Q'\right) \label{eq:bound_uq} \\\rho_Q'&\colonequals \frac{N-s-1}{m}\frac{1}{\sigma^4}\boldsymbol r_{*}^{\top}\boldsymbol C\inv{\boldsymbol V}\boldsymbol C\boldsymbol r_{*} \notag \leqcomment{square error correlation} \end{align} In the appendix (\cref{thm:quad_upper_bound}), we present a tighter bound which uses a construction similar to that for the lower bound on the log-determinant, switching the form at a step $\psi$. Again, the bounds collapse to the true quantity when $s=N$. The bounds give good estimates when the average covariance between inputs is low or when the model can predict new data well, that is, when $\boldsymbol r_{*}$ is close to $0$. \subsection{Validity of bounds and stopping condition} \label{sec:main_theoretical_result} For the upper bound on the quadratic form, we need to make a (technical) assumption. It expresses the intuition that the (expected) mean square error should not increase with more data---a model should not become worse as its training set grows. It is possible to construct counter-examples where this assumption is violated: for example, when $\boldsymbol y\sim\mathcal{N}(\boldsymbol 0, \boldsymbol I)$ and $p(\boldsymbol f)=\mathcal{N}(\boldsymbol 0, \boldsymbol K)$, the posterior mean is with high probability no longer zero. However, our experiments in \cref{sec:experiments} indicate that this assumption is not problematic in practice.
\begin{assumption} \label{assumption:targets} Assume that \begin{align*} \mathbb{E}\left[f(\boldsymbol x, \boldsymbol x')(y_j-\boldsymbol m_{*}^{(j-1)}(\boldsymbol x))^2\mid \boldsymbol X_{:s}, \boldsymbol y_{:s}\right] \\\quad\leq \mathbb{E}\left[f(\boldsymbol x, \boldsymbol x')(y_j-\boldsymbol m_{*}^{(s)}(\boldsymbol x))^2\mid \boldsymbol X_{:s}, \boldsymbol y_{:s}\right] \end{align*} for all $s\in\{1, \dots, N\}$ and for all $s <j\leq N$, where $f(\boldsymbol x, \boldsymbol x')$ is either $\frac{1}{\boldsymbol\Sigma_*^{(s)}(\boldsymbol x, \boldsymbol x)}$ or $\frac{\boldsymbol\Sigma_*^{(s)}(\boldsymbol x, \boldsymbol x')^2}{\sigma^4 \boldsymbol\Sigma_*^{(s)}(\boldsymbol x, \boldsymbol x)}$. \end{assumption} \begin{theorem} \label{thm:main} Assume that $(\boldsymbol x_1, y_1), \dots, (\boldsymbol x_N, y_N)$ are independent and identically distributed and that \cref{assumption:targets} holds. For any $s \in \{1, \dots, N\}$, the bounds defined in \cref{eq:bound_ld,eq:bound_ud,eq:bound_uq,eq:bound_lq} hold in expectation: \begin{align*} \begin{split} \mathbb{E}[\mathcal{L}_D\mid \boldsymbol X_{:s}, \boldsymbol y_{:s}]&\leq \mathbb{E}[\log\det{\boldsymbol K} \mid \boldsymbol X_{:s}, \boldsymbol y_{:s}]\\ &\leq \mathbb{E}[\mathcal{U}_D\mid \boldsymbol X_{:s}, \boldsymbol y_{:s}] \end{split}\\ \intertext{and} \begin{split} \mathbb{E}[\mathcal{L}_Q\mid \boldsymbol X_{:s}, \boldsymbol y_{:s}]&\leq \mathbb{E}[\boldsymbol y^{\top}\inv{\boldsymbol K}\boldsymbol y \mid \boldsymbol X_{:s}, \boldsymbol y_{:s}]\\ &\leq \mathbb{E}[\mathcal{U}_Q\mid \boldsymbol X_{:s}, \boldsymbol y_{:s}]\ . \end{split} \end{align*} \end{theorem} The proof can be found in \cref{sec:main_theorem_proof}, and a sketch in \cref{sec:proof_sketch}. 
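To build intuition for \cref{thm:main}, the log-determinant decomposition and the upper bound $\mathcal{U}_\text{D}$ can be exercised on a toy problem. The following is a hedged numpy sketch (kernel, data, and block sizes are arbitrary choices); only the deterministic identities behind the bounds are asserted, since the bounds themselves hold in expectation:

```python
import numpy as np

rng = np.random.default_rng(2)
N, s, m, sigma2 = 40, 20, 10, 0.1
X = rng.normal(size=(N, 1))
d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
K = np.exp(-0.5 * d2) + sigma2 * np.eye(N)   # RBF kernel plus noise

# posterior covariance of all unprocessed points, conditioned on X_{:s}
Sigma = K[s:, s:] - K[s:, :s] @ np.linalg.solve(K[:s, :s], K[:s, s:])
logdet = lambda A: np.linalg.slogdet(A)[1]
# matrix-determinant lemma: log det K = log det K_{:s,:s} + log det Sigma_*
assert np.allclose(logdet(K), logdet(K[:s, :s]) + logdet(Sigma))

V = np.diag(Sigma)[:m]              # posterior variances of the block X_{s:s+m}
assert np.all(V >= sigma2 - 1e-9)   # noise floor: variances never drop below sigma^2
mu_D = np.log(V).mean()             # average log posterior variance
U_D = logdet(K[:s, :s]) + (N - s) * mu_D   # the upper bound of eq. (U_D)
print("exact log det:", logdet(K), " upper bound:", U_D)
```

On redundant data the gap between the extrapolation and the exact value shrinks, which is exactly the regime in which the stopping rule fires.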
\begin{theorem} \label{thm:guaranteed_precision} Let $r>0$ be a desired relative error and set $\mathcal{U}\colonequals -\frac{1}{2}\left(\mathcal{L}_D+\mathcal{L}_Q+N\log2\pi\right)$ and $\mathcal{L}\colonequals -\frac{1}{2}\left(\mathcal{U}_D+\mathcal{U}_Q+N\log2\pi\right)$. If the stopping conditions hold, that is, $\operatorname{sign}(\mathcal{U})=\operatorname{sign}(\mathcal{L})$ and \cref{eq:stop_cond_r} is true, then $\log p(\boldsymbol y)$ can be estimated from $(\mathcal{U}+\mathcal{L})/2$ such that, under the condition $\mathcal{L}_D\leq \log(\det{\boldsymbol K}) \leq \mathcal{U}_D \text{ and } \mathcal{L}_Q \leq \boldsymbol y^{\top}\inv{\boldsymbol K}\boldsymbol y \leq \mathcal{U}_Q$, the relative error is smaller than $r$, formally: \begin{align} \label{eq:main_result} \left|{\log p(\boldsymbol y)-(\mathcal{U}+\mathcal{L})/2}\right| \leq r|{\log p(\boldsymbol y)}|. \end{align} \end{theorem} The proof follows from \cref{lemma:rel_err_bound} in the appendix. \cref{thm:main} is a first step towards a probabilistic statement for \cref{eq:main_result}, that is, a statement of the form $\proba{\left|\frac{\log p(\boldsymbol y)-\frac{1}{2}(\mathcal{U}+\epsilon_{\mathcal{U},\delta}+\mathcal{L}-\epsilon_{\mathcal{L},\delta})}{\log p(\boldsymbol y)}\right| > r} \leq \delta$. In earlier work \parencite{bartels2021stoppedcholesky}, we have shown that such a statement can be obtained for the log-determinant. Theoretically, we can obtain such a statement using standard concentration inequalities and a union bound over $s$. In practice, however, the error-guarding constants $\epsilon$ would render the result trivial. A union bound can be avoided using Hoeffding's inequality for martingales \parencite{fan2012hoeffding}. However, this requires replacing $s\colonequals M-m$ by a stopping time independent of $M$, which we regard as future work. \subsection{Practical implementation} \label{sec:bound_realization} The proposed bounds turn out to be surprisingly cheap to compute. 
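Before turning to the implementation details, the deterministic stopping guarantee of \cref{thm:guaranteed_precision} can be checked in isolation. The following sketch (plain \textsc{NumPy}; the helper name \texttt{should\_stop} is ours, not from the released code) verifies that whenever deterministic bounds $\mathcal{L}\leq x\leq\mathcal{U}$ satisfy the two stopping conditions, the midpoint estimate has relative error at most $r$:

```python
import numpy as np

def should_stop(lower, upper, r):
    """Stopping rule (hypothetical helper): bounds share a sign and their
    gap is small relative to the smaller absolute value."""
    if np.sign(lower) != np.sign(upper) or lower == 0 or upper == 0:
        return False
    return (upper - lower) / (2 * min(abs(upper), abs(lower))) <= r

rng = np.random.default_rng(0)
r = 0.05
for _ in range(1000):
    x = rng.uniform(-100, 100)        # "true" quantity, e.g. log p(y)
    lower = x - rng.uniform(0, 10)    # an arbitrary valid bracket around x
    upper = x + rng.uniform(0, 10)
    if should_stop(lower, upper, r):
        estimate = (upper + lower) / 2
        assert abs(x - estimate) <= r * abs(x)
```

The guarantee holds because, with both bounds on the same side of zero, $\min(|\mathcal{U}|,|\mathcal{L}|)\leq|x|$, so the half-gap bounds the absolute error by $r|x|$.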
If we set the block-size of the Cholesky decomposition to be $m$, the matrix $\boldsymbol\Sigma_*^{(s)}$ is exactly the downdated matrix in Step~2 of the algorithm outlined in \cref{sec:cholesky}. Similarly, the expressions for the bounds on the quadratic form appear while solving the linear equation system $\inv{\boldsymbol L}\boldsymbol y$. A slight modification to the Cholesky algorithm is enough to compute these bounds on the fly during the decomposition with little overhead. The stopping conditions can be checked before or after Step~3 of the Cholesky decomposition (\cref{sec:cholesky}). Here, we explore the former option, since Step~3 is the bottleneck due to being less parallelizable than the other steps. Note that the definition of the bounds does not involve variables $\boldsymbol x, y$ which have not been processed yet. This allows an on-the-fly construction of the kernel matrix, avoiding potentially expensive kernel function evaluations. Furthermore, it is \emph{not} necessary to allocate $\mathcal{O}(N^2)$ memory in advance; a user can specify a maximal number of processed datapoints, hoping that stopping occurs before hitting that limit. We provide the pseudo-code for this modified algorithm, our key algorithmic contribution, in \cref{sec:proof_sketch} (\cref{algo:chol_blocked,alg:bounds}). \sbchg{ For technical reasons, the bounds we use in practice deviate in some places from the ones presented. We describe the details fully in \cref{sec:stopping}. 
} Additionally, we provide a \textsc{Python} implementation of our modified Cholesky decomposition and scripts to replicate the experiments of this paper.\footnote{The code is available at the following repository: \url{https://github.com/SimonBartels/acgp}} \subsection{The cumulative perspective} Using Bayes' rule, we can write $\log p(\boldsymbol y)$ equivalently as \begin{align} \label{eq:app_log_marginal_alternative} &\log p(\bm{y}) = -\frac{1}{2}\left(\log\det{\boldsymbol K_N}+\boldsymbol y^{\top}\inv{\boldsymbol K_N}\boldsymbol y+N\log(2\pi)\right)= \sum_{n=1}^N \log p(y_{n}\mid \boldsymbol y_{:n-1}). \end{align} For each potential stopping point $t$, we can decompose \cref{eq:app_log_marginal_alternative} into a sum of three terms: \begin{align*} \log p(\boldsymbol y) &= \underbrace{\sum_{n=1}^s \log p(y_{n}\mid \boldsymbol y_{:n-1})}_{A: \text{ fully processed}} +\!\!\! \underbrace{\sum_{n=s+1}^{t}\!\!\! \log p(y_{n}\mid \boldsymbol y_{:n-1})}_{B: \text{ partially processed}} +\!\!\! \underbrace{\sum_{n=t+1}^{N}\!\!\! \log p(y_{n}\mid \boldsymbol y_{:n-1})}_{C: \text{ remaining}}, \end{align*} where $s<t$. We will use the partially processed points between $s$ and $t$ to obtain unbiased upper and lower bounds on the expected value of $\log p(\boldsymbol y_{s+1:}\mid \boldsymbol y_{:s})$: \begin{align} \mathbb{E}[\mathcal{L}_t\mid\boldsymbol x_1,y_1,\dots \boldsymbol x_{s},y_{s}] \leq A+ \mathbb{E}[B+C \mid \boldsymbol x_1,y_1,\dots \boldsymbol x_{s},y_{s}] \leq \mathbb{E}[\mathcal{U}_t \mid \boldsymbol x_1,y_1,\dots \boldsymbol x_{s},y_{s}]. 
\end{align} \subsection{General bounds} The posterior of the $n$th observation conditioned on the previous ones is Gaussian with \begin{align*} p(y_{n}\mid \boldsymbol y_{:n-1}) &= \mathcal{N}(\GPmean[n-1]{n}, \GPvar[n-1]{n}) \\ \GPmean[n-1]{n} &\colonequals k(\boldsymbol x_n, \boldsymbol X_{:n-1})\inv{\boldsymbol K_{n-1}}\boldsymbol y_{:n-1} \\ \postk{\boldsymbol x_n}{\boldsymbol x_n}{n-1} &\colonequals k(\boldsymbol x_n, \boldsymbol x_n) - k(\boldsymbol x_n, \boldsymbol X_{:n-1})\inv{\boldsymbol K_{n-1}}k(\boldsymbol X_{:n-1}, \boldsymbol x_n), \end{align*} where we assumed (w.l.o.g.) that $\mu_0(\boldsymbol x)\colonequals 0$. Inspecting these expressions, one finds that \begin{align} \log\det{\boldsymbol K_N}&=\sum_{n=1}^N \log\left(\GPvar[n-1]{n}\right), \\\boldsymbol y^{\top}\inv{\boldsymbol K_N}\boldsymbol y&=\sum_{n=1}^N\frac{\left(y_n-\GPmean[n-1]{n}\right)^2}{\GPvar[n-1]{n}}. \end{align} Our strategy is to find function families $u$ (and $l$) which bound these summands from above (and below) in expectation, \begin{align*} l_{n,t}^d &\leq_E \log \postk{\boldsymbol x_n}{\boldsymbol x_n}{n-1}\leq_E u_{n,t}^d\\ l_{n,t}^q &\leq_E \frac{\left(y_n-\GPmean[n-1]{n}\right)^2}{\GPvar[n-1]{n}}\leq_E u_{n,t}^q, \end{align*} where $\leq_E$ denotes that the inequality holds in expectation. We will choose the function families such that the unseen variables interact only in a \emph{controlled} manner. More specifically, \begin{align} \label{eq:required_form} f^x_{n,t}(\boldsymbol x_n, y_n, \dots, \boldsymbol x_1, y_1)=\sum_{j=s+1}^n g_{t}^{f,x}(\boldsymbol z_n, \boldsymbol z_j; \boldsymbol z_1, \dots, \boldsymbol z_{s}), \end{align} with $f\in\{u,l\}$ and $x\in\{d,q\}$. The effect of this restriction becomes apparent when taking the expectation. 
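As a sanity check, the two chained-conditionals identities above are easy to verify numerically. The following sketch (plain \textsc{NumPy}, with a random positive-definite matrix standing in for $\boldsymbol K_N$) computes both sides directly:

```python
import numpy as np

rng = np.random.default_rng(1)
N = 8
A = rng.standard_normal((N, N))
K = A @ A.T + N * np.eye(N)   # stand-in for K_N = k(X, X) + noise
y = rng.standard_normal(N)

log_det, quad = 0.0, 0.0
for n in range(N):
    # posterior of observation n conditioned on the previous n observations
    K_prev = K[:n, :n]
    k_vec = K[:n, n]
    w = np.linalg.solve(K_prev, k_vec) if n > 0 else np.zeros(0)
    mean = w @ y[:n]
    var = K[n, n] - w @ k_vec
    log_det += np.log(var)
    quad += (y[n] - mean) ** 2 / var

assert np.isclose(log_det, np.linalg.slogdet(K)[1])
assert np.isclose(quad, y @ np.linalg.solve(K, y))
```

Both identities are exact consequences of the Schur-complement recursion behind the Cholesky decomposition, which is what allows computing them incrementally.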
The sum over the bounds becomes the sum of only two terms, a variance term and a covariance term; formally, \begin{align} &\mathbb{E}\left[\sum_{n=s+1}^N f^x_{n,t}(\boldsymbol z_n, \dots, \boldsymbol z_1)\mid\sigma(\boldsymbol z_1, \dots, \boldsymbol z_{s})\right] \\&=\left(N-s\right)\mathbb{E}\left[g(\boldsymbol z_{s+1}, \boldsymbol z_{s+1}, \boldsymbol z_{1}\dots, \boldsymbol z_s)\mid\sigma(\boldsymbol z_1, \dots, \boldsymbol z_{s})\right] \notag \\&+\left(N-s\right) \frac{N-s-1}{2} \mathbb{E}\left[g(\boldsymbol z_{s+1}, \boldsymbol z_{s+2}, \boldsymbol z_{1}\dots, \boldsymbol z_s)\mid\sigma(\boldsymbol z_1, \dots, \boldsymbol z_{s})\right]. \end{align} We can estimate this expectation from the observations obtained between $s$ and $t$: \begin{align} &\approx\frac{N-t}{t-s}\sum_{n=s+1}^t g(\boldsymbol z_n, \boldsymbol z_n, \boldsymbol z_{1}\dots, \boldsymbol z_{s}) \\&+\frac{2(N-t)}{t-s} \frac{N-s-1}{2} \sum_{i=1}^{\frac{t-s}{2}} g(\boldsymbol z_{s+2i}, \boldsymbol z_{s+2i-1}, \boldsymbol z_{1}\dots, \boldsymbol z_{s}). \notag \end{align} \subsection{Bounds on the log-determinant} Since the posterior variance of a Gaussian process can never increase with more data, the average of the (log) posterior variances is an estimator for an upper bound on the log-determinant. Hence, in this case, we simply ignore the interaction between the remaining variables and set $g(\boldsymbol x_n, \boldsymbol x_i)\colonequals \delta_{ni}\log\left(\GPvar{n}\right)$, where $\delta_{ni}$ denotes Kronecker's $\delta$. To obtain a lower bound, we use that for $c>0$ and $a\geq b\geq 0$, one can show that $\log\left(c+a-b\right)\geq\log\left(c+a\right)-\frac{b}{c}$; the smaller $b$, the better the bound. In our case, $c=\noisef{\boldsymbol x_n}$, $a=\postkt{\boldsymbol x_n}{\boldsymbol x_n}$ and $b=\postkt{\boldsymbol x_n}{\estX[n-1]}\inv{\left(\postkt{\estX[n-1]}{\estX[n-1]}+\noisef{\estX[n-1]}\right)}\postkt{\estX[n-1]}{\boldsymbol x_n}$. 
Underestimating the eigenvalues of $\postkt{\estX[n-1]}{\estX[n-1]}$ by $0$, we obtain a lower bound in which each quantity can be estimated. Formally, for any $s\leq t$, \begin{align} &\log\left(\GPvar[n-1]{n}\right)\geq \log\left(\GPvar{n}\right)-\sum_{i=s+1}^{n-1}\frac{\postkt{\boldsymbol x_n}{\boldsymbol x_i}^2}{\noisef{\boldsymbol x_n}\noisef{\boldsymbol x_i}}. \end{align} This bound can be worse than the deterministic lower bound $\log\sigma^2$; whether it is depends on how large $n$ is, how large the average correlation is, and how small $\log\sigma^2$ is. Denote with $\mu$ the estimator for the first addend and with $\rho$ the estimator for the second addend. We can determine the number of steps $p\colonequals n-s$ for which this bound is preferable by solving for the maximum of a quadratic: \begin{align} p\left(\mu-\frac{p-1}{2}\rho\right)\geq p\log\sigma^2. \end{align} The tipping point $\psi$ is \begin{align} \psi\colonequals\min\left(N, s+\left\lfloor\frac{\mu-\log\sigma^2}{\rho}+\frac{1}{2}\right\rfloor\right). \end{align} Hence, for $n>\psi$ we set $l^d_n\colonequals \log\sigma^2$. Observe that the smaller $\postkt{\boldsymbol x_j}{\boldsymbol x_{j+1}}^2$, the closer the bounds. This term represents the correlation of datapoints conditioned on the $s$ datapoints observed before. Thus, our bounds come together when incoming observations become independent conditioned on what was already observed. Essentially, $\postkt{\boldsymbol x_j}{\boldsymbol x_{j+1}}^2=0$ is the basic assumption underlying inducing-input approximations \parencite{quionero2005unifying}. \subsection{Bounds on the quadratic form} For an upper bound on the quadratic form, we apply a similar trick: \begin{align} \frac{x}{c+a-b}\leq \frac{x(c+b)}{c(c+a)}, \end{align} where $x\geq 0$. Further, we assume that, in expectation, the mean square error improves with more data. 
Formally, \begin{align} \frac{\left(y_j-\GPmean[j-1]{j}\right)^2}{\GPvar[j-1]{j}}\leq_E \frac{\left(y_j-\GPmean{j}\right)^2}{\noisef{\boldsymbol x_j}\left(\GPvar{j}\right)}\left(\noisef{\boldsymbol x_j}+\sum_{i=s+1}^{j-1}\frac{\left(\postkt{\boldsymbol x_j}{\boldsymbol x_i}\right)^2}{\noisef{\boldsymbol x_i}}\right), \end{align} where $\leq_E$ again indicates that the inequality holds only in expectation. For a lower bound, observe that \begin{align} \boldsymbol y^{\top}\inv{\boldsymbol K}\boldsymbol y=\boldsymbol y_{:s}^{\top}\inv{\boldsymbol K_{s}}\boldsymbol y_{:s}+\left(\boldsymbol y_{s+1:N}-\GPmeanX{\remX}\right)^{\top}\inv{\matQ[N]}\left(\boldsymbol y_{s+1:N}-\GPmeanX{\remX}\right), \end{align} where $\matQ \colonequals \postkt{\estX[j]}{\estX[j]}+\noisefm{\estX[j]}$, $j\geq s+1$, denotes the noisy posterior covariance matrix of $\estX[j]$ conditioned on $\boldsymbol X_{1:s}$. We use a trick we first encountered in \textcite{kim2018scalableStructureDiscovery}: $\boldsymbol y^{\top}\inv{\boldsymbol A}\boldsymbol y\geq 2\boldsymbol y^{\top}\boldsymbol b-\boldsymbol b^{\top}\boldsymbol A\boldsymbol b$ for any $\boldsymbol b$. For brevity, introduce $\boldsymbol e\colonequals \boldsymbol y_{s+1:N}-\GPmeanX{\remX}$. After applying the inequality with $\boldsymbol b\colonequals \inv{\operatorname{Diag}[\matQ[N]]}\boldsymbol e$, we obtain \begin{align} 2 \sum_{n=s+1}^N \frac{\left(y_n-\GPmean{n}\right)^2}{\GPvar{n}} -\sum_{n,n'=s+1}^N\frac{(y_n-\GPmean{n})}{\GPvar{n}}[\matQ[N]]_{nn'}\frac{(y_{n'}-\GPmean{n'})}{\GPvar{n'}}, \end{align} which is now in the form of \cref{eq:required_form}. Observe that the smaller the square error $(y_j-\GPmean{j})^2$, the closer the bounds. That is, if the model fit is good, the quadratic form can be estimated accurately. 
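The elementary inequalities used in this and the previous subsection are easy to probe numerically. The sketch below (plain \textsc{NumPy}; variable names are ours, not from the paper's code) checks all three on random instances:

```python
import numpy as np

rng = np.random.default_rng(0)
for _ in range(1000):
    c = rng.uniform(0.1, 5.0)        # noise-like term, c > 0
    b = rng.uniform(0.0, 5.0)
    a = b + rng.uniform(0.0, 5.0)    # ensures a >= b >= 0
    x = rng.uniform(0.0, 5.0)

    # lower bound used for the log-determinant: log(c+a-b) >= log(c+a) - b/c
    assert np.log(c + a - b) >= np.log(c + a) - b / c - 1e-12

    # upper bound used for the quadratic form: x/(c+a-b) <= x(c+b)/(c(c+a))
    assert x / (c + a - b) <= x * (c + b) / (c * (c + a)) + 1e-12

    # lower bound on a quadratic form: y^T A^{-1} y >= 2 y^T b - b^T A b
    n = 4
    M = rng.standard_normal((n, n))
    A = M @ M.T + np.eye(n)          # symmetric positive definite
    y = rng.standard_normal(n)
    v = y / np.diag(A)               # the diagonal choice used above
    assert y @ np.linalg.solve(A, y) >= 2 * y @ v - v @ A @ v - 1e-9
```

The last inequality follows from expanding $(\inv{\boldsymbol A}\boldsymbol y-\boldsymbol b)^{\top}\boldsymbol A(\inv{\boldsymbol A}\boldsymbol y-\boldsymbol b)\geq 0$, with equality when $\boldsymbol b=\inv{\boldsymbol A}\boldsymbol y$.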
\subsection{Using the bounds for stopping the Cholesky} \label{sec:stopping} \label{sec:bound_computation} We use the following stopping strategy: stop when the difference between the bounds becomes sufficiently small and their absolute values are far from zero. More precisely, given deterministic bounds $\mathcal{L}\leq x\leq \mathcal{U}$ on a number $x$ with \begin{align} \frac{\mathcal{U}-\mathcal{L}}{2\min(\Abs{\mathcal{U}},\Abs{\mathcal{L}})}\leq r \text{ and } \label{eq:stopping_condition_1} \\\operatorname{sign}{\mathcal{U}}=\operatorname{sign}{\mathcal{L}}, \label{eq:stopping_condition_2} \end{align} the relative error of the estimate $\frac{1}{2}(\mathcal{U}+\mathcal{L})$ is less than $r$, that is, $\left|\frac{\frac{1}{2}(\mathcal{U}+\mathcal{L})-x}{x}\right|\leq r$. \begin{remark} \label{remark:autodiff} In our experiments, we do not use $\frac{1}{2}(\mathcal{U}+\mathcal{L})$ as estimator, and instead use the biased estimator $(N-\tau)\frac{1}{\tau}\log p(\boldsymbol y_{:\tau})$. Since stopping occurs when log-determinant and quadratic form evolve roughly linearly, the two estimators are not far off each other. The main reason for using the biased estimator is of a technical nature; it is easier and faster to implement a custom backward function which can handle the in-place operations of our Cholesky implementation. \end{remark} \begin{remark} \label{remark:estimated_correlation} To estimate the average correlation between elements of the kernel matrix, we use all elements of the off-diagonal instead of only every second one. This has no effect on our main result, but it becomes important when developing PAC bounds. \end{remark} \begin{remark} \label{remark:advancing_bounds} The lower bound on the log-determinant and the upper bound on the quadratic form switch their form at a step $\psi$ (\cref{thm:log_det_lower_bound,thm:quad_upper_bound}). 
Currently, to prove our results, this requires $\psi$ to be $\mathcal{F}_{s}$-measurable, and for that reason we use estimators based only on inputs up to index $s$ to define $\psi$. However, a PAC-bound proof would allow conditioning on the event that the estimators (plus some $\epsilon$) overestimate their expected values with high probability. Under that condition, we could use the true expected value (which is $\mathcal{F}_{s}$-measurable) to define $\psi$. Hence, in our practical implementation, we use estimators based on inputs with indices up to $M$ to define $\psi$. \end{remark} The question remains how to use the bounds and stopping strategy to derive an approximation algorithm. We transform the exact Cholesky decomposition for that purpose. For brevity, denote $\boldsymbol L_{s}\colonequals \operatorname{chol}[k(\boldsymbol X_{:s}, \boldsymbol X_{:s})+\noisef{\boldsymbol X_{:s}}]$ and $\boldsymbol T_{s}\colonequals k(\boldsymbol X_{s+1:}, \boldsymbol X_{:s}){\boldsymbol L_{s}}^{-\top}$. For any $s\in \{1,\dots,N\}$: \begin{align} \boldsymbol L_N= \begin{bmatrix}\boldsymbol L_{s} & \boldsymbol 0\\ \boldsymbol T_{s} & \operatorname{chol}\left[k(\boldsymbol X_{s+1:}, \boldsymbol X_{s+1:})+\noisef{\boldsymbol X_{s+1:}}-\boldsymbol T_{s}\boldsymbol T_{s}^{\top}\right] \end{bmatrix} \end{align} One can verify that $\boldsymbol L_N$ is indeed the Cholesky factor of $\boldsymbol K_N$ by evaluating $\boldsymbol L_N\boldsymbol L_N^{\top}$. Observe that $k(\boldsymbol X_{s+1:}, \boldsymbol X_{s+1:})+\noisef{\boldsymbol X_{s+1:}}-\boldsymbol T_{s}\boldsymbol T_{s}^{\top}$ is the posterior covariance matrix of $\boldsymbol y_{s+1:}$ conditioned on $\boldsymbol y_{:s}$. Hence, in the step before the Cholesky of the posterior covariance matrix is computed, we can estimate our log-determinant bounds. Similar reasoning applies to solving the linear equation system. 
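The block identity is straightforward to check numerically. The following sketch (\textsc{NumPy} only, with a random positive-definite matrix standing in for the noisy kernel matrix $\boldsymbol K_N$) reconstructs $\boldsymbol L_N$ from the blocks:

```python
import numpy as np

rng = np.random.default_rng(2)
N, s = 6, 3
A = rng.standard_normal((N, N))
K = A @ A.T + N * np.eye(N)             # stand-in for k(X, X) + noise

L_s = np.linalg.cholesky(K[:s, :s])
T_s = np.linalg.solve(L_s, K[:s, s:]).T  # T_s = K_{21} L_s^{-T}
post = K[s:, s:] - T_s @ T_s.T           # posterior covariance block
L_N = np.block([
    [L_s, np.zeros((s, N - s))],
    [T_s, np.linalg.cholesky(post)],
])

assert np.allclose(L_N, np.linalg.cholesky(K))
```

By uniqueness of the Cholesky factor with positive diagonal, the blockwise construction and the direct factorization agree exactly.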
We can write \begin{align} \label{eq:recursive_LES} \boldsymbol \alpha_N= \begin{bmatrix} \boldsymbol \alpha_{s}\\ \inv{\operatorname{chol}\left[k(\boldsymbol X_{s+1:}, \boldsymbol X_{s+1:})+\noisef{\boldsymbol X_{s+1:}}-\boldsymbol T_{s}\boldsymbol T_{s}^{\top}\right]}\left(\boldsymbol y_{s+1:}-\boldsymbol T_{s}\boldsymbol \alpha_{s}\right) \end{bmatrix} \end{align} Now observe that $\boldsymbol T_{s}\boldsymbol \alpha_{s}=\GPmeanX{\boldsymbol X_{s+1:}}$. Hence, before solving the lower equation system (and before computing the posterior Cholesky), we can compute our bounds for the quadratic form. There are different options to implement the Cholesky decomposition. We use a blocked, row-wise implementation \parencite{george1986parallelCholesky}. For a practical implementation, see \cref{algo:chol_blocked} and \cref{alg:bounds}. \section{Experiments} \label{sec:experiments} We now examine the bounds and stopping strategy for ACGP\xspace{}. When running experiments without GPU support, all linear algebra operations are replaced by direct calls to the \textsc{OpenBLAS} library \parencite{wang2013OpenBLAS} for an efficient realization of \textit{in-place} operations. To still benefit from automatic differentiation, we use \textsc{PyTorch} \parencite{paszke2019pytorch} with a custom backward function for $\log p(\boldsymbol y)$ which wraps \textsc{OpenBLAS}. The details of our experimental setup can be found in \cref{sec:app_experimental_details}. \subsection{Performance on synthetic data} \label{sec:synthetic_experiment} ACGP\xspace{} will stop the computation when the posterior covariance matrix of the remaining points conditioned on the processed points is essentially diagonal. This scenario occurs, for example, when using a squared exponential kernel with a long lengthscale and small observational noise on a densely sampled dataset. To test ACGP\xspace in this scenario, we sample a function from a GP prior with zero mean and a squared exponential kernel with length scale $\log\ell=-2$. 
From this function, we uniformly sample $10^{12}$ observations $(\bm{x}, y)$ in the interval $[0,1]$ using an observation noise of $\sigma^2\colonequals 0.1$, that is, $y = f(\bm{x}) + \varepsilon$ with $\varepsilon\sim\mathcal{N}(0, 0.1)$; see \cref{fig:visualization}. This is a scenario where ACGP\xspace excels, since it does not need to load the dataset into memory in advance, whereas methods with at least linear memory complexity cannot even start the computation. \begin{figure} \caption{ The figure shows one of the sampled functions from the synthetic experiment in \cref{sec:synthetic_experiment} as well as the posterior predictive distribution recovered by ACGP\xspace. Since the entire dataset of $10^{12}$ observations is too large to visualize, we show only the data that were selected by ACGP\xspace before stopping; in this case just $4000$ points. The relative error on $\log p(\bm{y})$ for $10^4$ observations was $0.054$. Notice how, despite the larger relative error, the posterior process mean closely follows the actual underlying function. } \label{fig:visualization} \end{figure} The task is to estimate the true $\log p(\bm{y})$ of the full dataset, and we run ACGP\xspace{} with a relative error of $r=0.01$ and a blocksize of $1000$ to obtain this estimate. Since we cannot evaluate the actual $\log p(\bm{y})$ for all $10^{12}$ observations, we use the predicted and actual $\log p(\bm{y})$ for $10^4$ observations as a proxy for assessing the performance of ACGP\xspace{}. We repeat the experiment for 10 different random seeds. Recall that ACGP\xspace{} estimates $\mathbb{E}[\log p(\boldsymbol y)\mid \boldsymbol X_{:s}, \boldsymbol y_{:s}]$ as opposed to $\log p(\boldsymbol y)$ directly. Hence, there are two sources of error for ACGP\xspace{}: the deviation of $\log p(\boldsymbol y)$ from its expected value and the deviation of the empirical estimates from their expectations.\footnote{This shows the benefit of developing our theory further, to obtain probably-approximately-correct bounds. 
Such bounds introduce error-guarding constants to protect against fluctuations.} The average $\log p(\boldsymbol y_{:10^4})$ is $-2699.67\pm70.81$; since the standard deviation is already more than $1\%$ of the absolute mean, a relative error of $r=0.01$ will be hard to achieve. When run on all $10^{12}$ observations, ACGP\xspace{} stops after processing just $4600 \pm 1562$ datapoints on average, obtaining an actual relative error on the estimate of $\log p(\bm{y})$ of $0.047 \pm 0.034$. To decrease this error, one can either decrease the specified relative error of ACGP\xspace or increase the blocksize, which will lead to more stable predictions. For the experiments in the remainder of this paper, we choose the latter strategy and set the blocksize to $10^4$, which is also better suited for parallel computations. \input{sections/bound_plots.tex} \input{sections/hyperparameter_tuning.tex} \section{Conclusions} The Cholesky decomposition is the de facto way to invert matrices when training Gaussian processes, yet it tends to be treated as a black box. However, if one opens this black box, it turns out that the Cholesky decomposition computes the marginal log-likelihood of the full dataset and, crucially, in intermediate steps, the posteriors of unprocessed training data conditioned on the processed data. Making the community aware of this remarkable insight is one of the main contributions of our paper. Our main novelty is to use this insight to bound the (expected) marginal log-likelihood of the full dataset from only a subset. With only small modifications to this classic matrix decomposition, we can use these upper and lower bounds to stop the decomposition before all observations have been processed. This has the practical benefit that the kernel matrix $\boldsymbol{K}$ does not have to be computed prior to performing the decomposition, but can rather be computed on-the-fly. Empirical results indicate that the approach carries significant promise. 
In general, we find that exact GP inference leads to better-behaved optimization than approximations such as CGLB\xspace{} and inducing point methods, and that a well-optimized Cholesky implementation is surprisingly competitive in terms of performance. An advantage of our approach is that it is essentially parameter-free: the user has to specify only a requested numerical accuracy, and the computational demands are scaled accordingly. Finally, we note that ACGP\xspace{} is complementary to much existing work, and should be seen as an addition to the GP toolbox, rather than a substitute for existing tools. \subsubsection*{References} \printbibliography[heading=none] \appendix \onecolumn \renewcommand{\eqcomment}[1]{\\&\sslash\text{{\small\emph{#1}}}\notag} \section{Evolution of the log-marginal likelihood} \label{app:evidence} This section contains figures showing the progression of the log-marginal likelihood for five different permutations of the same datasets as used in \cref{sec:bound_experiments} of the main paper. \cref{fig:evidence_rbf} shows the results for the squared exponential kernel (\cref{eq:app_kernel_se}) with $\theta\colonequals 1$ and $\sigma^2\colonequals 10^{-3}$, and \cref{fig:evidence_ou} shows the results for the Ornstein-Uhlenbeck kernel (\cref{eq:app_kernel_ou}) using the same parameters. \newcommand{\evidencefig}[3]{ \begin{minipage}[b]{.5\textwidth} \centering \includegraphics[width=0.95\textwidth]{./figs/llh_progression/llh_#1_tight.pdf} \subcaption{ Log-marginal likelihood evolution for #2. } \label{fig:evidence_#1} \end{minipage} } \begin{figure} \caption{ The figure shows the log-marginal likelihood as a function of the size of the training set for the large datasets described in \cref{tbl:datasets} using the squared exponential kernel. See \cref{app:evidence} for a description of the experimental setup. 
} \label{fig:evidence_rbf} \end{figure} \begin{figure} \caption{The figure shows the log-marginal likelihood as a function of the size of the training set for the large datasets described in \cref{tbl:datasets} using the Ornstein-Uhlenbeck kernel. See \cref{app:evidence} for a description of the experimental setup. } \label{fig:evidence_ou} \end{figure} \FloatBarrier{} \section{Experimental details} \label{sec:app_experimental_details} \begin{table}[htb] \centering \caption{ Overview of all datasets used for the experiments in \cref{sec:experiments}. The total dataset size (training and testing) is denoted $N$, and $D$ denotes the dimensionality. } \begin{tabular}{lS[table-format=5]S[table-format=2]l} \toprule Key & {$N$} & {$D$} & Source \\\midrule \texttt{bike} & 17379 & 17 & \textcite{Fanaee2013BikeDataset}. \web{http://archive.ics.uci.edu/ml/datasets/bike+sharing+dataset} \\\texttt{elevators} & 16599 & 18 & \textcite{Camachol1998AileronsElevators}. \\\texttt{kin40k} & 40000 & 8 & \textcite{Schwaighofer2002kin40k}. \\\texttt{metro} & 48204 & 66 & No citation request. \web{http://archive.ics.uci.edu/ml/datasets/Metro+Interstate+Traffic+Volume} \\\texttt{pm25} & 43824 & 79 & \textcite{Liang2015pmDataset}. \web{http://archive.ics.uci.edu/ml/datasets/Beijing+PM2.5+Data} \\\texttt{poletelecomm} & 15000 & 26 & \textcite{Weiss1995Poletelecomm}. \\\texttt{protein} & 45730 & 9 & No citation request. \web{http://archive.ics.uci.edu/ml/datasets/Physicochemical+Properties+of+Protein+Tertiary+Structure} \\\texttt{pumadyn} & 8192 & 32 & No citation request. \web[this website]{https://www.cs.toronto.edu/~delve/data/pumadyn/desc.html} \\\bottomrule \end{tabular} \label{tbl:datasets} \end{table} For an overview of the datasets we use, see~\cref{tbl:datasets}. The datasets are all normalized to have zero mean and unit variance for each feature. We explore two different computing environments. 
For datasets smaller than \num{20000} data points, we ran our experiments on a single GPU. This is the same setup as in \textcite{artemev2021cglb}, with the difference that we use a \textsc{Titan RTX} whereas they used a \textsc{Tesla V100}. For datasets larger than \num{20000} datapoints, our setup differs from \textcite{artemev2021cglb}: we use only CPUs on machines where the kernel matrix still fits fully into memory. Specifically, we used machines running Ubuntu 18.04 with 50 Gigabytes of RAM and two \textsc{Intel Xeon E5-2670 v2} CPUs. \subsection{Bound quality experiments} For CGLB\xspace, we compute the bounds with a varying number of inducing inputs $M \colonequals \{512, 1024, 2048, 4096\}$ and measure the time it takes to compute the bounds. For ACGP\xspace, we define the blocksize $m \colonequals 256\cdot 40=\num{10240}$, which is the default \textsc{OpenBLAS}\xspace block size on our machines times the number of cores. This ensures that the sample size for our bounds is sufficiently large for accurate estimation, and at the same time the number of page-faults should be comparable to the default Cholesky implementation. We measure the elapsed time every time a block of data points is added to the processed dataset and the bounds are recomputed. We compare both methods using the squared exponential (SE) kernel and the Ornstein-Uhlenbeck (OU) kernel: \begin{align} k_\text{SE}(\boldsymbol x, \boldsymbol z)&\colonequals \theta \exp\left(-\frac{\norm{\boldsymbol x - \boldsymbol z}^2}{2\ell^2}\right) \label{eq:app_kernel_se} \\k_\text{OU}(\boldsymbol x, \boldsymbol z)&\colonequals \theta \exp\left(-\frac{\norm{\boldsymbol x - \boldsymbol z}}{\ell}\right), \label{eq:app_kernel_ou} \end{align} where we fix $\theta\colonequals 1$ and vary $\ell$ as $\log\ell\in \{-1,0,1,2\}$. We use a Gaussian likelihood and fix the noise to $\sigma^2\colonequals 10^{-3}$. 
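For reference, the two kernels are compactly expressed in code. The sketch below (our own \textsc{NumPy} helpers, not taken from the ACGP implementation) mirrors \cref{eq:app_kernel_se,eq:app_kernel_ou} for a single pair of inputs:

```python
import numpy as np

def k_se(x, z, theta=1.0, ell=1.0):
    """Squared exponential kernel k_SE(x, z)."""
    return theta * np.exp(-np.sum((x - z) ** 2) / (2 * ell ** 2))

def k_ou(x, z, theta=1.0, ell=1.0):
    """Ornstein-Uhlenbeck kernel k_OU(x, z)."""
    return theta * np.exp(-np.sqrt(np.sum((x - z) ** 2)) / ell)

x = np.array([0.0, 0.0])
z = np.array([3.0, 4.0])              # ||x - z|| = 5
assert np.isclose(k_se(x, z, ell=5.0), np.exp(-0.5))
assert np.isclose(k_ou(x, z, ell=5.0), np.exp(-1.0))
assert np.isclose(k_se(x, x), 1.0)    # k(x, x) = theta
```

Note the qualitative difference: the OU kernel decays with the distance itself and produces rough sample paths, while the SE kernel decays with the squared distance and produces smooth ones.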
\subsection{Hyper-parameter tuning} In this section, we describe our experimental setup for the hyper-parameter optimization experiments, which closely follows that of \textcite{artemev2021cglb}. We randomly split each dataset into a training set consisting of 2/3 of the examples, and a test set consisting of the remaining third. We use a Mat\'ern-$\frac{3}{2}$ kernel function and L-BFGS-B as the optimizer with \textsc{SciPy} \parencite{Virtanen2020SciPy} default parameters if not specified otherwise. All algorithms are stopped at the latest after 2000 optimization steps, after 12 hours of compute time, or when optimization has failed three times. We repeat each experiment five times with a different shuffle of the dataset and report the results in \cref{tbl:results,tbl:results_gpu}. For CGLB\xspace, it is necessary to decide on a number of inducing inputs. From the results reported by \textcite{artemev2021cglb}, it appears that using $M=2048$ inducing inputs yields the best trade-off between speed and performance, hence we use this value in our experiments. For the exact Cholesky and CGLB\xspace, the L-BFGS-B convergence criterion ``relative change in function value'' (\texttt{ftol}) is set to 0. For ACGP\xspace, we need to decide on both the desired relative error, $r$, and the block size $m$. We successively decrease the optimizer's tolerance \texttt{ftol} as $(2/3)^{\text{restart}+1}$, and we set the same value for $r$. That is, regardless of whether the optimization of ACGP\xspace stopped successfully or for abnormal reasons, the optimization restarts aiming for higher precision. The effect of this is that, early in the hyper-parameter optimization, ACGP\xspace will stop early, thus providing only an approximation to the optimal hyper-parameter values, but also saving computation. 
With each restart, ACGP\xspace increases the precision, ensuring that we get closer and closer to the optimal hyper-parameter values at the expense of approaching the computational demand of an exact GP. The block size $m$ is set to the same value as for the bound quality experiments, \cref{sec:bound_experiments}, $40\cdot 256=\num{10240}$, which is the number of cores times the \textsc{OpenBLAS}\xspace block size. This ensures that the sample size for our bounds is sufficiently large for accurate estimation, and at the same time the number of page-faults should be comparable to the default Cholesky implementation. Note that $m$ is a global parameter, independent of the dataset. Hence, natural choices for both $r$ and $m$ are determined by parameters of standard software, which have sensible, machine-dependent default values. ACGP\xspace can therefore be considered parameter-free. Differing from the previous section, for ACGP\xspace we use the biased estimator $(N-M)\log p(\boldsymbol y_{:M})/M$ instead of $\mathcal{U}/2+\mathcal{L}/2$ to approximate $\log p(\boldsymbol y)$ when stopping. Since stopping occurs when log-determinant and quadratic form evolve roughly linearly, the two estimators are not far off each other. The main reason for using the biased estimator is of a technical nature: for auto-differentiation, it is easier and faster to implement a custom backward function which can handle the in-place operations of our Cholesky implementation. This custom backward function costs roughly twice the computation of $\log p(\boldsymbol y)$, whereas the \textsc{Torch} default costs roughly six times as much. This shows that, when comparing to exact inference, auto-differentiation can be disadvantageous and make the Cholesky appear slower than it is. Regarding CGLB\xspace, computation time is dominated not by the gradient but by the function evaluation itself. 
\section{Additional results} \label{sec:app_additional_results} In this section, we report additional results for the hyper-parameter tuning experiments (\cref{subsec:hyperparameters}) as well as plots showing the quality of the bounds on the log-determinant term, the quadratic term, and the log-marginal likelihood (see \cref{subsec:bound_evolution}). \subsection{Additional results for hyper-parameter tuning} \label{subsec:hyperparameters} Denote with $N^*$ the number of test instances, and with $\mu$ and $\sigma^2$ the mean and variance approximations of a method. As performance metrics, we use the root mean square error (RMSE) $$\sqrt{\frac{1}{N^*}\sum_{n=1}^{N^*} (y_n^*-\mu(\boldsymbol x_n^*))^2}\ ,$$ the negative log predictive density (NLPD) $$\frac{1}{2N^*}\sum_{n=1}^{N^*} \left[\frac{(y_n^*-\mu(\boldsymbol x_n^*))^2}{\sigma^2(\boldsymbol x_n^*)}+\log\left(2\pi\sigma^2(\boldsymbol x_n^*)\right)\right] \ ,$$ and the negative marginal log-likelihood $-\log p(\boldsymbol y)$. \cref{tbl:results,tbl:results_gpu} summarize the results reported for each dataset, averaging over the outcomes of the final optimization step of each repetition. For each metric, we indicate whether a higher ($\uparrow$) or lower ($\downarrow$) value indicates a better result. The results for the exact GP regression are marked in italics to emphasize that these are results we are trying to approach, not to beat. As the other methods are all approximations to the exact GP, there is little hope of achieving better performance. The best result among the approximation methods for each dataset is highlighted in bold. \begin{table}[htb] \centering \caption{Summary of the CPU hyper-parameter tuning results from \cref{sec:hyperparameter_experiments}. For each metric, we report its final value over the course of optimization. For SVGP, we did not compute the exact marginal log-likelihoods, to save cluster time. 
\label{tbl:results}} \input{sup/results_table_cpu.tex} \end{table} \begin{table}[htb] \centering \caption{Summary of the GPU hyper-parameter tuning results from \cref{sec:hyperparameter_experiments}. For each metric, we report its final value over the course of optimization. We did not compute the exact marginal log-likelihoods, to save cluster time. \label{tbl:results_gpu}} \input{sup/results_table_gpu.tex} \end{table} \subsection{Additional plots for hyper-parameter tuning} \label{subsec:hyperparameter_plots} The plots for the hyper-parameter optimization are shown in figures~\ref{fig:hyp_metro_lml}--\ref{fig:hyp_pumadyn_nlpd}. Each point in the plots corresponds to one accepted optimization step of the given method, i.e., to a particular set of hyper-parameters during the optimization. In figures~\ref{fig:hyp_metro_rmse}--\ref{fig:hyp_pumadyn_rmse}, we show the root-mean-square error (RMSE) that each method obtains on the test set at each optimization step, and figures~\ref{fig:hyp_metro_nlpd}--\ref{fig:hyp_pumadyn_nlpd} show the same for NLPD. In figures~\ref{fig:hyp_metro_lml}--\ref{fig:hyp_pumadyn_lml}, we show the log-marginal likelihood, $\log p(\bm{y})$, that an exact GP would have achieved with the specific set of hyper-parameters at each optimization step for each method.
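For concreteness, the RMSE and NLPD metrics defined above can be computed as follows. This is a NumPy sketch under the stated Gaussian-predictive assumption; the array names are ours:

```python
import numpy as np

def rmse(y_test, mu):
    # Root mean square error over the N* test instances.
    return np.sqrt(np.mean((y_test - mu) ** 2))

def nlpd(y_test, mu, var):
    # Negative log predictive density for Gaussian predictive marginals,
    # matching the formula in the text: the factor 1/(2 N*) distributes
    # over both the squared-error and the log-normalizer term.
    return 0.5 * np.mean((y_test - mu) ** 2 / var + np.log(2 * np.pi * var))
```

Here `mu` and `var` are the predictive means $\mu(\boldsymbol x_n^*)$ and variances $\sigma^2(\boldsymbol x_n^*)$ evaluated at the test inputs.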
\begin{figure} \caption{$\log p(\bm{y})$ for the \texttt{metro} dataset.} \label{fig:hyp_metro_lml} \caption{$\log p(\bm{y})$ for the \texttt{pm25} dataset.} \label{fig:hyp_pm25_lml} \caption{$\log p(\bm{y})$ for the \texttt{protein} dataset.} \label{fig:hyp_protein_lml} \caption{$\log p(\bm{y})$ for the \texttt{kin40k} dataset.} \label{fig:hyp_kin40k_lml} \caption{$\log p(\bm{y})$ for the \texttt{bike} dataset.} \label{fig:hyp_bike_lml} \caption{$\log p(\bm{y})$ for the \texttt{elevators} dataset.} \label{fig:hyp_elevators_lml} \caption{$\log p(\bm{y})$ for the \texttt{pole} dataset.} \label{fig:hyp_pole_lml} \caption{$\log p(\bm{y})$ for the \texttt{pumadyn32nm} dataset.} \label{fig:hyp_pumadyn_lml} \end{figure} \begin{figure} \caption{RMSE for the \texttt{metro} dataset.} \label{fig:hyp_metro_rmse} \caption{RMSE for the \texttt{pm25} dataset.} \label{fig:hyp_pm25_rmse} \caption{RMSE for the \texttt{protein} dataset.} \label{fig:hyp_protein_rmse} \caption{RMSE for the \texttt{kin40k} dataset.} \label{fig:hyp_kin40k_rmse} \caption{RMSE for the \texttt{bike} dataset.} \label{fig:hyp_bike_rmse} \caption{RMSE for the \texttt{elevators} dataset.} \label{fig:hyp_elevators_rmse} \caption{RMSE for the \texttt{pole} dataset.} \label{fig:hyp_pole_rmse} \caption{RMSE for the \texttt{pumadyn32nm} dataset.} \label{fig:hyp_pumadyn_rmse} \end{figure} \begin{figure} \caption{NLPD for the \texttt{metro} dataset.} \label{fig:hyp_metro_nlpd} \caption{NLPD for the \texttt{pm25} dataset.} \label{fig:hyp_pm25_nlpd} \caption{NLPD for the \texttt{protein} dataset.} \label{fig:hyp_protein_nlpd} \caption{NLPD for the \texttt{kin40k} dataset.} \label{fig:hyp_kin40k_nlpd} \caption{NLPD for the \texttt{bike} dataset.} \label{fig:hyp_bike_nlpd} \caption{NLPD for the \texttt{elevators} dataset.} \label{fig:hyp_elevators_nlpd} \caption{NLPD for the \texttt{pole} dataset.} \label{fig:hyp_pole_nlpd} \caption{NLPD for the \texttt{pumadyn32nm} dataset.} \label{fig:hyp_pumadyn_nlpd} \end{figure} 
\subsection{Additional plots for the bound quality experiments} \label{subsec:bound_evolution} \input{sections/appendix_bound_plots_stacked.tex} \section{Notation} We use a \textsc{python}-inspired index notation, abbreviating for example $[y_1, \ldots, y_{n}]^{\top}$ as $\bm{y}_{:n}$---observe that the indexing starts at 1. Indexing binds before any other operation, such that $\inv{\boldsymbol K_{:s, :s}}$ is the inverse of $\boldsymbol K_{:s, :s}$ and \emph{not} all elements up to $s$ of $\inv{\boldsymbol K}$. For $s \in \{1,\dots, N\}$ define $\mathcal{F}_s\colonequals\sigma(\boldsymbol x_1, y_1, \dots, \boldsymbol x_s, y_s)$ to be the $\sigma$-algebra generated by $\boldsymbol x_1, y_1, \dots, \boldsymbol x_s, y_s$. With respect to the main article, we change the letter $t$ to $M$. The motivation for the former notation was to highlight the role of the variable as a subset size, whereas in this part the focus is on $M$ as a stopping time. \section{Proof Sketch} \label{sec:proof_sketch} In this section of the appendix, we provide additional intuition on the theorems and proofs for the theory behind ACGP\xspace{}. \input{sections/method_long.tex} \input{sections/pseudo_code.tex} \input{sections/bound_code.tex} \section{Assumptions} \begin{assumption} \label{assume:exchangeability} \label{assumption:exchangeability} Let $(\Omega, \mathcal{F}, \mathbb{P})$ be a probability space and let $(\boldsymbol x_j, y_j)_{j=1}^N$ be a sequence of independent and identically distributed random vectors with $\boldsymbol x:\Omega\rightarrow\mathbb{R}^D$ and $y:\Omega\rightarrow\mathbb{R}$.
\end{assumption} \begin{assumption} \label{assume:expected_quadratic_form_assumptions} \label{assumption:expected_quadratic_form_assumptions} For all $s,i,j$ with $s< i\leq j\leq N$ and all functions $f(\boldsymbol x_j,\boldsymbol x_i;\boldsymbol x_1, \dots \boldsymbol x_{s})\geq 0$ \begin{align} \label{eq:expected_quadratic_form_assumptions} \mathbb{E}\left[f(\boldsymbol x_j, \boldsymbol x_i)\left(y_j-\GPmean[j-1]{j}\right)^2\mid\mathcal{F}_{s}\right] \leq \mathbb{E}\left[f(\boldsymbol x_j, \boldsymbol x_i)\left(y_j-\GPmean[s]{j}\right)^2\mid\mathcal{F}_{s}\right] \end{align} where $f(\boldsymbol x_j, \boldsymbol x_i)\in\left\{\frac{1}{\GPvar{j}}, \frac{\postkt{\boldsymbol x_j}{\boldsymbol x_i}^2}{(\GPvar{j})\noisef{\boldsymbol x_j}\noisef{\boldsymbol x_i}}\right\}$. \end{assumption} That is, we assume that, in expectation, the estimator improves with more data. Note that $f$ cannot depend on any entries of $\boldsymbol y$. \section{Main Theorem} \label{sec:main_theorem_proof} This section restates \cref{thm:main} and connects the different proofs in the sections to follow. \begin{theorem} Assume that \cref{assume:exchangeability} and \cref{assume:expected_quadratic_form_assumptions} hold. For any even $m\in\{2, 4, \dots, N-2\}$ and any $s \in \{1, \dots, N-m\}$, the bounds defined in \cref{eq:bound_ud,thm:log_det_lower_bound,thm:quad_lower_bound,thm:quad_upper_bound} hold in expectation: \begin{align*} \mathbb{E}[\mathcal{L}_D\mid \mathcal{F}_{s}]\leq \mathbb{E}[\log(\det{\boldsymbol K}) \mid \mathcal{F}_{s}]\leq \mathbb{E}[\mathcal{U}_D\mid \mathcal{F}_{s}] \text{ and} \\\mathbb{E}[\mathcal{L}_Q\mid \mathcal{F}_{s}]\leq \mathbb{E}[\boldsymbol y^{\top}\inv{\boldsymbol K}\boldsymbol y \mid \mathcal{F}_{s}]\leq \mathbb{E}[\mathcal{U}_Q\mid \mathcal{F}_{s}]\ . \end{align*} \end{theorem} \begin{proof} Follows from \cref{thm:log_det_lower_bound,thm:quad_lower_bound,thm:quad_upper_bound,lemma:decreasing_expectation}.
\end{proof} \section{Proof for the Lower Bound on the Determinant} \input{sup/det_lower_bound_proof.tex} \section{Proof for the Upper Bound on the Quadratic Form} \input{sup/quad_upper_bound_proof.tex} \section{Proof for the Lower Bound on the Quadratic Form} \input{sup/quad_lower_bound_proof.tex} \section{Utility Proofs} \input{sup/utility_proofs.tex} \begin{lemma}[Link between the Cholesky and Gaussian process regression] \label{lemma:cholesky_and_gp_variance} Denote with $\boldsymbol C_N$ the Cholesky decomposition of $\boldsymbol K=\boldsymbol K_N+\sigma^2\boldsymbol I_N$, so that $\boldsymbol C_N\boldsymbol C_N^{\top}=\boldsymbol K$. The $n$-th diagonal element of $\boldsymbol C_N$, squared, equals $\GPvar[n-1]{n}$: \[ [\boldsymbol C_N]_{nn}^2=\GPvar[n-1]{n} \, . \] \end{lemma} \begin{proof} With slight abuse of notation, define $\boldsymbol C_1\colonequals \sqrt{k(\boldsymbol x_1, \boldsymbol x_1)+\sigma^2}$ and $$\boldsymbol C_N \colonequals \begin{bmatrix} \boldsymbol C_{N-1} & \boldsymbol 0 \\ \boldsymbol k_N^{\top}\boldsymbol C_{N-1}^{-\top} & \sqrt{k(\boldsymbol x_N, \boldsymbol x_N)+\sigma^2-\boldsymbol k_N^{\top}\inv{(\boldsymbol K_{N-1}+\sigma^2\boldsymbol I_{N-1})}\boldsymbol k_N} \end{bmatrix}.$$ We will show that the lower triangular matrix $\boldsymbol C_N$ satisfies $\boldsymbol C_N\boldsymbol C_N^{\top} = \boldsymbol K_N+\sigma^2\boldsymbol I_N$. Since the Cholesky decomposition is unique \parencite[Theorem~4.2.7]{golub2013matrix4}, $\boldsymbol C_N$ must be the Cholesky decomposition of $\boldsymbol K_N+\sigma^2\boldsymbol I_N$. Furthermore, by definition of $\boldsymbol C_N$, $[\boldsymbol C_N]_{NN}^2=k(\boldsymbol x_N, \boldsymbol x_N)+\sigma^2-\boldsymbol k_N^{\top}\inv{(\boldsymbol K_{N-1}+\sigma^2\boldsymbol I_{N-1})}\boldsymbol k_N$. The statement then follows by induction.
To remain within the text margins, define $$x\colonequals \boldsymbol k_N^{\top}\boldsymbol C_{N-1}^{-\top}\boldsymbol C_{N-1}^{\!-1}\boldsymbol k_N+k(\boldsymbol x_N, \boldsymbol x_N)+\sigma^2-\boldsymbol k_N^{\top}\inv{(\boldsymbol K_{N-1}+\sigma^2\boldsymbol I_{N-1})}\boldsymbol k_N.$$ We want to show that $\boldsymbol C_N\boldsymbol C_N^{\top} = \boldsymbol K_N+\sigma^2\boldsymbol I_N$. \begin{align*} \boldsymbol C_{N}\boldsymbol C_{N}^{\top} &= \begin{bmatrix} \boldsymbol C_{N-1} & \boldsymbol 0 \\ \boldsymbol k_N^{\top}\boldsymbol C_{N-1}^{-\top} & \sqrt{k(\boldsymbol x_N, \boldsymbol x_N)+\sigma^2-\boldsymbol k_N^{\top}\inv{(\boldsymbol K_{N-1}+\sigma^2\boldsymbol I_{N-1})}\boldsymbol k_N} \end{bmatrix}\\ &\quad\cdot\begin{bmatrix} \boldsymbol C_{N-1}^{\top} & \boldsymbol C_{N-1}^{\!-1}\boldsymbol k_N \\ \boldsymbol 0^{\top} & \sqrt{k(\boldsymbol x_N, \boldsymbol x_N)+\sigma^2-\boldsymbol k_N^{\top}\inv{(\boldsymbol K_{N-1}+\sigma^2\boldsymbol I_{N-1})}\boldsymbol k_N} \end{bmatrix} \\&=\begin{bmatrix} \boldsymbol C_{N-1}\boldsymbol C_{N-1}^{\top} & \boldsymbol C_{N-1}\boldsymbol C_{N-1}^{\!-1}\boldsymbol k_N\\ \boldsymbol k_N^{\top} \boldsymbol C_{N-1}^{-\top} \boldsymbol C_{N-1}^{\top} & x \end{bmatrix} \\&=\begin{bmatrix} \boldsymbol K_{N-1} +\sigma^2\boldsymbol I_{N-1} & \boldsymbol k_N\\ \boldsymbol k_N^{\top} & x \end{bmatrix}. \end{align*} The term $x$ can be simplified further: \begin{align*} x&=\boldsymbol k_N^{\top}\boldsymbol C_{N-1}^{-\top}\boldsymbol C_{N-1}^{\!-1}\boldsymbol k_N+k(\boldsymbol x_N, \boldsymbol x_N)+\sigma^2-\boldsymbol k_N^{\top}\inv{(\boldsymbol K_{N-1}+\sigma^2\boldsymbol I_{N-1})}\boldsymbol k_N \\&=\boldsymbol k_N^{\top}\inv{(\boldsymbol K_{N-1}+\sigma^2\boldsymbol I_{N-1})}\boldsymbol k_N+k(\boldsymbol x_N, \boldsymbol x_N)+\sigma^2-\boldsymbol k_N^{\top}\inv{(\boldsymbol K_{N-1}+\sigma^2\boldsymbol I_{N-1})}\boldsymbol k_N \\&=k(\boldsymbol x_N, \boldsymbol x_N)+\sigma^2, \end{align*} where the second equality uses the induction hypothesis $\boldsymbol C_{N-1}\boldsymbol C_{N-1}^{\top}=\boldsymbol K_{N-1}+\sigma^2\boldsymbol I_{N-1}$. Hence $\boldsymbol C_N\boldsymbol C_N^{\top} = \boldsymbol K_N+\sigma^2\boldsymbol I_N$, as claimed. \end{proof} \end{document}
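As a numerical sanity check of the lemma above: the squared diagonal entries of the Cholesky factor of the regularized kernel matrix equal the sequential GP predictive variances. This NumPy sketch uses an illustrative squared-exponential kernel and problem size of our choosing:

```python
import numpy as np

# Check that [C]_{nn}^2 equals the predictive variance of point n given
# points 1..n-1 (plus noise), i.e. the statement of the lemma.
rng = np.random.default_rng(0)
N, sigma2 = 8, 0.1
X = rng.normal(size=(N, 1))
K = np.exp(-0.5 * (X - X.T) ** 2)          # squared-exponential kernel (illustrative)
C = np.linalg.cholesky(K + sigma2 * np.eye(N))  # lower-triangular factor

for n in range(1, N):
    k_n = K[:n, n]                          # cross-covariances to point n
    A = K[:n, :n] + sigma2 * np.eye(n)      # regularized kernel of first n points
    var_n = K[n, n] + sigma2 - k_n @ np.linalg.solve(A, k_n)
    assert np.isclose(C[n, n] ** 2, var_n)  # lemma holds numerically
```

The loop re-derives each predictive variance from scratch, whereas the Cholesky factorization produces all of them in one pass, which is the observation ACGP\xspace exploits.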
Dynamics of gene expression associated with arsenic uptake and transport in rice during the whole growth period

Dandan Pan, Jicai Yi, Fangbai Li, Xiaomin Li, Chuanping Liu, Weijian Wu & Tingting Tao

BMC Plant Biology, volume 20, Article number: 133 (2020)

Genes associated with arsenite uptake and transport in rice plants (i.e., OsLsi1, OsLsi2, OsLsi3, OsLsi6 and OsABCC1) have been identified to date. However, their expression over time during the whole growth period of rice under arsenite stress conditions is still poorly understood. In this study, the dynamics of gene expression associated with arsenite transport and arsenic concentrations in different organs of rice were investigated to determine the critical period(s) of arsenite uptake and translocation regulated by gene expression during the whole growth period. The relative expression of OsLsi2 and OsLsi1 in the roots was upregulated and reached its highest values (2^−ΔΔCt = 4.04 and 1.19, respectively) at the jointing stage (9 weeks after transplantation), at which point the arsenic concentration in roots was also at its highest, 144 mg/kg. Between 45.1% and 61.2% of total arsenic accumulated in the roots during the seedling to heading stages (3–16 weeks), which was mainly associated with the relatively high expression of OsABCC1 (1.50–7.68), resulting in arsenic located in the vacuoles of roots. Subsequently, the As translocation factor from root to shoot increased over time from the heading to milky ripe stages (16–20 weeks), and 74.3% of the arsenic accumulated in shoots at the milk stage.
Such an increase in arsenic accumulation in shoots was likely related to the findings that (i) OsABCC1 expression in roots was suppressed to 0.14–0.75 in 18–20 weeks; (ii) OsLsi3 and OsABCC1 expression in nodes I, II, and III was upregulated to 4.01–25.8 and 1.59–2.36, respectively, in 16–20 weeks; and (iii) OsLsi6 and OsABCC1 expression in leaves and husks was significantly upregulated to 2.03–5.26 at 18 weeks. The jointing stage is the key period for the expression of arsenite-transporting genes in roots, and the heading to milky ripe stages are the key period for the expression of arsenite-transporting genes in shoots, both of which should be considered for regulation during safe rice production in arsenic-contaminated paddy soil.

Arsenic (As) contamination in soils and water has become a serious environmental problem, especially in South and Southeast Asia [1, 2]. Mining and industrial activities are the main sources of As contamination in the environment [3,4,5]. The As content in rice grains produced from contaminated sites in China, India and Korea can be as high as 0.77–0.85 mg/kg [6, 7]. In paddy soil, arsenite (As(III), H3AsO3) is the predominant species of As that is taken up by rice [8,9,10]. Understanding the regulation and key periods of As(III) uptake and transport by rice is important to developing control strategies for safe rice production in As-contaminated soils. To date, the genes that have been identified to be associated with As(III) uptake and transport in rice plants are the same as those for silicon (Si) uptake and transport because arsenite is a chemical analogue of silicic acid. In rice roots, As(III) is inadvertently taken up and transported via the silicic acid transporters OsLsi1 and OsLsi2 [8, 11, 12].
OsLsi1 is preferentially distributed on the distal side of Casparian bands, passively transporting As(III) into root cells; OsLsi2, localized on the proximal side of Casparian bands, actively transports As(III) from root cells to apoplast toward xylem [8, 13, 14]. Once transported into the root cells, As(III) can be either complexed with phytochelatins (PCs) and then sequestered in vacuoles for detoxification [15, 16] or transported to stems and leaves by transpirational flow through the xylem vessels of rice [17]. OsABCC1, a C-type ABC (ATP-binding cassette) transporter localized in the tonoplast, is responsible for As vacuolar compartmentalization [16]. OsABCC1 can be expressed in roots, stems, leaves and husks of rice, and sequestering As in vacuoles is important in reducing the allocation of As to rice grains [16]. Nodes in graminaceous plants control the distribution of mineral elements in different tissues of shoots, including essential and toxic elements [18, 19]. In the nodes of rice, three transporters (i.e., OsLsi6, OsLsi2, and OsLsi3) are involved in the intervascular transport of As(III) from nodes to panicles [20,21,22]. OsLsi6, a plasma membrane-localized Si/As(III) channel, is mainly expressed at the xylem transfer cells of enlarged vascular bundles (EVBs) [23]. OsLsi2 and OsLsi3 are localized at the distal side of the bundle sheath of EVBs and parenchyma cells between EVBs and diffuse vascular bundles (DVBs), respectively [22]. As(III) in the xylem of EVBs can be selectively unloaded by OsLsi6 and then reloaded to the xylem of DVBs by OsLsi2 and OsLsi3, leading to preferential As distribution to panicles through xylem vessels [22, 23]. In addition, OsLsi6 can be found in leaves and nodes and is responsible for As(III) transport out of xylem into the tissues of leaf and node [8]. 
A number of studies have explored As(III) uptake and transport in rice plants [19, 22, 24], the majority of which mainly focused on the seedling or maturing stage of rice growth [25,26,27]. For example, many experiments implemented to identify the As(III) transporters in rice (e.g., OsLsi1 and OsLsi2) were performed during the seedling stage [25,26,27]. Arsenic is mainly transported into the caryopsis during the grain filling stage [28], which is considered to be the key stage to take measures to reduce As uptake in rice [29]. However, As(III) can be taken up by rice during the whole growth period, and As(III) transport in different organs and/or tissues of rice is mediated by various transporters, as mentioned above. The expression of genes for these transporters is important to regulating As accumulation in grains. However, gaps in our understanding remain with regard to the dynamics in gene expression of As(III) uptake and transport during the whole growth period. Thus, the aim of the present study was to investigate the dynamics of gene expression of As(III)-related transporters as well as the characteristics of As(III) uptake and accumulation in different organs of rice during the whole growth period. The results obtained can provide a better understanding of the As(III) uptake and transport regulated by gene expression in different parts of rice, which would be useful to guide As mitigation strategies in As-contaminated paddy soil.

Arsenic distribution in rice plants during the whole growth period

Generally, the total As concentrations in different organs of rice ranked in the following order: root > stem ≥ leaf > husk > brown rice in the +As treatment (Fig. 1). The As concentration in roots increased over time from the seedling to jointing stages (3–9 weeks) and reached its highest value of 144 mg/kg at the jointing stage (9 weeks) (Fig. 1a).
During the heading to milk stages (16–20 weeks), the As concentration in roots decreased from 126 mg/kg to 56.1 mg/kg, while those in stems and leaves increased from 16.1 mg/kg to 42.7 mg/kg and from 30.9 mg/kg to 63.6 mg/kg, respectively (Fig. 1a-1c). At the same time, the As concentrations in husks were higher than those in brown rice during the heading to milk stages (Fig. 1d). The results in Figure S1 show that the biomasses of roots, stems, leaves and grains in the +As treatment were similar to those in the CK treatment, except for the decrease in root and leaf biomass at 12 weeks (P < 0.05) and the decrease in grain yields at 20 weeks (P < 0.05).

The As concentration in different organs of rice plants during the whole growth period. a Root, b stem, c leaf, d husk and brown rice. Plants were grown under hydroponic conditions with 5 μM NaAsO2 (+As treatment) or without it (the control, CK). Data are presented as the mean ± SE (n = 3). Different letters with colors corresponding to their respective lines indicate significant differences (P < 0.05) among values at different time intervals in the +As treatment.

In the +As treatment, the total As content in roots was the highest at 938 μg at the heading stage (16 weeks), while those in stems and leaves were the highest at 889 μg and 1116 μg at the milk stage, respectively (Figure S2a). During the seedling to heading stages (3–16 weeks), 45.1–61.2% of the As taken up by rice accumulated in roots, while 66.4–74.3% accumulated in shoots (particularly 40.6–44.4% in leaves) during the flowering to milk stages (Fig. 2a). The translocation factor (TF) remained stable at a range of 0.49–0.63 before the booting stage (12 weeks), then decreased to a minimum value of 0.14 at the heading stage (16 weeks), and then increased linearly over time to a maximum value of 0.82 at the milk stage (20 weeks) (Fig. 2b).
These results indicate that the seedling to heading stages are the period for As uptake and accumulation in roots, while the heading to milk stages are the period for As transport from roots to shoots during the whole growth period of rice.

a Percentage of As content in the different organs of rice plants and b translocation factor (TF) during the whole growth period of the +As treatment. Columns and line labeled by different letters are significantly different at a P < 0.05 level among values at different time intervals.

Expression of arsenite-related genes in roots

The results in Fig. 3 show that the OsLsi1 and OsLsi2 genes are constitutively expressed in roots during the whole growth period of the +As treatment, which is consistent with previously reported results [14, 26]. In this study, their relative expression varied with time. While the relative expression of the OsLsi1 and OsLsi2 genes was significantly suppressed to < 0.13 at 6 and 12 weeks (P < 0.05), their expression was disinhibited or promoted to 1.19 and 4.04 at 9 weeks and to 0.67 and 1.23 at 16 weeks, respectively, followed by a decline over time after 16 weeks.

Relative expression of OsLsi1, OsLsi2 and OsABCC1 genes in roots of rice in the +As treatment relative to those in the control treatment during the whole growth period. Data are presented as the mean ± SE (n = 3). Lines labeled by * with colors corresponding to their respective lines are significantly different at a P < 0.05 level in comparison to the control treatment.

The relative expression of the OsABCC1 gene in roots was obviously promoted (≥ 1.48) during the seedling to heading stages and had the maximum value of 7.68 at the jointing stage (9 weeks), whereas it was suppressed to below 0.75 after the heading stage. These results suggested that As sequestering in vacuoles of roots is active during the period of arsenite uptake and accumulation in roots (seedling to heading stages).
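The relative expression values quoted throughout (e.g., 7.68 for OsABCC1 at the jointing stage) are 2^−ΔΔCt values from quantitative PCR. A minimal sketch of that standard calculation follows; the Ct numbers are invented for illustration and are not the study's data:

```python
def relative_expression(ct_target_treated, ct_ref_treated,
                        ct_target_control, ct_ref_control):
    # 2^-ddCt method: normalize the target gene to a reference gene within
    # each treatment, then compare the +As treatment to the control (CK).
    d_ct_treated = ct_target_treated - ct_ref_treated
    d_ct_control = ct_target_control - ct_ref_control
    dd_ct = d_ct_treated - d_ct_control
    return 2.0 ** -dd_ct

# invented Ct values: a result > 1 indicates upregulation under +As
print(relative_expression(22.0, 20.0, 24.0, 20.0))  # 4.0
```

A value of 1.0 means no change relative to the control; values below 1.0 correspond to the suppressed expression reported after the heading stage.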
In addition, the relative expression of OsABCC1 in roots was linearly positively related to those of OsLsi1 and OsLsi2 in roots (Figure S3), which indicated that if more As(III) uptake occurs in roots, then more As(III) is accumulated in the vacuoles of roots.

Expression of arsenite-related genes in shoots

In the basal stems, the relative expression of OsABCC1 was maintained at a range of 0.48–1.25 during the whole growth period, while that of OsLsi6 increased from 0.66 at the seedling stage to the highest value of 1.65 at the heading stage and then significantly decreased to 0.14 (P < 0.05) at the milk stage (Fig. 4a). Such an inhibition of gene expression of OsLsi6 in the basal stems at 20 weeks could result in more As accumulated in the unelongated stems and more As transported to the bottom leaves, which are connected to the unelongated nodes in the basal stem via xylem vessels.

Relative expression of OsLsi6, OsLsi2, OsLsi3 and OsABCC1 genes in (a) basal stem, (b) node III, (c) node II, and (d) node I of rice in the +As treatment relative to those in the control treatment during the whole growth period. Data are presented as the mean ± SE (n = 3). Lines labeled by * with colors corresponding to their respective lines are significantly different at a P < 0.05 level in comparison to the control treatment.

In nodes III, II, and I, the relative expression of OsLsi3 and OsABCC1 could be upregulated to 4.01–25.8 and 1.59–2.36, respectively, for 16–20 weeks, while the majority of OsLsi6 relative expression was < 1.0 and those of OsLsi2 were only promoted to 1.36–1.50 in node III (Fig. 4b-d). On the other hand, the expression levels of OsABCC1, OsLsi6, OsLsi3, and OsLsi2 were also suppressed (< 1.0) sometimes, particularly in nodes III and I at 18 weeks (Fig. 4b and d).
The inhibition of the gene expression of OsLsi6, OsLsi3, and OsLsi2 at 18 weeks in node I could retain more As accumulated in node I or transport it to the top first leaf, while the promotion of their gene expression in the nodes may lead to more As transported to grains. For the bottom first leaf, significant upregulation of gene expression (P < 0.05) was observed for the OsLsi6 gene at 12 and 18 weeks and for the OsABCC1 gene at 3 and 18 weeks (Fig. 5a). For the top leaves, the relative expression of OsLsi6 and OsABCC1 was only observed to be upregulated in the top first leaf at 20 weeks, while the remainder was maintained at a range of 0.38–1.0 for 12–20 weeks (Fig. 5b and c). In the husks, the relative expression of OsLsi6 and OsABCC1 was significantly promoted to 5.26 and 3.97 at 18 weeks, respectively (P < 0.05) and then suppressed to < 0.54 at 20 weeks (Fig. 5d). Hence, the gene expression of OsLsi6 and OsABCC1 was mainly upregulated in the bottom first leaf and husks but not in the flag leaves (i.e., top first and second leaves) from the booting to milk stages. Relative expression of OsLsi6 and OsABCC1 genes in (a) bottom first leaf, (b) top second leaf, (c) top first leaf, and (d) husk of rice in the +As treatment relative to those in the control treatment during whole growth period. Data are presented as the mean ± SE (n = 3). Lines labeled by * with colors corresponding to their respective lines are significantly different at a P < 0.05 level in comparison to the control treatment Since arsenite is generally complexed with PCs before being sequestered in the vacuoles for detoxification [16], and the biosynthesis of PCs is catalyzed by the phytochelatin synthase such as PCS1 [30], the expression of OsPCS1 gene in different organs of rice was quantified as well. 
As shown in Figures S4–S6, the OsPCS1 and OsABCC1 genes are constitutively expressed in roots, nodes, and husks in the +As treatment, which confirmed that vacuolar sequestration of As is strongly associated with its complexation by PCs.

Generally, the whole growth period of rice plants includes seven growth stages, i.e., seedling, tillering, jointing, booting, heading, flowering, and milk stages, which can be categorized into vegetative growth or reproductive growth phases [29]. It is difficult to identify the time of transition from the vegetative growth to reproductive growth phases (i.e., the initiation of panicle differentiation), and both vegetative growth and reproductive growth phases are usually believed to proceed simultaneously during the jointing to heading stages [29, 31]. Nonetheless, the duration of the vegetative growth phase in rice is considered from the seedling to jointing stages (3–9 weeks) and that of the reproductive growth phase from the booting until milk stages (12–20 weeks) in this study.

Key gene expression in the vegetative growth phase

The vegetative growth phase is an important period for the formation of the root system of rice [29], which is the first organ to absorb arsenite and to respond to the toxic effect of arsenite accumulated in rice [32]. In the vegetative growth phase of this study, it was obvious that As concentrations in roots were higher than those in stems and leaves (Fig. 1), and the majority of As was accumulated in roots, with the total As content accounting for 45.1–51.4% (Fig. 2a). During this phase, we observed an inhibition of the gene expression of OsLsi1 and OsLsi2 in roots at the tillering stage (Fig. 3), which is consistent with previous results of field experiments [26, 33]. Such an inhibition is probably due to a self-protecting response of rice to toxic As [34], which could be one of the reasons for the decrease in the percentage of As content in roots at this stage (Fig. 2a).
Subsequently, the disinhibition of OsLsi1 and promotion of OsLsi2 expression in roots at the jointing stage (Fig. 3) was attributed to a demand for Si for rice growth and/or because the transporters OsLsi1 and OsLsi2 were degraded and needed to be recovered [26, 33]. Such a promotion of OsLsi1 and OsLsi2 expression can result in an increase in arsenite uptake by roots (Fig. 1 and S2a). Once arsenite is taken up by roots, vacuolar sequestration of As by the transporter OsABCC1 can limit the mobility of arsenite and control the transfer of arsenite to other organs in plants [16, 35]. OsABCC1 expression in the bottom first leaf was highest at the seedling stage (Fig. 5a), which may result in a preferential accumulation of As in the vacuoles of bottom leaves. At the jointing stage, notably, OsABCC1 expression in roots showed the highest value (Fig. 3), while this expression was suppressed in the basal stem and bottom first leaf (Fig. 4 and 5a). Such a difference in OsABCC1 expression between roots and shoots not only led to the increase in As concentration in roots but also resulted in the decrease in As concentration in shoots (Fig. 1). The decrease in the TF value from the seedling to tillering/jointing stages (Fig. 2b) confirms that although the total As contents in the roots, stems and leaves increased gradually as their biomass grew (Figure S1 and S2), less As was transferred from the roots to the shoots during the vegetative growth phase. As a result, the promotion of OsABCC1 expression in roots is the key regulation to sustaining high As accumulation in roots and to restraining As transfer from roots to shoots during the vegetative growth phase. During the vegetative phase, the expression patterns of OsLsi1, OsLsi2 and OsABCC1 in roots in this study were different from those in previous studies [16, 26, 33], which might be due to the fact that the nutrient solution of this study was Si-free, and the paddy soils in their field experiments contained Si. 
While these transporters are predominantly responsible for As(III) uptake in this study, in the previous studies the presence of Si may have competed with As(III), since Si and As(III) use the same transporters for uptake by rice. High Si accumulation in shoots can decrease As uptake and accumulation through downregulation of the expression of OsLsi1 and OsLsi2 in roots [36]. Si fertilizer has been demonstrated to hinder OsLsi1 and OsLsi2 expression in roots [26, 33], and it can be applied to rice plants grown in arsenic-contaminated paddy soils at the jointing stage.

Key gene expression in the reproductive growth phase

During the reproductive growth phase, the majority of As accumulated in rice remained in the roots at the booting and heading stages, while more As was transported from roots to shoots from the heading to milk stages (Fig. 2). Such a translocation of As accumulation in rice plants could be a result of combined regulation by different gene expression in both roots and shoots. In the roots, the relative expression of OsABCC1 was still above 1.0 at the booting and heading stages, and the suppression of OsLsi1 and OsLsi2 expression at the booting stage was released at the heading stage (Fig. 3), both of which favored As accumulation in roots, resulting in 61.2% of total arsenic accumulated in roots at the heading stage (Fig. 2a). At the same time, the expression of OsLsi6 in basal stems was promoted at the heading stage (Fig. 4a), which facilitated As translocation to upper nodes and leaves that are connected to the node via transit vascular bundles (TVB). The expression of OsLsi1, OsLsi2 and OsABCC1 in roots began to be inhibited and decreased over time after the heading stage (Fig. 3). Less As was likely taken up by roots or accumulated in the vacuoles of roots, and more As was transferred to the shoots. This result was confirmed by the finding that the TF value increased substantially from the heading to milk stage (Fig. 2b).
When As is transferred to the upper stems, the nodes are vitally important to controlling the arsenic distribution in rice plants [18, 19, 23]. In node III, the expression of OsABCC1 was substantially higher than that in nodes II and I, particularly at the milk stage (Fig. 4b-d), which could have led to a preferential As accumulation in node III. This result was supported by the finding that the As concentration in node III was higher than that in nodes II and I (P > 0.05) (Figure S7). In addition, the expression of OsABCC1 in nodes II and I was enhanced at the flowering and heading stages, respectively, indicating that more As accumulated in the vacuoles of the nodes. Our results confirmed that the As concentrations in the nodes were much higher than those in the stems (Figure S7 and Fig. 1b). In node I, EVBs and DVBs are connected to the top first leaf and the panicle, respectively; therefore, the transfer of elements between EVBs and DVBs determines their relative distribution between the top first leaf and the grains. The expression of OsLsi6, OsLsi2 and OsLsi3 was simultaneously suppressed at the flowering stage, which can probably be attributed to self-protection of the tissues of node I against As toxicity. Since the cooperation of these three transporters is required for the allocation of As to the panicles via the xylem pathway [22, 23], their simultaneous suppression in node I would reduce As transfer from node I to panicles and increase As transfer to the top first leaf at the flowering stage. The leaves of rice plants had the highest As concentration and percentage of As contents at the flowering and milk stages relative to that in other organs (Fig. 1 and 2a). This result implies that As transport to leaves via transpirational flow through the xylem vessels still plays an important role in As accumulation in leaves.
On the other hand, OsLsi6 in leaves is responsible for the unloading of substrates (including arsenite) out of the xylem into the leaf tissues [8, 21]. The stimulated expression of both the OsLsi6 and OsABCC1 genes in the bottom and top first leaves suggested that the leaves had a high capacity to accumulate As in their tissues and vacuoles (Fig. 5a and c). In husks, the relative expression of OsLsi6 and OsABCC1 was promoted simultaneously at the flowering stage (Fig. 5). The transient enhancement of OsLsi6 expression in husks may increase the transport of arsenite through the xylem pathway into the husks. At the same time, the promotion of OsABCC1 expression could increase the vacuolar sequestration of As in the husks. Therefore, upregulation of OsLsi6 and OsABCC1 expression at the flowering stage can lead to an increase in As accumulation in husks and a decrease in As distribution to brown rice. Overall, the expression of different genes in roots, nodes, leaves and husks during the reproductive growth phase demonstrated that rice plants have developed various ways to regulate the transport and accumulation of As, which eventually decreases As distribution in brown rice. During the reproductive phase, the regulation of gene expression in the different organs of rice is more complicated. On the one hand, the nodes and leaves upregulated the expression of the OsLsi6 and OsABCC1 genes and accumulated the majority of the As taken up by rice. This scenario can be considered self-regulation of rice in response to arsenic stress. On the other hand, our results also demonstrated that the milk stage is the key period for As distribution in panicles. Therefore, downregulation of OsLsi6, OsLsi2 and OsLsi3 gene expression in the nodes and upregulation of OsLsi6 and OsABCC1 gene expression in the husks are necessary to reduce As transport from nodes to panicles and increase As accumulation in husks. 
Under the stress of arsenite, our results demonstrated that the jointing stage is the key period for the expression of arsenite-transporting genes in roots, and the heading to milk stages are the key period for the expression of arsenite-transporting genes in shoots. The high accumulation of arsenic in roots at the jointing stage was mainly associated with the relatively high expression of the OsLsi2, OsLsi1 and OsABCC1 genes. The substantial increase in arsenic accumulation in shoots during the heading to milk stages was related to the upregulation of OsLsi3/OsLsi6 and OsABCC1 expression in nodes/leaves and husks as well as the suppression of OsABCC1 expression in roots. These findings provide useful information on the critical time to apply regulation measures to control As uptake and transport in rice plants during safe rice production in arsenic-contaminated paddy soil. It should be noted that the results obtained with one rice genotype in this study may not apply to other rice genotypes; therefore, further investigations are needed to ascertain the expression patterns of genes for arsenite uptake and translocation in other rice varieties that are also widely used in rice production.

Plant materials and growth experiments

The rice variety Youyou 128 (Oryza sativa L.), a three-line indica hybrid rice cultivar that can accumulate high As in rice grains, was selected [37, 38]. The rice seeds were purchased from the Vegetable Research Institute of the Guangdong Agricultural Academy. After being surface sterilized in 75% ethyl alcohol and 30% H2O2, the seeds were thoroughly rinsed with deionized water for 4 h and then placed on a sheet of moist filter paper in the dark at 25 °C. Once germinated, rice seedlings were transplanted into 2 L plastic pots containing half-strength Kimura B solution as a nutrient solution, and this point was set as the beginning of the whole growth period (week 0). 
After 1 week, NaAsO2 at a final concentration of 5 μM was added to the nutrient solution in the +As treatment, with a CK treatment containing no As(III) as a control. Three independent biological replicates were set up for each treatment, in which 6 rice plants grown in the same plastic pot were set as one replicate. The seedlings were grown in an artificial greenhouse at 22–28 °C and 70% relative humidity with a photoperiod of 10:14 h (light/dark). The temperature was then increased gradually to 32 °C, and the photoperiod was changed to 12:12 h during the booting and heading stages (12–16 weeks). The composition of the nutrient solution was as follows: 0.18 mM (NH4)2SO4, 0.27 mM MgSO4, 0.09 mM KNO3, 0.09 mM KH2PO4, 0.18 mM Ca(NO3)2, 0.045 mM K2SO4, 20 μM NaFe-EDTA, 6.7 μM MnSO4, 0.15 μM ZnSO4, 0.16 μM CuSO4, 9.4 μM H3BO3, 0.10 μM Na2MoO4, and 0.10 μM CoSO4. The pH of the hydroponic nutrient solution was adjusted to 5.6 with 1.0 M KOH or 1 M HCl, and the nutrient solution was renewed every 3 d during the whole growth period. The nutrient solution was a Si-free formulation with a Si concentration below the detection limit of 0.013 mM to minimize the competitive absorption impact of silicic acid on arsenite uptake by roots [8, 33]. Hydroponics was used in this study because it is more suitable for monitoring the physiological functions of plants (e.g., absorption and translocation of nutrients), whereas pot or field experiments are more susceptible to environmental influences [39, 40].

Sample collection and preparation

The rice plants were collected at seven stages during the whole growth period: (i) seedling stage (3 weeks after transplantation, 3 weeks), (ii) tillering stage (6 weeks), (iii) jointing stage (9 weeks), (iv) booting stage (12 weeks), (v) heading stage (16 weeks), (vi) flowering stage (18 weeks), and (vii) milk stage (20 weeks). 
At each stage, the harvested plants were washed with distilled water and separated into roots and shoots, with the shoots being subdivided into stems (including leaf sheaths), leaves, panicles, and grains (i.e., husks and brown rice). The roots, stems, leaves, panicles, husks and brown rice were oven dried at 60 °C for 72 h and ground into powder with a mill before As analysis. For the analysis of gene expression, the roots, basal stems, nodes (i.e., nodes III, II, and I), leaves (i.e., bottom first leaf, top second leaf, and top first leaf), panicles and husks (Figure S8) were frozen and milled into powder in liquid nitrogen, and then stored at − 80 °C before RNA extraction.

Determination of total As and translocation factor

Approximately 0.2 g of each dried sample was predigested with 10 mL of an HNO3 and HClO4 mixture (87:13, v:v) at room temperature for 8 h and then digested on a graphite digestion apparatus (proD48, Changsha Zerom Instrument and Meter Co., Ltd., Hunan, China) [41]. The digested solution was then diluted with 1% HNO3 to 50 mL and filtered with 0.45 μm filter paper. The total As concentration was determined by a hydride generation-atomic fluorescence spectrometer (AFS-933, Titan Instruments Co., Ltd., Beijing, China). Certified reference material (GBW10020, citrus leaf flour) and a blank were used for quality control. The As recovery from the citrus leaf flour was 111.8 ± 1.9% (n = 18). The translocation of As from root to shoot was expressed as a translocation factor (TF), which was calculated as follows: $$ \mathrm{TF}={\mathrm{C}}_{\mathrm{shoot}}/{\mathrm{C}}_{\mathrm{root}}, $$ where Cshoot and Croot are the As concentrations in the shoots (mg/kg) and roots (mg/kg) of rice plants, respectively.

RNA extraction and reverse transcriptase polymerase chain reaction (RT-PCR)

Total RNA from the plant samples was extracted using Trizol reagent (Invitrogen Corp., CA, USA). 
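The TF calculation above is a simple concentration ratio; a minimal sketch in Python makes the units and the guard against a non-positive root concentration explicit. The concentration values in the example are illustrative placeholders, not data from this study.

```python
def translocation_factor(c_shoot: float, c_root: float) -> float:
    """Return TF = C_shoot / C_root, with both concentrations in mg/kg dry weight."""
    if c_root <= 0:
        raise ValueError("root As concentration must be positive")
    return c_shoot / c_root

# Illustrative values: 2.0 mg/kg As in shoots, 8.0 mg/kg As in roots.
print(translocation_factor(2.0, 8.0))  # 0.25
```

A TF below 1 indicates that most As is retained in the roots; a rising TF between growth stages, as reported here from heading to milk stage, indicates increasing root-to-shoot translocation.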
Then, first-strand cDNA was synthesized from 1 μg of total RNA using an oligo dT (18) primer after removing genomic DNA with a PrimeScript™ RT reagent kit with gDNA eraser (Takara Bio. Inc., Kanagawa, Japan). The relative transcript levels of the genes OsLsi1, OsLsi2, OsLsi3, OsLsi6 and OsABCC1 in different parts of rice, with Actin as an internal control, were measured (Table S1). Real-time quantitative RT-PCR was performed in a 10 μL reaction volume containing 2.5 μL of 1:5 diluted cDNA, 500 nM of each gene-specific primer and SYBR Premix Ex Taq (Takara Bio. Inc., Kanagawa, Japan) using a CFX384 Real-Time System (CFX384 touch, Bio-Rad Laboratories Inc., CA, USA). Real-time quantitative PCR was performed using the following protocol: (94 °C/2 min) × 1; (94 °C/30 s)/(58 °C/30 s)/(72 °C/30 s) × 45; and (72 °C/5 min) × 1. The specific primer sequences for the genes OsLsi1, OsLsi2, OsLsi3, OsLsi6, OsABCC1 and Actin are shown in Table S2. The target gene expression was normalized to Actin and to the CK treatment by the 2^−ΔΔCt method as follows [42]: $$ {\Delta \mathrm{C}}_{\mathrm{t}}={\mathrm{C}}_{\left(\mathrm{t},\mathrm{target}\ \mathrm{gene}\right)}-{\mathrm{C}}_{\left(\mathrm{t},\mathrm{internal}\ \mathrm{control}\ \mathrm{gene}\right)} $$ $$ {\Delta \Delta \mathrm{C}}_{\mathrm{t}}={\Delta \mathrm{C}}_{\left(\mathrm{t},+\mathrm{As}\right)}-{\Delta \mathrm{C}}_{\left(\mathrm{t},\mathrm{CK}\right)} $$ $$ \mathrm{Relative}\ \mathrm{expression}\ \mathrm{level}={2}^{-\Delta \Delta {\mathrm{C}}_{\mathrm{t}}} $$ C(t, target gene) and C(t, internal control gene) are the threshold cycles of the target gene and Actin amplification, respectively. ΔC(t, +As) and ΔC(t, CK) are the differences in threshold cycles between the target and internal control genes in the +As and CK treatments, respectively. All statistical analyses were performed with SPSS 19.0 software (SPSS Inc., IL, USA). The significance of the differences among the growth stages was analysed by one-way ANOVA. 
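The 2^−ΔΔCt normalization described above can be sketched in a few lines of Python. The Ct values in the example are illustrative placeholders, not measurements from this study.

```python
def relative_expression(ct_target_as: float, ct_actin_as: float,
                        ct_target_ck: float, ct_actin_ck: float) -> float:
    """Return 2^-ΔΔCt, normalizing the target gene to Actin and the
    +As treatment to the CK control (Livak and Schmittgen method)."""
    d_ct_as = ct_target_as - ct_actin_as  # ΔCt in the +As treatment
    d_ct_ck = ct_target_ck - ct_actin_ck  # ΔCt in the CK control
    dd_ct = d_ct_as - d_ct_ck             # ΔΔCt
    return 2.0 ** (-dd_ct)

# A ΔΔCt of -1 corresponds to a two-fold upregulation relative to CK:
print(relative_expression(24.0, 18.0, 25.0, 18.0))  # 2.0
```

A relative expression level above 1.0 thus indicates upregulation in the +As treatment relative to the control, which is the threshold used throughout the discussion of OsABCC1 expression above.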
A one-sample t-test was used to detect significant differences. The graphs were created with Origin 8.0 (OriginLab, Mass, USA). All data generated or analysed during this study are included in this published article and its supplementary information files or are available from the corresponding author on request.

As(III): Arsenite
EVBs: Enlarged vascular bundles
DVBs: Diffuse vascular bundles
Phytochelatins
RT-PCR: Reverse transcriptase polymerase chain reaction
TF: Translocation factor
TVB: Transit vascular bundles

Shakoor MB, Riaz M, Niazi NK, Ali S, Rizwan M, Arif MS, Arif M. Recent advances in arsenic accumulation in rice. In: Hasanuzzaman M, Fujita M, Nahar K, Biswas J, editors. Advances in rice research for abiotic stress tolerance. Woodhead Publishing; 2019. https://doi.org/10.1016/B978-0-12-814332-2.00018-6. Zhou Y, Niu L, Liu K, Yin S, Liu W. Arsenic in agricultural soils across China: distribution pattern, accumulation trend, influencing factors, and risk assessment. Sci Total Environ. 2018. https://doi.org/10.1016/j.scitotenv.2017.10.232. Chen H, Tang Z, Wang P, Zhao FJ. Geographical variations of cadmium and arsenic concentrations and arsenic speciation in Chinese rice. Environ Pollut. 2018. https://doi.org/10.1016/j.envpol.2018.03.048. He J, Charlet L. A review of arsenic presence in China drinking water. J Hydrol. 2013. https://doi.org/10.1016/j.jhydrol.2013.04.007. Shi YL, Chen WQ, Wu SL, Zhu YG. Anthropogenic cycles of arsenic in mainland China: 1990-2010. Environ Sci Technol. 2017. https://doi.org/10.1021/acs.est.6b01669. Kwon JC, Nejad ZD, Jung MC. Arsenic and heavy metals in paddy soil and polished rice contaminated by mining activities in Korea. Catena. 2017. https://doi.org/10.1016/j.catena.2016.01.005. Norton GJ, Duan G, Dasgupta T, Islam MR, Lei M, Zhu YG, Deacon CM, Moran AC, Islam S, Zhao FJ, Stroud JL, Mcgrath SP, Feldmann J, Price AH, Meharg AA. 
Environmental and genetic control of arsenic accumulation and speciation in rice grain: comparing a range of common cultivars grown in contaminated sites across Bangladesh, China, and India. Environ Sci Technol. 2009. https://doi.org/10.1021/es901844q. Ma JF, Yamaji N, Mitani N, Xu XY, Su YH, Mcgrath SP, Zhao FJ. Transporters of arsenite in rice and their role in arsenic accumulation in rice grain. Proc Natl Acad Sci U S A. 2008. https://doi.org/10.1073/pnas.0802361105. Sun W, Sierra-Alvarez R, Milner L, Oremland R, Field JA. Arsenite and ferrous iron oxidation linked to chemolithotrophic denitrification for the immobilization of arsenic in anoxic environments. Environ Sci Technol. 2009. https://doi.org/10.1021/es900978h. Wu Z, Ren H, McGrath SP, Wu P, Zhao FJ. Investigating the contribution of the phosphate transport pathway to arsenic accumulation in rice. Plant Physiol. 2011. https://doi.org/10.1104/pp.111.178921. Carey AM, Norton GJ, Deacon C, Scheckel KG, Lombi E, Punshon T, Guerinot ML, Lanzirotti A, Newville M, Choi YS, Price AH, Meharg AA. Phloem transport of arsenic species from flag leaf to grain during grain filling. New Phytol. 2011. https://doi.org/10.1111/j.1469-8137.2011.03789.x. Schroeder JI, Delhaize E, Frommer W, Guerinot ML, Harrison MJ, Herrera-Estrella L, Horie T, Kochian L, Munns R, Nishizawa NK, Tsay YF, Sanders D. Using membrane transporters to improve crops for sustainable food production. Nature. 2013. https://doi.org/10.1038/nature11909. Ma JF, Tamai K, Yamaji N, Mitani N, Konishi S, Katsuhara M, Ishiguro M, Murata Y, Yano M. A silicon transporter in rice. Nature. 2006. https://doi.org/10.1038/nature04590. Ma JF, Yamaji N, Mitani N, Tamai K, Konishi S, Fujiwara T, Katsuhara M, Yano M. An efflux transporter of silicon in rice. Nature. 2007. https://doi.org/10.1038/nature05964. Dhankher OP, Rosen BP, McKinney EC, Meagher RB. Hyperaccumulation of arsenic in the shoots of Arabidopsis silenced for arsenate reductase (ACR2). 
Proc Natl Acad Sci U S A. 2006. https://doi.org/10.1073/pnas.0509770102. Song WY, Yamaki T, Yamaji N, Ko D, Jung KH, Fujii-Kashino M, An G, Martinoia E, Lee Y, Ma JF. A rice ABC transporter, OsABCC1, reduces arsenic accumulation in the grain. Proc Natl Acad Sci U S A. 2014. https://doi.org/10.1073/pnas.1414968111. Mitani N, Ma JF, Iwashita T. Identification of the silicon form in xylem sap of rice (Oryza sativa L.). Plant Cell Physiol. 2005. https://doi.org/10.1093/pcp/pci018. Chen Y, Moore KL, Miller AJ, Mcgrath SP, Ma JF, Zhao FJ. The role of nodes in arsenic storage and distribution in rice. J Exp Bot. 2015. https://doi.org/10.1093/jxb/erv164. Yamaji N, Ma JF. The node, a hub for mineral nutrient distribution in graminaceous plants. Trends Plant Sci. 2014. https://doi.org/10.1016/j.tplants.2014.05.007. Yamaji N, Ma JF. A transporter at the node responsible for intervascular transfer of silicon in rice. Plant Cell. 2009. https://doi.org/10.1105/tpc.109.069831. Yamaji N, Mitani N, Ma JF. A transporter regulating silicon distribution in rice shoots. Plant Cell. 2008. https://doi.org/10.1105/tpc.108.059311. Yamaji N, Sakurai G, Mitani-Ueno N, Ma JF. Orchestration of three transporters and distinct vascular structures in node for intervascular transfer of silicon in rice. Proc Natl Acad Sci U S A. 2015. https://doi.org/10.1073/pnas.1508987112. Yamaji N, Ma JF. Node-controlled allocation of mineral elements in Poaceae. Curr Opin Plant Biol. 2017. https://doi.org/10.1016/j.pbi.2017.05.002. Li N, Wang J, Song WY. Arsenic uptake and translocation in plants. Plant Cell Physiol. 2016. https://doi.org/10.1093/pcp/pcv143. Mitani N, Yamaji N, Ma JF. Characterization of substrate specificity of a rice silicon transporter, Lsi1. Pflugers Arch - Eur J Physiol. 2008. https://doi.org/10.1007/s00424-007-0408-y. Yamaji N, Ma JF. Further characterization of a rice silicon efflux transporter, Lsi2. Soil Sci Plant Nutr. 2011. https://doi.org/10.1080/00380768.2011.565480. 
Zhao FJ, Ago Y, Mitani N, Li RY, Su YH, Yamaji N, McGrath SP, Ma JF. The role of the rice aquaporin Lsi1 in arsenite efflux from roots. New Phytol. 2010. https://doi.org/10.1111/j.1469-8137.2010.03192.x. Zheng MZ, Cai C, Hu Y, Sun GX, Williams PN, Cui HJ, Zhao FJ, Zhu YG. Spatial distribution of arsenic and temporal variation of its concentration in rice. New Phytol. 2011. https://doi.org/10.1111/j.1469-8137.2010.03456.x. Yu HY, Wang X, Li F, Li B, Liu C, Wang Q, Lei J. Arsenic mobility and bioavailability in paddy soil under iron compound amendments at different growth stages of rice. Environ Pollut. 2017. https://doi.org/10.1016/j.envpol.2017.01.072. Yamazaki S, Ueda Y, Mukai A, Ochiai K, Matoh T. Rice phytochelatin synthases OsPCS1 and OsPCS2 make different contributions to cadmium and arsenic tolerance. Plant Direct. 2018. https://doi.org/10.1002/pld3.34. Liu T, Liu H, Zhang H, Xing Y. Validation and characterization of Ghd7.1, a major quantitative trait locus with pleiotropic effects on spikelets per panicle, plant height, and heading date in rice (Oryza sativa L.). J Integr Plant Biol. 2013. https://doi.org/10.1111/jipb.12070. Zhang FQ, Wang YS, Lou ZP, Dong JD. Effect of heavy metal stress on antioxidative enzymes and lipid peroxidation in leaves and roots of two mangrove plant seedlings (Kandelia candel and Bruguiera gymnorrhiza). Chemosphere. 2007. https://doi.org/10.1016/j.chemosphere.2006.10.007. Yamaji N, Ma JF. Spatial distribution and temporal variation of the rice silicon transporter Lsi1. Plant Physiol. 2007. https://doi.org/10.1104/pp.106.093005. Srivastava S, Srivastava AK, Suprasanna P, Souza SF. Quantitative real-time expression profiling of aquaporins-isoforms and growth response of Brassica juncea under arsenite stress. Mol Biol Rep. 2013. https://doi.org/10.1007/s11033-012-2303-7. Guo J, Xu W, Ma M. 
The assembly of metals chelation by thiols and vacuolar compartmentalization conferred increased tolerance to and accumulation of cadmium and arsenic in transgenic Arabidopsis thaliana. J Hazard Mater. 2012. https://doi.org/10.1016/j.jhazmat.2011.11.008. Mitaniueno N, Yamaji N, Ma JF. High silicon accumulation in the shoot is required for down-regulating the expression of Si transporter genes in rice. Plant Cell Physiol. 2016. https://doi.org/10.1093/pcp/pcw163. Suriyagoda LDB, Dittert K, Lambers H. Mechanism of arsenic uptake, translocation and plant resistance to accumulate arsenic in rice grains. Agric Ecosyst Environ. 2018. https://doi.org/10.1016/j.agee.2017.10.017. Zhou H, Zeng M, Zhou X, Liao BH, Peng P, Hu M, Zhu W, Wu YJ, Zou ZJ. Heavy metal translocation and accumulation in iron plaques and plant tissues for 32 hybrid rice (Oryza sativa L.) cultivars. Plant Soil. 2015. https://doi.org/10.1007/s11104-014-2268-5. Huang L, Li M, Yun S, Sun T, Li C, Ma F. Ammonium uptake increases in response to PEG-induced drought stress in Malus hupehensis Rehd. Environ Exp Bot. 2018. https://doi.org/10.1016/j.envexpbot.2018.04.007. Felizeter S, McLachlan MS, De Voogt P. Root uptake and translocation of perfluorinated alkyl acids by three hydroponically grown crops. J Agric Food Chem. 2014. https://doi.org/10.1021/jf500674j. Wang X, Yi Z, Yang H, Wang Q, Liu S. Investigation of heavy metals in sediments and Manila clams Ruditapes philippinarum from Jiaozhou Bay. China Environ Monit Assess. 2010. https://doi.org/10.1007/s10661-009-1262-5. Livak KJ, Schmittgen TD. Analysis of relative gene expression data using real-time quantitative PCR. Methods. 2001. https://doi.org/10.1006/meth.2001.1262. 
This work was financially supported by the National Natural Science Foundation of China (41877043), Guangdong Key Research and Development Project (2019B110207002), Guangdong Natural Science Funds for Distinguished Young Scholars (2017A030306010), National Key Research and Development Project of China (2016YFD08007010), Guangdong Academy of Sciences' Projects (2017GDASCX-0404), Local Innovative and Research Teams Project of Guangdong Pearl River Talents Program (2017BT01Z176), and Guangdong Special Support Plan for High-Level Talents (2017TQ04Z511). SCNU Environmental Research Institute, Guangdong Provincial Key Laboratory of Chemical Pollution and Environmental Safety & MOE Key Laboratory of Theoretical Chemistry of Environment, South China Normal University, Guangzhou, 510006, China Dandan Pan & Xiaomin Li Guangdong Institute of Eco-Environmental Science & Technology, Guangdong Key Laboratory of Integrated Agro-environmental Pollution Control and Management, Guangzhou, 510650, China Dandan Pan, Fangbai Li, Chuanping Liu, Weijian Wu & Tingting Tao College of Natural Resources and Environment, South China Agricultural University, Guangzhou, 510642, China Dandan Pan School of Environment, South China Normal University, Guangzhou, 510006, China College of Life Sciences, South China Agricultural University, Guangzhou, 510642, China Jicai Yi School of Food Science and Engineering, Foshan University, Foshan, 528000, China Tingting Tao Fangbai Li Xiaomin Li Chuanping Liu Weijian Wu DP conducted all the experimental works, data collection, analysis, interpretation, and drafting the manuscript and revisions. WW and TT carried out the plant growth, RNA extraction and determination of As concentration experiments. JY, FL, XL and CL contributed to the conception and design of the experiment, data analysis, data interpretation, writing and revising the manuscript. All authors read and approved the final manuscript. Correspondence to Xiaomin Li. Additional file 1 Figure S1. 
Biomass of different parts of the rice plants during the whole growth period. Figure S2. Total arsenic contents in different parts of the rice plants during the whole growth period. Figure S3. Correlations between the relative expression of OsLsi1 and OsABCC1 (a) and between the relative expression of OsLsi2 and OsABCC1 in rice roots (b) during the whole growth period of rice plants. Figure S4. Relative expression of OsPCS1 gene in roots in the +As treatment during the whole growth period. Figure S5. Relative expression of OsPCS1 gene in (a) basal stem, (b) node III, (c) node II, and (d) node I of rice in the +As treatment during the whole growth period. Figure S6. Relative expression of OsPCS1 gene in (a) bottom first leaf, (b) top second leaf, (c) top first leaf, and (d) husk in the +As treatment during whole growth period. Figure S7. Total As concentration in nodes at the milk stage in the +As treatment. Figure S8. Schematic diagram of rice samples harvested. Table S1. Target genes in different tissues were determined in the experiment. Table S2. Specific primer sequences of the genes in the experiment. Additional file 2. Data of the means with standard errors for three replicates in Figs. 1, 2, 3, 4 and 5, S1-S2, and S4-S7. Pan, D., Yi, J., Li, F. et al. Dynamics of gene expression associated with arsenic uptake and transport in rice during the whole growth period. BMC Plant Biol 20, 133 (2020). https://doi.org/10.1186/s12870-020-02343-1 Accepted: 17 March 2020 Uptake and transport Whole growth period
Vision 2030 and reducing the stigma of vocational and technical training among Saudi Arabian students Abdulaziz Salem Aldossari Empirical Research in Vocational Education and Training volume 12, Article number: 3 (2020) Technical and vocational education and training (TVET) plays a critical role in developing essential labor market skills. In Saudi Arabia, participation in TVET has traditionally been stigmatized in favor of white-collar jobs. However, the importance of skilled labor has increased in Saudi Arabia's private sector as the country's Vision 2030 focuses on moving the economy from oil to investment. This quantitative study investigates the role of recent socio-economic transformations in changing attitudes toward TVET. Statistical analysis of a questionnaire distributed to 1007 TVET students identified a significant relationship between perceptions of TVET and gender, family income, and parental educational level. Saudi Arabia spends more money on education and training than any other country in the Middle East (Yamada 2018), including providing free public school, college, and university education. However, despite universal primary education and a large proportion of young people benefiting from secondary and higher education, many graduates lack the necessary qualifications for the job market (Aldossari and Bourne 2016; Bosbait and Wilson 2005; Yamada 2018). Ramady (2010: 400) pointed out that "there is a growing imbalance between the quality and quantity of occupational expertise produced by the educational system and the occupational structure demanded by the economy." The health sector accounted for 16.6% of the 591 academic programs in Saudi public universities, followed by business and management (11%) and humanities (10.3%; Ministry of Education 2018). Such a situation leads to poor skill matching for students who lack a sound theoretical grounding in certain professional fields (Bosbait and Wilson 2005). 
By limiting the role of technical and vocational training, the educational system in Saudi Arabia has failed to prepare its students for the global economy. Due to a rise in oil revenues, from 1969 to 1980 Saudi Arabia enjoyed an economic boom that had a significant impact on engendering negative attitudes toward technical and vocational training. During that period, most students chose to pursue university education rather than technical and vocational training, disdaining such employment in favor of corporate or public sector careers (Yamada 2018). Economic growth, which rose from an annual rate of 3% in 1960 to peak at 6.37% in 1982 (World Bank 2019), was accompanied by a population boom and corresponding increases in the working age population. During this period, the country had a shortage of engineers, manufacturers, technicians, operators, and other skilled laborers. As a result, the kingdom recruited foreign skilled nationals to fill the employment gap. Outsourcing and international hiring reduced the willingness of most Saudis to work in jobs that required substantial investments of time and effort (Mellahi and Wood 2001). The weak relationship between schools and the private sector has also led to increased demand for the recruitment of foreign skilled nationals over Saudis (Yamada 2018). In the mid-1990s, more than 65% of public sector workers were Saudis, whereas the percentage of foreign workers in the private sector was much higher (Mellahi 2000). Official data for the first quarter of 2019 showed that workers in the private sector accounted for 67.9% (22.3% Saudis and 77.7% non-Saudis) of the total workforce, followed by Saudi and foreign domestic workers (22.4%), whereas the proportion of employees in the public sector was 9.6%, 96% of whom were Saudis (General Authority of Statistics 2019a). 
However, the recent decrease in oil demand has had a negative impact on the country's economy, resulting in a growing unemployment rate in the face of an ever-increasing population (Arabi 2018), which totaled 33,413,660 in 2018, 38% of whom were non-Saudis (General Authority of Statistics 2019b). The government has initiated significant economic and financial reforms to end its dependency on oil and turn to the utilization of investment sources, thus emphasizing the role of the private sector and technical and vocational training (TVET) (Khashan 2017). To reduce the citizen unemployment rate, the government launched the Saudization program in 1985, which has progressively intensified requirements for the private sector to employ a certain percentage of Saudi nationals (Hussain 2020). In December 2017, the Ministry of Labor and Social Development published a ministerial resolution to incrementally increase work permit fees for foreign employees and their dependents over the following 3 years. The cost of visa renewal for expatriate workers quintupled over the period from 1994 to 2019, and the cost of hiring expatriates was increased with the introduction of compulsory health care insurance (Hussain 2020). As a result, since 2017, more than 667,000 non-Saudi workers have left the country, creating a substantial professional employment gap in the private sector (Al Omran 2018). The success of these initiatives remains limited, as foreign workers continued to occupy just over 75% of the private sector workforce as of the first quarter of 2019 (General Authority of Statistics 2019b). A major factor limiting the ability to develop a citizen-staffed private sector workforce is the stigmatization of technical and vocational careers in the society. A cultural preference for general and higher education continues to dominate certain parts of Saudi society, leading to an imbalance between supply and demand in the labor market (Yamada 2018). 
As Ramady (2010: 395) noted, the Saudi Arabian educational system tends to confer "high social prestige to university education, while underestimating the significance of technological and vocational education." As a result, job creation has become a serious problem; unemployment was at a record level of 12.5% for the first quarter of 2019 (General Authority of Statistics 2019a). The highest category of job seekers were those with bachelor's degrees (45.8%), while the proportion of unemployed with professional diplomas (third-level college) was dramatically lower (7.4%). To address such disparities, Saudi Arabia officially unveiled its ambitious economic plan, Vision 2030, in April 2016. Among the various core goals of the plan are increasing the country's economic sustainability through a reduction in oil dependency by 2030, establishing strategic inter-regional partnerships, and boosting productivity and domestic job opportunities (Faudot 2019). Vision 2030 includes solutions to key problems concerning human, social, economic, and environmental development to meet the demand of future generations (Alshuwaikhat and Mohammed 2017). As part of Vision 2030, the Saudi Arabian government has established a National Labor Gateway (TAQAT) program to expand entrepreneurial and vocational training programs to better exploit the growth opportunities offered by the young population (Saudi Vision 2030 n.d.). However, as indicated by ongoing employment trends, the success of the Saudi government's educational initiatives depends on the development of positive attitudes toward TVET. The major cultural and social issues in Saudi Arabia that negatively affect the TVET system concern gendered restrictions and the perceptions of most Saudis that skill-based and manual qualifications lead to less prestigious careers (Mellahi 2000). 
The opening of broader job and career opportunities through Vision 2030 initiatives aims to stimulate a significant shift in youth and female participation in vocational and technical education and in related occupations (Alshuwaikhat and Mohammed 2017). To evaluate the success of the government's socio-economic initiatives in changing perceptions toward TVET, this study applied capability theory to quantitatively assess the perceived impact of the economic and social transformations embodied in Vision 2030 among Saudi vocational students. More specifically, the study was structured around the following research questions: Are vocational students' attitudes concerning the economic transformation's impact on the acceptability and viability of TVET significantly affected by the respondents' gender, marital status, study program, family income, and parents' level of educational attainment? Are vocational students' attitudes concerning the social transformation's impact on the acceptability and viability of TVET significantly affected by the respondents' gender, marital status, study program, family income, and parents' level of educational attainment? This study followed Powell and McGrath's study (2014) and applied a capability approach to examine the problem. Developed by Amartya Sen (1993) and considerably further discussed by Martha Nussbaum (2007), among others, this approach advocates the importance of enhancing people's freedoms and genuine opportunities to attain well-being (Dang 2014). The main elements of the capability approach are capability, functioning, freedom, and conversion factors. Capability (opportunity) is defined as what a person is able to do or able to be, and functioning is related to achievement (Sen 2005: 153), whereas freedom is recognized as "acting freely" and "being able to choose," which are "directly conducive to 'well-being'" (Sen 1992: 50). 
Conversion factors describe the variability with which people convert resources into functionings and capabilities (Sen 1992, 1999, 2009), and are categorized into personal, social, and environmental factors. Personal factors encompass variables such as gender, age, and health, whereas institutions and sociocultural norms are considered social conversion factors, and environmental factors include aspects of the physical surroundings such as climate, the built environment, and infrastructure (Dang 2014). The implications of the capability approach in the literature are broad; it can be used as a framework to assess people and social environments, design and evaluate policies, or suggest social modifications (Robeyns 2006). The capability approach has been discussed extensively in the TVET literature (Powell and McGrath 2014; Lambert and Vero 2013; McGrath 2012). McGrath (2012) argued that the capability approach provides "a wider and more person-centred theory and practice of learnings-for-lives" (p. 15). Lambert and Vero (2013) assessed the impact of firms' vocational training policies on employees' capability to aspire to learning. They found that the main determinant of the capability to aspire is related to environmental factors, including training policy, and they highlighted the importance of the capability for voice, identified as the ability to express views and concerns. Powell and McGrath (2014) discussed the use of the capability approach and its potential contribution to TVET evaluation in South Africa and highlighted its value for advancing social justice, human rights, and poverty alleviation by prioritizing the needs of people over the economy. In the context of Saudi Arabia, the government seeks to provide resources that support vocational education; however, attitudes toward TVET have remained negative since the economic boom, which resulted in an industrial and technical sector workforce that is predominantly composed of foreign workers. 
As a result, Vision 2030 policies explicitly aim to change the socio-economic factors stigmatizing TVET in the country (Khashan 2017). Belcher and DeForge (2012) defined stigma as a negative attitude and perception indicating social disapproval and rejection which, as Falk (2010) asserts, leads a person to feel guilty, ashamed, and inferior. Specifically, Gullekson (1992) described families' experience of stigma as encompassing fear, loss, lowered family esteem, shame, secrecy, distrust, anger, inability to cope, hopelessness, and helplessness. Markowitz (1998) emphasized that stigmatizing attitudes can cripple psychological well-being and promote disability by impeding social integration, the performance of social roles, and quality of life. Hence, the concept of freedom can be promoted to encourage individuals to choose among different ways of living and different ways of 'being and doing' without fearing stigma (Sen 1992). From the capability perspective, the process of converting available resources into well-being depends on the individual, social, and environmental factors that influence the ability to convert resources into functionings (outcomes or achievements) and capabilities (real opportunities). This study examined both personal factors (gender, marital status, monthly family income, parental educational level) and social conversion factors (employability policies, income and incentive policies). These two dimensions influence how a person can convert the characteristics of a commodity into a functioning (achievement). However, such accomplishment cannot occur without freedom (choice) and equality. Sen (1992) asserted that having the freedom and capability to do something imposes the duty to consider whether to do it, which involves individual responsibility. 
In the case of employment, as Bonvin and Galster (2010) stated, "employability is more than the ability to access work; it is about the real freedom to choose the job one has reason to value" (p. 72). Hence, in the context of this study, "capabilities" relates to the ability of students to enroll in TVET, whereas "functioning" refers to students' actual enrolment. For example, Saudi Arabia has technical colleges for both women and men (capability); however, women find it difficult to enroll in such programs (functioning) due to such social influences as unequal gender roles and disparate facilities. From this perspective, Saudi students need more than opportunities to access the skills and abilities necessary for work; they also need valuable opportunities that contribute to human flourishing within the labor market.

The increasing need for TVET education

Global economic transformations fuelled by technological advances have increased the demand for skilled workers, and many rapidly expanding industries are currently experiencing serious shortages of trained labor to meet the needs of the global market. As a result, large numbers of expatriates have been employed to fill the gap between demand and supply in fields such as manufacturing, services, and information technology (Kizu et al. 2018; Mellahi 2000). However, according to Looney (2004), the imbalance between low economic growth and an increasing population is one of the greatest challenges impeding developing economies. Closing the gap between the needs of the labor market and the output of the education system has become a priority for sustainable economic development (Bilboe 2011). Whenever an expansion in the labor market occurs and training is closely linked to available jobs, the financial returns from vocational education and training in less developed countries have been reported to be higher than those from general education (Aizenman et al. 
2018; Igarashi and Acosta 2018; Looney 2004). Rapid social, political, economic, technological, and educational transformation has contributed to changing perspectives on both the need for and the nature of vocational and technical education. In Saudi Arabia, according to statistics from the Technical and Vocational Training Corporation (2018), enrollment in technical college programs increased in 2017 over the previous year by 17% for two-year technical diploma programs, by 23% for international technical colleges (excellence colleges), and by 20% for trainees in specialized institutes operated in partnership with the private sector. In a 2017 report, the World Bank argued that improving employment rates in the Gulf States depends on the ability of governments to increase the attractiveness of private sector jobs and citizens' willingness to be employed in the private sector. The Saudi Arabian government has taken up this challenge to develop its nation's human resources (Al-Rasheed 2002); however, the lack of preparedness of Saudi citizens to take up certain types of employment opportunities remains problematic. Vision 2030 has initiated reform policies to meet the expectations of a rapidly expanding young Saudi population by facilitating the transition into the labor market. From 2012 to 2017, the localization rate of the oil and gas sector workforce grew from 40 to 75% (Hvidt 2018). In 2018, the Kingdom launched major labor market reforms to stimulate growth in other industrial sectors by extending Saudi-only jobs to include the sale of watches, eyewear, medical equipment and devices, electrical and electronic appliances, auto parts, building materials, carpets, cars and motorcycles, home and office furniture, children's clothing and men's accessories, home kitchenware, and confectionery (Young 2018). 
The Saudi Arabian government has supported the Technical and Vocational Training Corporation through increased funding and the implementation of initiatives to improve TVET and expand opportunities for women (Khan et al. 2017). The labor force participation rate among Saudi women gradually increased from 14% in 1990 to 22% in 2017 (Arabi 2018). Increasing enrolment rates of women in technical and vocational training and development has been a major goal of recent initiatives (Khan et al. 2017). Among the 15 million jobs that Saudi Arabia plans to create by 2030, women are expected to occupy 3.6 million of the 11 million positions reserved for nationals (Saudi Vision 2030 n.d.). As discussed in more detail below, Saudi nationals have generally preferred to take up limited white-collar jobs, leaving gaps in the job market largely consisting of skilled and manual employment opportunities. Because of a preference for high-paying white-collar work, many Saudi workers refuse to take jobs in other sectors of the economy, resulting in higher-than-expected levels of unemployment for such a prosperous country (Yamada 2018). Many students refrain from enrolling in technical and vocational training programs because they believe that jobs in this area offer fewer financial incentives than white-collar jobs. However, a substantial gap exists between student assumptions in this regard and the reality concerning employment opportunities and incentives following TVET. As a result of these factors, Saudi Arabia's private sector capability has plummeted because of poor skill-matching job creation policies that have not taken the demands and realities of the economy sufficiently into account (Mellahi 2000).

Global socio-economic context of TVET

Much of the vocational education literature has highlighted its positive effects on economic development. 
However, several studies have also examined the factors that negatively affect students' ambition to enroll in vocational education, which appear to be generally rooted in the lower social prestige associated with TVET. Students' willingness to engage in TVET has been found to be closely linked to prevailing social and economic attitudes (Alasmeri 2012). Olesel (2010) studied the TVET situation in Australia and found that young people from lower socio-economic backgrounds were more likely to pursue this route than those from wealthier backgrounds. An unwillingness to clearly articulate the value of vocational education relative to traditional education, or to promote an integrated view of education, has led students to attach less importance to vocational education. Chankseliani et al. (2016) reported that the divide between academic and vocational education remains strong in the United Kingdom, so that those who pursue vocational education are perceived less positively in society than university students. Studies on the situations in Ghana and Nigeria have focused on the low public perception of TVET programs in many African countries, where participants were viewed as having low intellectual ability and were assumed to be school dropouts or illiterate (Aryeetey et al. 2011). Essel et al. (2014) similarly reported that many Ghanaian parents discourage their children from pursuing TVET programs due to the limited academic opportunities and lack of societal prestige with which these programs are associated. They attributed the stigmatization of TVET in Ghana and other African countries to the centering of colonial education in the humanities and a postcolonial drive to increase the proportion of white-collar workers and intellectuals. Similarly, Kennedy (2011) found that Nigerian students did not pursue TVET as their first choice due to its reduced social status and prestige in their community, a view reinforced by family and peers. 
As he asserted, "vocational and technical education has remained a subordinate discipline in terms of societal recognition, adequate funding, and parental/children's choice" (Kennedy 2011: 172). Agrawal and Agrawal (2017) indicated that in many Asian countries, TVET is viewed as providing opportunities for poor families with lower social prestige. They studied vocational training in India and showed that despite greater returns from vocational education than from academic education, perceptions of TVET remain generally negative in Indian society. Ayub (2017) similarly cited the low social prestige of TVET in Pakistan and reported that parents had a statistically significant role in influencing students' decisions to enroll in such programs, whereby less educated parents with lower income and occupation levels were more inclined to encourage their children to pursue TVET. There is a dearth of studies on the socio-economic context of TVET in the Arab Gulf states. However, Bilboe (2011) examined the primary factors contributing to the low numbers of students attending technical and vocational institutions in Kuwait and found that 51% of the study participants had not chosen vocational education as their first option, and most saw vocational education as providing limited preparation for the labor market. Rather, attending university was considered to offer better prospects for achieving higher socio-economic status. These results suggest that being ill-informed about what vocational education can offer is likely to contribute to decreased student interest in enrollment. Alnaqbi (2016) focused on attitudes toward vocational education and training in the United Arab Emirates and found that lower socio-demographic groups had less confidence and stigmatized themselves as being at the bottom of the social hierarchy. Participants' negative images of TVET were largely influenced by parental choice and a desire for higher job salaries. 
Socio-economic context of TVET in Saudi Arabia

The stigmatization of TVET

Vocational education does not operate in a vacuum but can be conceptualized as part of an overall system that may be culturally deeply rooted and strongly affected by the environmental constraints of a given country or region (Mellahi 2000). Saudi Arabian vocational training values and interests are very different from those in some developed countries (Alasmeri 2012), as Saudi Arabia faces major obstacles that limit the functioning of a successful vocational training system. As in many other Arab countries, a significant stigma is still attached to TVET pathways (Sultana 2017). A large proportion of Saudi society, especially tribal communities, refuses to engage in specific occupations that conflict with its beliefs and ideas (Mellahi 2000). According to Thompson (2018: 75), "the real issue is not competition, but rather that young Saudis have not been educated to accept new socio-economic realities." As Thompson (2018: 77) observed, "companies are begging young Saudi men to start doing manual work, but they refuse because it remains culturally unacceptable." Employability and salary also play a part in students' attitudes toward joining TVET. Madhi and Barrientos (2003) conducted a study to identify the conditions affecting employment and career development in Saudi Arabia. They found that available employment and career opportunities were strongly differentiated according to nationality and gender. The significant differences in career opportunities, mobility, conditions of work, and pay between Saudi and non-Saudi employees appear to have consolidated negative social attitudes among the former toward technical and vocational training. According to Mellahi (2000), "the distortion between wages for administrative jobs and skill jobs affect individuals' incentives to invest in vocational skills." 
The stigmatizing of TVET in Saudi Arabia is also linked to factors of social prestige, which derive overwhelmingly from parental choices. Alandas (2002) studied the attitudes of freshmen in Saudi technical colleges and found that their fathers' preferences were the primary factor influencing students' pursuit of TVET. However, the results did not indicate any statistically significant differences linked to parental academic level and income.

Gendered restrictions in the labor market

Many women have taken advantage of the Saudi Arabian government's efforts to increase academic educational opportunities, and women now constitute almost 53% of students enrolled in Saudi Arabian universities (Koyame-Marsh 2016). However, the participation of educated women in the labor force remains relatively low. In 2015, 68% of unemployed Saudi women had bachelor's degrees or higher, compared with only 21% of unemployed men (Koyame-Marsh 2016). Women in Saudi Arabia have traditionally been restricted to specific domains such as home economics, education, and nursing (Alfarran et al. 2018). Mellahi (2000) identified "misunderstanding of Islamic teaching, culture, and social tradition" as key factors hindering women's participation in the labor force. For many years, families did not allow women to work in physically demanding jobs such as factory production lines, leaving women with rather limited employment options, such as corporate secretarial work or employment in the service and sales industries. This constraint reduced the willingness of women to enroll in vocational education programs. Calvert and Al-Shetaiwi (2002) examined the mismatch between technical and vocational skills and jobs for women in Saudi Arabia. In that study, a survey was distributed to 220 private sector business managers in four large cities to determine what factors they believed were important in affecting women's decisions to choose TVET and work in the private sector. 
The results revealed that managers saw the main factors affecting women's decisions to pursue TVET as related to how technical and vocational training is structured rather than to women's preferences or societal pressures.

Processes and instrument

This study used a self-developed questionnaire to assess the influence of the economic and social transformations embodied in Vision 2030 in changing the attitudes of Saudi youth toward vocational and technical education. The design of the survey adopted concepts from Sen's (1992, 1993, 1999, 2009) capability approach with a focus on economic, political, and social factors. According to Sen (1992), social arrangements and policies should concentrate on what people are able to do and be, on the quality of their life, and on eliminating barriers in their lives so that they have more freedom to live the kind of life that they have reason to value. The items of the survey were developed in three phases. First, the researcher reviewed the policies of Vision 2030 that relate to TVET, including employment, foreign investment, women's empowerment, motivations and incentives, parent and student awareness, community effect, and social media (Saudi Vision 2030 n.d.). Nieuwenhuis and Shapiro (2004) proposed that developing TVET policies under constant and persistent pressure to reform will expand participation and reduce its negative image and stigma. The second phase focused on triangulating the policies of Vision 2030 with several studies in the literature that focused on socio-economic and political factors influencing TVET, as well as incorporating studies utilizing a capability approach (see, for example, Alandas 2002; Madhi and Barrientos 2003; Lambert and Vero 2013; McGrath 2012; Powell and McGrath 2014; Alnaqbi 2016; Sultana 2017). In the third phase, the preliminary survey was discussed with ten experts in the TVET field to refine it, and their comments were taken into account. 
Statistical analyses and percentages were derived from the questionnaire responses. The questionnaire consisted of two parts. In part one, socio-demographic data related to gender, marital status, monthly family income, study program, and parents' educational levels were gathered. Part two comprised an economic scale and a socio-cultural scale. In combination, the scales consisted of 25 items, and responses were measured on continuous 5-point Likert-type scales, ranging from (1) strongly disagree to (5) strongly agree. The first scale comprised 12 items to assess the participants' views of the economic effects of Vision 2030 and how these policies have contributed to reducing the stigma attached to TVET. The second scale comprised 13 items that focused on participants' perceptions of the socio-cultural effects of Vision 2030 and how these policies have influenced participation in TVET. Table 1 presents the items from the two scales.

Table 1 Survey questions: influence of economic and social transformations on TVET participation

Population and sample

A total of 161,091 Saudi students (female = 24,154, male = 136,937) were studying at thirty different technical colleges across the country. The unified nature of the college admission system means that the composition of student bodies in different colleges does not vary significantly. In theory, the optimal course would have been to choose a cross-section of students from all thirty colleges, but this would have been far beyond the capabilities and resources of the researchers. Students from three technical colleges in Riyadh, Dammam, and Al-Ahsa were chosen, with a total of 27,510 students (15,577, 5670, and 6263, respectively). These colleges were chosen for three main reasons: they are located in major industrial and commercial cities, they attract students from across the country, and the researchers had personal contacts in these schools to facilitate their research efforts. 
The study utilized a formula suggested by Kotrlik and Higgins (2001), and the estimated sample size was 384 students. $$n_{0} = \frac{{\left( t \right)^{2} \left( p \right)\left( q \right)}}{{\left( d \right)^{2} }}$$ $$n_{0} = \frac{{\left( {1.96} \right)^{2} \left( {0.5} \right)\left( {0.5} \right)}}{{\left( {0.05} \right)^{2} }} = 384$$ where \(n_{0}\) is the required sample size, t = 1.96 is the critical value for an alpha level of 0.05 in each tail, (p)(q) = 0.25 is the estimate of variance, and d = 0.05 is the acceptable margin of error. To ensure that the required sample size was obtained, the research assistants were given 1250 paper-based questionnaires to distribute randomly in the selected cities. In total, 1016 questionnaires were returned; nine were discarded as incomplete, leaving 1007 completed questionnaires. Rusticus and Lovato (2014) assert that increasing sample size enhances the power and precision of a study. The sample included both students studying for a two-year diploma and those taking a bachelor's degree. Of the 27,510 students, 2667 were enrolled in bachelor's degree programs in Riyadh, Dammam, and Al-Ahsa (1622, 950, and 95, respectively). The students ranged in age from 18 to 24, a comparatively small range. All respondents gave written informed consent to participate in the study, and their anonymity was ensured. Ethical approval was granted by King Saud University, and the Saudi Arabia Technical and Vocational Training Corporation granted ethical approval and permission for data collection. The researcher and three assistants administered the paper-based questionnaires by visiting classrooms and laboratories to distribute them. Following data collection, the responses were analyzed using SPSS version 24.0. Data entry was reviewed by a third party to ensure no mistakes were made or values omitted. 
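The sample-size formula above can be checked with a short sketch (the function name is ours, not the paper's; values follow the stated assumptions t = 1.96, p = q = 0.5, d = 0.05):

```python
def required_sample_size(t=1.96, p=0.5, d=0.05):
    """Cochran-style estimate as used by Kotrlik and Higgins (2001):
    n0 = t^2 * p * (1 - p) / d^2."""
    q = 1.0 - p
    return (t ** 2) * p * q / (d ** 2)

n0 = required_sample_size()
print(round(n0))  # 384
```

The raw value is 384.16, which rounds to the 384 students reported in the study.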
Descriptive statistics were calculated for the independent variables and for all survey responses, and frequency analysis recorded means, standard deviations, and percentages for all responses. Mann–Whitney U tests were conducted to identify significant differences in responses based on gender or study program, and analysis of variance (ANOVA) was conducted to assess differences based on the other socio-demographic variables. Skewness and kurtosis were assessed, and Tukey post hoc tests were performed for multiple comparisons. For each independent variable (gender, marital status, study program, family income, father's level of education, and mother's level of education), the ANOVA model was: H0: μ1 = μ2 = ⋯ = μk; H1: not all group means μj are equal. The ANOVA equations include: \(SS_{total} = \sum\nolimits_{j = 1}^{k} {\sum\nolimits_{i = 1}^{n_{j}} {\left( {X_{ij} - \bar{X}} \right)^{2} } }\), the sum of squares of all observations in all groups about the grand mean; \(SS_{within} = \sum\nolimits_{j = 1}^{k} {\sum\nolimits_{i = 1}^{n_{j}} {\left( {X_{ij} - \bar{X}_{j} } \right)^{2} } }\), the sum of squares of observations about their own group mean within each individual group; and \(SS_{between} = \sum\nolimits_{j = 1}^{k} {n_{j} \left( {\bar{X}_{j} - \bar{X}} \right)^{2} }\), the sum of squares of the group means about the grand mean, so that \(SS_{total} = SS_{between} + SS_{within}\). 
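The sum-of-squares decomposition underlying one-way ANOVA can be verified numerically with a minimal sketch (the data are illustrative, not the study's responses):

```python
def anova_ss(groups):
    """One-way ANOVA sum-of-squares decomposition.
    groups: list of lists of observations, one inner list per group.
    Returns (SS_total, SS_between, SS_within)."""
    all_x = [x for g in groups for x in g]
    grand = sum(all_x) / len(all_x)  # grand mean over every observation
    ss_total = sum((x - grand) ** 2 for x in all_x)
    ss_within = sum(
        sum((x - sum(g) / len(g)) ** 2 for x in g) for g in groups
    )
    ss_between = sum(
        len(g) * (sum(g) / len(g) - grand) ** 2 for g in groups
    )
    return ss_total, ss_between, ss_within

# three illustrative groups of Likert-style scores
ss_t, ss_b, ss_w = anova_ss([[4, 5, 5], [3, 4, 4], [2, 3, 3]])
print(ss_t, ss_b, ss_w)  # 8.0 6.0 2.0, and 8.0 = 6.0 + 2.0
```

The identity SS_total = SS_between + SS_within holds for any grouping, which is what the F ratio in ANOVA is built on.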
For Mann–Whitney U tests, the equations are \(U_{1} = R_{1} - \frac{{n_{1} \left( {n_{1} + 1} \right)}}{2}\) and \(U_{2} = R_{2} - \frac{{n_{2} \left( {n_{2} + 1} \right)}}{2}\), where U1 and U2 are the statistics for the two respective groups (by gender or study program), R represents the sum of ranks in the sample, and n is the number of items in the sample. Multiple regression analyses were conducted to investigate whether socio-economic variables had a significant effect on responses to the total questionnaire as well as responses to individual scales. Regression coefficients were analyzed to identify which specific variables had the largest and most significant effects. The regression model was: Y = b0 + b1X1 + b2X2 + … + b6X6, where Y is the predicted value of the dependent variable (scores) and X1 through X6 represent the independent variables of gender, marital status, study program, family's monthly income, father's level of education, and mother's level of education; b0 denotes the value of Y when all of the independent variables are equal to zero (representing the value regardless of socio-demographic status), and b1 through b6 are the estimated regression coefficients.

Reliability and validity

The survey was subjected to reliability testing before the analyses were conducted. The stability, or test–retest reliability, of the survey instrument was assessed through pilot testing with 85 students, and the participants in this pilot testing were excluded from the main study. Acceptable values of Cronbach's alpha are generally > 0.70 (Gliem and Gliem 2003). This analysis determined which items were inappropriate within their currently assigned scale and guided the re-assignment or deletion of items, resulting in a total of 12 questions for scale 1 and 13 questions for scale 2. 
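The rank-sum computation behind the U statistic can be sketched in pure Python (illustrative code, not the study's; tied values receive their average rank, as statistical packages do):

```python
def mann_whitney_u(sample1, sample2):
    """U_i = R_i - n_i(n_i + 1)/2, with tied values given average ranks."""
    pooled = sorted((v, idx) for idx, v in enumerate(sample1 + sample2))
    ranks = [0.0] * len(pooled)
    i = 0
    while i < len(pooled):
        j = i
        # extend j over the run of tied values starting at i
        while j + 1 < len(pooled) and pooled[j + 1][0] == pooled[i][0]:
            j += 1
        avg_rank = (i + j + 2) / 2  # average of 1-based ranks i+1 .. j+1
        for k in range(i, j + 1):
            ranks[pooled[k][1]] = avg_rank
        i = j + 1
    n1, n2 = len(sample1), len(sample2)
    r1 = sum(ranks[:n1])           # rank sum of the first group
    u1 = r1 - n1 * (n1 + 1) / 2
    u2 = n1 * n2 - u1              # since U1 + U2 = n1 * n2
    return u1, u2

print(mann_whitney_u([1, 2, 2], [2, 3, 4]))  # (1.0, 8.0)
```

The identity U1 + U2 = n1·n2 gives U2 without recomputing the second rank sum.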
The alpha coefficient obtained for the revised scale was 0.976, and the standardized Cronbach's alpha (each item scored with zero mean and unit variance) was nearly identical at 0.977. Reliability analyses for the individual scales yielded Cronbach's alpha values of 0.956 for scale 1 and 0.953 for scale 2. The factorability of the 25 final questionnaire items was analyzed in SPSS, and several well-recognized criteria for the factorability of a correlation were used. First, each item correlated at least 0.5 with all other items on the questionnaire. Second, the Kaiser–Meyer–Olkin measure of sampling adequacy was 0.98, well above the commonly recommended value of 0.6, and Bartlett's test of sphericity was significant (χ2 (300) = 23,996.43, p = 0.000). In addition, the communalities all equaled one. Initial eigenvalues extracted from principal components analysis identified two factors with values over one: the first factor measured 8.825 and the second 8.420. The two-factor solution, which explained 68.9% of the variance, was selected due to the leveling off of eigenvalues after two factors. A Varimax rotation with Kaiser normalization resulted in two factors, as shown in Table 2.

Table 2 Results of Varimax with Kaiser normalization

Table 3 presents the sample's socio-demographic statistics. Of the 1007 participants, approximately 56% were male, and the majority were single individuals studying for two-year diplomas. Those whose families earned SR 10,000 (USD $2667) or less per month constituted 72.1% of respondents, and most of the participants' fathers (60.8%) and mothers (49.5%) had attained at least a high school education level.

Table 3 Socio-demographic statistics

All respondents entered responses to each question. 
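The internal-consistency statistic reported here can be illustrated with a minimal Cronbach's alpha sketch (toy data, not the study's responses):

```python
def cronbach_alpha(items):
    """items: one list per questionnaire item, aligned by respondent.
    alpha = k/(k-1) * (1 - sum of item variances / variance of totals).
    Population variances are used; the choice cancels in the ratio."""
    k = len(items)
    n = len(items[0])

    def var(xs):
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / len(xs)

    totals = [sum(item[i] for item in items) for i in range(n)]
    return k / (k - 1) * (1 - sum(var(item) for item in items) / var(totals))

# two perfectly consistent items give the maximum alpha of 1.0
print(cronbach_alpha([[1, 2, 3], [1, 2, 3]]))  # 1.0
```

Values near 1, like the 0.976 reported above, indicate that the items vary together rather than independently.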
The results of a two-tailed Fisher's F test demonstrated that the mean values for both scales were similar (scale 1: M = 4.107, SD = 0.09; scale 2: M = 4.097, SD = 0.086), with no significant difference between the means of responses to scale items (Table 4).

Table 4 Comparison of means for economic (scale 1) and social (scale 2) attributes

Influence of socio-demographic variables

Table 5 summarizes the mean values and standard deviations for the entire questionnaire as well as each subscale. Considering the maximum possible scores of 60 and 65 for scales 1 and 2, respectively, overall responses were positive. However, some responses to individual questions merit attention. For example, responses to question 11 on scale 2 revealed that almost 82% of respondents agreed or strongly agreed that increased support for the integration of women into TVET has helped to reduce negative perceptions of training in this field. Similarly, almost 78% of respondents agreed or strongly agreed that the Saudi Arabian government's use of media to improve perceptions of the social status of technical and vocational college graduates and their role in social advancement (scale 2, Q7) had motivated their interest in technical and vocational education.

Table 5 Mean questionnaire and subscale scores according to socio-demographic group

When comparing total scores across both scales, mean scores for women and respondents studying for two-year diplomas were significantly higher than those for men and respondents studying for bachelor's degrees. Mann–Whitney U tests showed significantly higher scores for females than for males on each scale (Scale 1: U = 93,130, p = 0.000, r = 0.220; Scale 2: U = 91,978.5, p = 0.000, r = 0.229), and the differences between scores based on study course were also significant (Scale 1: U = 22,341.0, p = 0.000, r = 0.188; Scale 2: U = 20,778.5, p = 0.000, r = 0.208). 
Although both groups expressed optimistic views, females' ratings were consistently higher than those of males. Pairwise comparisons between males and females showed that the largest differences concerned item 11 on scale 2, regarding the role of increased support for women's participation in improving societal perceptions of TVET (M = 3.8, F = 4.4), and item 7 on scale 1, regarding the ease of finding employment after TVET graduation (M = 3.5, F = 4.2): 69% of males agreed or strongly agreed with the former compared with 90% of females, and 63% of males agreed or strongly agreed with the latter compared with 83.9% of females. Means comparisons for the other groups did not indicate any significant between-group differences for either the whole test or for individual scales, with the exception of mother's level of education, for which ANOVA results indicated a significant effect on total scores (F(3, 1003) = 3.04, p = 0.028). As shown in Fig. 1, participants whose mothers had a bachelor's degree reported lower mean values across both scales; however, Tukey multiple comparisons revealed no significant differences between groups. Because the Tukey post hoc test is more conservative and attempts to control the overall alpha level, it was decided to conduct individual independent-samples t-tests between those whose mothers had a bachelor's degree and those whose mothers had other education levels. The results showed significant differences in total scores for both scales between those whose mothers had a bachelor's degree and those whose mothers had less than a high school diploma (t(701) = 2.426, p = 0.011), those whose mothers had high school diplomas (t(442) = 2.067, p = 0.039), and those whose mothers had earned two-year diplomas (t(246) = 2.27, p = 0.024). 
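The reported degrees of freedom (e.g. t(701), t(442), t(246)) are consistent with a pooled-variance independent-samples t-test, df = n1 + n2 − 2, which can be sketched as follows (illustrative data, not the study's):

```python
import math

def pooled_t(sample1, sample2):
    """Student's independent-samples t with pooled variance;
    degrees of freedom df = n1 + n2 - 2."""
    n1, n2 = len(sample1), len(sample2)
    m1, m2 = sum(sample1) / n1, sum(sample2) / n2
    ss1 = sum((x - m1) ** 2 for x in sample1)
    ss2 = sum((x - m2) ** 2 for x in sample2)
    sp2 = (ss1 + ss2) / (n1 + n2 - 2)   # pooled variance estimate
    t = (m1 - m2) / math.sqrt(sp2 * (1 / n1 + 1 / n2))
    return t, n1 + n2 - 2

t, df = pooled_t([1, 2, 3], [2, 3, 4])
print(round(t, 3), df)  # -1.225 4
```

With the study's group sizes, two groups of 350 and 353 respondents would give the df = 701 reported for the first comparison.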
Although mean values exceeded 3.50 for all responses, pairwise comparisons confirmed that mean response values for participants whose mothers had a bachelor's degree were consistently lower than those in other education categories across both scales. Compared with respondents whose mothers held a two-year diploma or a high school degree, this variable had a significant negative effect (p < 0.05) on several items on scale 1, including items concerning the positive effects of providing financial incentives in the private sector (Q4), the National Transition Plan (Q6), and the ease of gaining employment after graduation (Q7). In fact, such respondents' mean response for the last item was the lowest of all questionnaire items (3.52). On scale 2, the only items for which the effect of having a mother with a bachelor's degree was not significant were question 8, concerning the effect of private companies offering good financial incentives, and question 12, concerning the existence of special projects for technical and vocational training graduates. Many of the scale 2 questions for which this variable had a significant negative effect (p < 0.05) on responses were related to societal perceptions of technical and vocational training. Indeed, the item that showed the greatest negative effect of having a mother with a bachelor's degree on both scales was question 11, regarding the effect that supporting women in TVET had on improving societal perceptions. In contrast, having a father with a two-year diploma had a positive and significant effect (p < 0.05) on one-third of the questions on scale 1 and 39% of the questions on scale 2, including scale 1 questions regarding economic shifts, new labor laws, and the diversifying economy.
Notably, having a father with a two-year diploma also had a positive influence on responses to scale 2 questions regarding parents' increased awareness of the importance of technical and vocational training (Q5) and the role of increased support for women in reducing the stigma of TVET (Q11). Also notable was the significant negative effect (p < 0.05) of being divorced on responses to scale 1 items, namely, motivation due to company incentives (Q4) and young adult motivation due to an increased number of national projects (Q12).

Fig. 1 Mean values for total questionnaire scores based on mother's level of education

Pairwise comparisons showed that the significance of family income varied across all items; however, the mean values of the responses of respondents whose families earned less than SR5,000 (about USD 1,333) per month were consistently lower than those from higher-income families regardless of gender. Having a family monthly income of SR5,000 or less had a significant negative effect (p < 0.05) on responses to half of the scale 1 items and 39% of the scale 2 items. Specifically, in terms of the economic dimension (scale 1), low income had a significant negative effect on responses to whether there had been increased job opportunities due to Vision 2030 and the National Transition Plan (2020) (scale 1: Q1 and Q6). However, this group provided more positive responses regarding whether providing financial incentives (Q4), an increased number and quality of national projects (Q11 and Q9, respectively), and economic diversification (Q12) had a positive effect on TVET participation. On scale 2 (social factors), low income had a significant negative effect on several items relating to the stigma attached to TVET, including items 4, 11, and 12. A family income of SR5,000–10,000 had a significant negative effect (p < 0.01) only on responses to items concerning the positive impacts of Vision 2030 on technical and professional functions (Q4, scale 2).
Tables 6, 7, and 8 present the results of multiple regression analyses conducted to identify the effects of socio-economic status on questionnaire results. As shown in Table 6, the combined effect of all socio-economic variables was significant both for the total questionnaire responses (F(6, 1000) = 15.64, p = 0.000) and for the individual scales. However, the low R-squared values indicate that additional socio-economic variables may also contribute to predicting the questionnaire results.

Table 6 Multiple regression summary
Table 7 Coefficients for socio-economic variables and total questionnaire responses
Table 8 Coefficients for socio-economic variables and individual scales

When individual variables were considered, Table 7 shows that gender was the greatest predictor of overall questionnaire responses, followed by the respondents' study program levels; however, all variables except marital status and father's level of education had a significant effect on total questionnaire responses. Table 8 shows a similar pattern with regard to the variables that were the greatest predictors of participants' responses to the individual scales as well as the variables that did not have any significant impact. The survey results indicate that, overall, the respondents were optimistic about the economic and social transformation in terms of its impact on Saudi students' attitudes toward TVET. However, women and respondents studying for two-year diplomas had a significantly more positive perception of the economic and socio-cultural effects of Vision 2030 in changing their attitudes toward TVET than men and individuals with higher education levels.
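A multiple-regression setup of the kind summarized in Tables 6–8 can be sketched with ordinary least squares. The dummy-coded predictors below (gender, program level, income band, parents' education, marital status) are hypothetical stand-ins for the real survey variables; only the sample size of 1,007 is taken from the study.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 1007  # the study's sample size; everything else here is simulated

# Hypothetical dummy-coded socio-demographic predictors (six columns).
X = rng.integers(0, 2, size=(n, 6)).astype(float)
beta_true = np.array([3.0, -2.0, -1.0, -0.5, 0.0, 0.0])
y = 100 + X @ beta_true + rng.normal(0, 10, n)  # simulated total scores

# Ordinary least squares with an intercept column.
X1 = np.column_stack([np.ones(n), X])
coef, *_ = np.linalg.lstsq(X1, y, rcond=None)

# R-squared: proportion of score variance explained by the predictors.
resid = y - X1 @ coef
r_squared = 1 - resid.var() / y.var()
print("coefficients:", np.round(coef, 2))
print("R^2 =", round(r_squared, 3))
```

With weak true effects and noisy outcomes, the fitted R-squared stays small, which mirrors the paper's observation that the socio-economic variables are jointly significant yet explain little of the variance.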
In contrast, those at the lowest income level (< SR5,000) had a relatively less positive outlook on the effects of the economic and social transformation than other income groups, and those whose mothers had the highest level of education (bachelor's degree) tended to be relatively less optimistic than those whose mothers were less educated, particularly regarding the socio-cultural effects of government initiatives concerning TVET participation. The distribution of responses to questions 4, 11, and 12 in scale 2 indicated relatively less optimism regarding the potential of economic and social transformation to eliminate the social stigma attached to TVET and participation in it among lower income groups, men, and individuals whose mothers had received higher education. These results partly align with Sultana's (2017) and Thompson's (2018) observations concerning continued negative perceptions of TVET in Arab societies, together with numerous other studies that highlight prejudice toward vocational education in both developing and developed countries (Agrawal and Agrawal 2017; Aryeetey et al. 2011; Chankseliani et al. 2016; Remington 2018). Regarding lower income groups' negative perceptions of TVET, the studies of Olesel (2010) and Ayub (2017) disagree with this result, indicating that young people from lower socio-economic backgrounds were more likely than students from wealthier backgrounds to participate in vocational education and training. The respondents from lower-income groups in this study may therefore prefer careers with satisfactory wages that could alleviate their poverty and change their social class, which deters them from choosing TVET. Johannesen-Schmidt and Eagly (2002) examined the relationship between income and social classification and found that when income increased, social stereotypes increased strongly in positive agentic characteristics.
Although Alnaqbi (2016) found that lower socio-demographic groups have less confidence and stigmatize themselves as being at the bottom of the social hierarchy, thereby increasing the potential of their considering TVET, poor salary levels were an important factor in creating this negative self-image. From the point of view of "capabilities," when individuals focus more on employment with attractive incentives as a valuable opportunity, they may choose to enroll in TVET. As quoted earlier, Bonvin and Galster (2010) state that "employability is more than the ability to access work; it is about the real freedom to choose the job one has reason to value" (p. 72). The study by Madhi and Barrientos (2003) revealed that lower career opportunities and wages appear to have consolidated negative social attitudes among Saudis toward technical and vocational training. In the case of parental educational level, the results showed that respondents whose mothers had the highest level of education (bachelor's degree) had a negative perception of TVET. Such students and their parents might be more ambivalent regarding the benefits of technical and vocational training, although it could represent an opportunity for those with lower family incomes and more limited choices. While Alandas's (2002) study indicates that there were no statistically significant differences concerning parental academic level and income in students' choice of TVET, it showed that the stigmatization of TVET was linked to parental influence on students' decisions. Similarly, both Alnaqbi (2016) and Ayub (2017) reached the same conclusion regarding the influence of parental choice on students' decisions: as long as the freedom to make decisions is absent, students will be far less likely to pursue TVET.
Sen (1992) states that a person's capability relies on the freedom to lead different sorts of lives, through being able to choose among different ways of living and different ways of 'being and doing'. He also asserts that an individual's choice should not be linked to social or occupational stigmatization, demotion, or decrease in status (Sen 2009). From the perspective of capability, therefore, the more a prospective student takes account of their parents' educational level and feelings of social prestige (the conversion factors), the less likely they are to feel able to choose TVET. However, given the high average mean value for scale 2, it seems clear that many of those who actually participate in such programs feel hopeful that such negative views are changing, and based on the questionnaire responses, this is at least partly due to the government's initiatives. Bilboe (2011) found that being ill-informed concerning what vocational education can offer is likely to contribute to decreased student interest in enrolment. Increasing students' awareness of the importance of TVET through social media, teachers, and other channels may help to reduce the stigma toward TVET. The results indicate particular optimism regarding the role that supporting women's participation in TVET can play in reducing the stigma attached to it. This contrasts with the findings of Calvert and Al-Shetaiwi (2002), mentioned earlier, that the main factors affecting women's take-up of TVET were related to how technical and vocational training is structured rather than to women's preferences or societal pressures. Our results also indicate a more optimistic view among students concerning the match between women's skills and technical and vocational jobs than was the case among the managers who participated in Calvert and Al-Shetaiwi's (2002) earlier study.
In the period 1995–1996, women comprised only 5% of TVET students (Calvert and Al-Shetaiwi 2002), whereas in this study they comprised over 40% of participants. Moreover, although 83% of those surveyed by Calvert and Al-Shetaiwi (2002) claimed that there were limited technical and vocational jobs for women in the private sector, almost the same proportion of the current study's sample agreed or strongly agreed that the Saudi Arabian government's increased support for women's participation in TVET had helped to improve social perceptions of that field, suggesting that the Saudi Arabian government is making progress both in improving opportunities for women and in reducing negative social perceptions of technical and vocational training and work. Therefore, supporting women by facilitating their needs would positively change their attitudes toward TVET. Sen (1992) sees capability, or opportunity for functionings, as the variable that should be equalized among individuals, taking their varied characteristics into consideration. Even if men and women hold equal primary goods and are thus equally well-off, the women might still be at a disadvantage due to personal characteristics or gender roles. Moreover, as noted, the mean values for both scales demonstrated high levels of positive views among all respondents regarding both the economic and the socio-cultural effects of Saudi Arabia's efforts to advance technical and vocational training. This study can assist policy makers in updating legislation to promote relevant policies related to TVET. Specifically, it may contribute toward developing strategies to improve the societal view of vocational and technical training.
Effective policies and strategies can concentrate on methods of teaching and training, improvement of educational infrastructure and environment, effective use of media and communications, and developing partnerships between companies in the private sector and vocational and technical training systems. Moreover, effective vocational guidance in secondary schools could be introduced to ensure that students are equipped with the skills, knowledge, experience, and attitudes required to make sound career decisions. Regarding the first recommendation, as part of the National Reforms in Vocational Education and Training and Adult Learning, Poland's TVET system increased the number of practical training hours to improve teaching and training methods in 2016 (Doe 2019). To address the second recommendation, the Saudi government can look to examples such as Georgia, where measures to improve the vocational education learning environment have included developing modern infrastructure and providing access to advanced technologies and facilities (Ministry of Education, Science, Culture and Sport of Georgia 2013). Finally, the European Business Forum 2014 proposed measures to improve collaboration with the private sector (ICF 2014), noting that such partnerships should be informed by the demands of the labor market as well as students' interests. This study has some limitations. The survey failed to ask students about which course they were studying. This information would have been useful in casting more light on the attitudes and responses of students studying different courses within the overall technical sector. This is a limitation which could be taken into account by future researchers. The study also focused on the attitudes of students currently enrolled in TVET programs and did not explore the views of young adults and others not participating in such training. 
In addition, this study did not obtain data on current employment rates for TVET program graduates, including women's employment rates. Future research should expand the sample to include young adults who are either enrolled in non-technical schools or not attending school, and longitudinal analyses would help to determine whether students' views change after they have graduated and entered the employment market.

TVET: Technical and vocational education and training

References

Agrawal T, Agrawal A (2017) Vocational education and training in India: a labour market perspective. J Voca Educ Train 69:246–265. https://doi.org/10.1080/13636820.2017.1303785
Aizenman J, Jinjarak Y, Ngo N, Noy I (2018) Vocational education, manufacturing, and income distribution: international evidence and case studies. Open Econ Rev 29:641–664. https://doi.org/10.1007/s11079-017-9475-7
Al Omran A (2018) Record numbers of foreign workers leave Saudi Arabia. Financial Times. https://www.ft.com/content/c710cf30-8441-11e8-a29d-73e3d454535d. Accessed 25 Sep 2019
Alandas S (2002) Attitudes of freshmen in Saudi technical colleges toward vocational-technical education. Dissertation, Ohio State University
Alasmeri M (2012) Acceptance of vocational industrial training between the culture of society and the future career: an applied study on the Saudi-Japanese institute in Jeddah. J of Kin Abdul Univ Econo Admin 105:1–113. https://doi.org/10.4197/Eco.26-2.2
Aldossari M, Bourne D (2016) Nepotism and turnover intentions amongst knowledge workers in Saudi Arabia. In: Jemielniak D (ed) The laws of the knowledge workplace: changing roles and the meaning of work in knowledge-intensive environments. Routledge, London, pp 25–34
Alfarran A, Pyke J, Stanton P (2018) Institutional barriers to women's employment in Saudi Arabia. Equ Dive Inclu Inter J 37:713–727. https://doi.org/10.1108/EDI-08-2017-0159
Alnaqbi S (2016) Attitudes towards vocational education and training in the context of United Arab Emirates: a proposed framework. Inter J Bus Manag 11:31–38.
https://doi.org/10.5539/ijbm.v11n1p31
Al-Rasheed M (2002) A history of Saudi Arabia. Cambridge University Press, New York
Alshuwaikhat H, Mohammed I (2017) Sustainability matters in national development visions—evidence from Saudi Arabia's vision for 2030. Sustain 9:408. https://doi.org/10.3390/su9030408
Arabi K (2018) The impact of human capital on Saudi economic growth: emphasis on female human capital. Arch Busin Res 6:189–203. https://doi.org/10.14738/abr.612.5588
Aryeetey E, Doh D, Andoh P (2011) From prejudice to prestige: vocational education and training in Ghana. City and Guilds Centre for Skills Development; Council for Technical and Vocational Education and Training (COTVET-Ghana). https://unevoc.unesco.org/go.php?q=UNEVOC+Publications&lang=en&null=ft&null=INS&akt=id&st=&qs=5866. Accessed 29 Aug 2019
Ayub H (2017) Parental influence and attitude of students towards technical education and vocational training. Inter J Infor Educ Tech 7:534–538. https://doi.org/10.18178/ijiet.2017.7.7.925
Belcher J, DeForge B (2012) Social stigma and homelessness: the limits of social change. J Huma Behav Soci Envir 22:929–946. https://doi.org/10.1080/10911359.2012.707941
Bilboe W (2011) Vocational education and training in Kuwait: vocational education versus values and viewpoints. Intern J Train Res 9:256–260. https://doi.org/10.5172/ijtr.9.3.256
Bonvin J, Galster D (2010) Making them employable or capable? Social integration policies at a crossroads. In: Otto H, Ziegler H (eds) Education, welfare and the capabilities approach: a European perspective. Barbara Budrich, Farmington Hills, pp 71–84
Bosbait M, Wilson R (2005) Education, school to work transitions and unemployment in Saudi Arabia. Mid East Stud 41:533–546. https://doi.org/10.1080/00263200500119258
Calvert J, Al-Shetaiwi A (2002) Exploring the mismatch between skills and jobs for women in Saudi Arabia in technical and vocational areas: the view of Saudi Arabian private sector business managers.
Inter J Train Deve 6:112–124. https://doi.org/10.1111/1468-2419.00153
Chankseliani M, James Relly S, Laczik A (2016) Overcoming vocational prejudice: how can skills competitions improve the attractiveness of vocational education and training in the UK? Brit Educ Res J 42:582–599. https://doi.org/10.1002/berj.3218
Dang A (2014) Amartya Sen's capability approach: a framework for well-being evaluation and policy analysis? Rev Soc Eco 72:460–484. https://doi.org/10.1080/00346764.2014.958903
Doe J (2019) National reforms in vocational education and training and adult learning. https://eacea.ec.europa.eu/national-policies/eurydice/content/national-reforms-vocational-education-and-training-and-adult-learning-50_en. Accessed 21 Feb 2020
Essel O, Agyarkoh E, Sumaila S, Yankson P (2014) TVET stigmatization in developing countries: reality or fallacy? Euro J Train Dev Stud 1:27–42
Falk G (2010) Stigma: how we treat outsiders. Prometheus Books, New York
Faudot A (2019) Saudi Arabia and the rentier regime trap: a critical assessment of the plan Vision 2030. Resou Poli 62:94–101. https://doi.org/10.1016/j.resourpol.2019.03.009
General Authority of Statistics (2019a) Main indicators of the labor market. https://www.stats.gov.sa/sites/default/files/labour_market_q1_2019_0.pdf. Accessed 29 Aug 2019
General Authority of Statistics (2019b) Population growth rate. https://www.stats.gov.sa/ar/indicators/1. Accessed 29 Aug 2019
Gliem R, Gliem J (2003) Calculating, interpreting, and reporting Cronbach's alpha reliability coefficient for Likert-type scales. Midwest research-to-practice conference in adult, continuing, and community education. Ohio State University, Columbus, pp 82–88
Gullekson M (1992) Stigma: families suffer too. In: Fink P, Tasman A (eds) Stigma and mental illness. American Psychiatric Press, Washington, pp 11–12
Hussain Z (2020) Nitaqat-Saudi Arabia's new labour policy: is it a rentier response to domestic discontent?
In: Rajan S, Oommen G (eds) Asianization of migrant workers in the Gulf countries. Springer, Singapore, pp 151–175
Hvidt M (2018) The demographic time bomb: how the Arab Gulf countries could cope with growing number of youngsters entering the job market. Videncenter Om Det Moderne Mellemøsten. https://findresearcher.sdu.dk:8443/ws/files/144658674/Hvidt_Demographic_Timebomb_Dec_18.pdf. Accessed 5 Sep 2019
ICF (2014) European business forum on vocational training: business & VET, partners for growth and competitiveness. http://ec.europa.eu/education/events/20140923-business-vet_en. Accessed 21 Feb 2020
Igarashi T, Acosta P (2018) Who benefits from dual training systems? Evidence from the Philippines. World Bank Policy Research Working Paper No. 8429. http://documents.worldbank.org/curated/en/576691525362185723/Who-benefits-from-dual-training-systems-evidence-from-the-Philippines. Accessed 15 Aug 2019
Johannesen-Schmidt M, Eagly A (2002) Diminishing returns: the effects of income on the content of stereotypes of wage earners. Pers Soc Psych Bull 28:1538–1545. https://doi.org/10.1177/014616702237581
Kennedy O (2011) Philosophical and sociological overview of vocational technical education in Nigeria. Inter J Acad Res Bus Soc Scie 1:167–175
Khan F, Aradi W, Schwalje W, Buckner E, Fernandez-Carag M (2017) Women's participation in technical and vocational education and training in the Gulf States. Inter J Train Res 15:229–244. https://doi.org/10.1080/14480220.2017.1374666
Khashan H (2017) Saudi Arabia's flawed "vision 2030". Mid Eas Quart 24(1):1–8
Kizu T, Kühn S, Viegelahn C (2018) Linking jobs in global supply chains to demand. Inter Labo Rev 158:213–244. https://doi.org/10.1111/ilr.12142
Kotrlik J, Higgins C (2001) Organizational research: determining appropriate sample size in survey research.
Info tech learn perf J 19(1):43–50
Koyame-Marsh S (2016) The dichotomy between the Saudi women's education and economic participation. J Devel Area 54:431–441. https://doi.org/10.1353/jda.2017.0026
Lambert M, Vero J (2013) The capability to aspire for continuing training in France: the role of the environment shaped by corporate training policy. Inter J Manpow 34:305–325. https://doi.org/10.1108/IJM-05-2013-0091
Looney R (2004) Saudization and sound economic reforms: are the two compatible? Strat Insigh 3(2)
Madhi S, Barrientos A (2003) Saudisation and employment in Saudi Arabia. Care Devel Inter 8:70–77. https://doi.org/10.1108/13620430310465471
Markowitz F (1998) The effects of stigma on the psychological well-being and life satisfaction of persons with mental illness. J Heal Soc Behav 39:335–347. https://doi.org/10.2307/2676342
McGrath S (2012) Vocational education and training for development: a policy in need of a theory? Inter J Educ Devel 32:623–631. https://doi.org/10.1016/j.ijedudev.2011.12.001
Mellahi K (2000) Human resource development through vocational education in gulf cooperation countries: the case of Saudi Arabia. J Vocat Educ Train 52:329–344. https://doi.org/10.1080/13636820000200119
Mellahi K, Wood G (2001) Human resource management in Saudi Arabia. In: Budhwar P, Debrah Y (eds) Human resource management in developing countries. Routledge, London, pp 135–152
Ministry of Education (2018) Directory of specializations in higher education institutions. Accessed 20 Aug 2019
Ministry of Education, Science, Culture and Sport of Georgia (2013) Vocational education and training development strategy. https://mes.gov.ge/. Accessed 20 Aug 2019
Nieuwenhuis L, Shapiro H (2004) VET system change evaluated: a comparison of Dutch and Danish VET reform. In: Descy P, Tessaring M (eds) Evaluation of systems and programmes.
Third report on vocational training research in Europe, background report no 3, Luxembourg, 2004
Nussbaum M (2007) Capabilities as fundamental entitlements: Sen and social justice. In: Kaufman A (ed) Capabilities equality: basic issues and problems. Routledge, New York, pp 54–80
Olesel J (2010) Vocational education and training (VET) and young people. Educ Train 52:415–426. https://doi.org/10.1108/00400911011058352
Powell L, McGrath S (2014) Exploring the value of the capability approach for vocational education and training evaluation: reflections from South Africa. In: Carbonnier G, Carton M, King K (eds) Education, learning, training: critical issues for development. Brill Nijhoff, Leiden, pp 126–148
Ramady M (2010) The Saudi Arabian economy: policies, achievements, and challenges. Springer, Dordrecht Heidelberg
Remington T (2018) Bureaucratic politics and labour policy in China. China Intern J 16(3):97–119
Robeyns I (2006) The capability approach in practice. J Polit Philo 14:351–376. https://doi.org/10.1111/j.1467-9760.2006.00263.x
Rusticus S, Lovato C (2014) Impact of sample size and variability on the power and type I error rates of equivalence tests: a simulation study. Pract Asses Res Eval 19(1):11. https://doi.org/10.7275/4s9m-4e81
Saudi Vision 2030 (n.d.) Kingdom of Saudi Arabia. vision2030.gov.sa/download/file/fid/417. Accessed 20 Sep 2019
Sen A (1992) Inequality reexamined. Oxford University Press, New York
Sen A (1993) Capability and well-being. In: Nussbaum M, Sen A (eds) The quality of life. Clarendon Press, Oxford, pp 30–54
Sen A (1999) Development as freedom. Knopf, New York
Sen A (2005) Human rights and capabilities. J Hum Devel 6:151–166. https://doi.org/10.1080/14649880500120491
Sen A (2009) The idea of justice. Allen Lane, London
Sultana R (2017) Career guidance and TVET: critical intersections in the Arab Mediterranean countries. Inter J Train Res 15:214–228.
https://doi.org/10.1080/14480220.2017.1374667
Technical and Vocational Training Corporation (2018) Annual report. https://www.tvtc.gov.sa/Arabic/Documents/TVTC2018Report.pdf. Accessed 23 Aug 2019
Thompson M (2018) The Saudi 'social contract' under strain: employment and housing. In: Lynch M, Cammett M, Fabbe K (eds) Social policy in the Middle East and North Africa. Belfer Center for Science and International Affairs, Harvard
World Bank (2019) Population growth. https://data.worldbank.org/indicator/SP.POP.GROW?locations=SA. Accessed 12 Aug 2019
Yamada M (2018) Can Saudi Arabia move beyond "production with rentier characteristics"? Human capital development in the transitional oil economy. Mid Eas J 72:587–609. https://doi.org/10.3751/72.4.13
Young K (2018) The difficult promise of economic reform in the Gulf. James A. Baker III Institute for Public Policy of Rice University, Washington, D.C.

The author extends his appreciation to the Deanship of Scientific Research at King Saud University for funding this work, as well as for providing assistance with editing services through its cooperation with professional editors at Editage, a division of Cactus Communications. This work was supported by the Deanship of Scientific Research at King Saud University under Grant number NFG-7-18-01-21.

Educational Policies Department, College of Education, King Saud University, P.O. Box 2458, Riyadh 11451, Saudi Arabia
Abdulaziz Salem Aldossari

The researcher committed to write and analyze the research, and then revised it after receiving comments from the editors and reviewers. The author read and approved the final manuscript.

Correspondence to Abdulaziz Salem Aldossari. The author declares that he has no competing interests.
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.

Aldossari, A.S. Vision 2030 and reducing the stigma of vocational and technical training among Saudi Arabian students. Empirical Res Voc Ed Train 12, 3 (2020). https://doi.org/10.1186/s40461-020-00089-6
doi: 10.3934/dcdsb.2021118

The multi-dimensional stochastic Stefan financial model for a portfolio of assets

Dimitra C. Antonopoulou 4,5, Marina Bitsaki 3 and Georgia Karali 1,2,*

Department of Mathematics and Applied Mathematics, University of Crete, GR-714 09 Heraklion, Greece
Institute of Applied and Computational Mathematics, FORTH, GR-711 10 Heraklion, Greece
Computer Science Department, University of Crete, Voutes University Campus, Heraklion, Crete, GR-70013, Greece
Department of Mathematical and Physical Sciences, University of Chester, Thornton Science Park, CH2 4NU, UK

* Corresponding author: Georgia Karali

Received April 2020; Revised February 2021; Early access April 2021

The financial model proposed involves the liquidation process of a portfolio through sell/buy orders placed at a price $x\in\mathbb{R}^n$, with volatility. Its rigorous mathematical formulation results in an $n$-dimensional outer parabolic Stefan problem with noise. The moving boundary encloses the areas of zero trading. We will focus on a case of financial interest when one or more markets are considered. We estimate the areas of zero trading with diameter approximating the minimum of the $n$ spreads for orders from the limit order books.
In dimensions $n = 3$, for zero volatility, this problem stands as a mean field model for Ostwald ripening, and has been proposed and analyzed by Niethammer in [25], and in [7]. We propose a spherical moving boundaries approach where the zero trading area consists of a union of spherical domains centered at the portfolio's various prices with radii representing half of the minimum spread. We apply Itô calculus and provide second order formal asymptotics for the stochastic dynamics of the spreads, which seem to disconnect the financial model from a large diffusion assumption on the liquidity coefficient of the Laplacian that would correspond to an increased trading density. Moreover, we solve the approximating systems numerically.

Keywords: Multi-D stochastic Stefan problem, spreads dynamics, portfolios management, moving boundary problem, financial model, stochastic volatility, limit order books.

Mathematics Subject Classification: Primary: 91G80, 91B70, 60H30, 60H15; Secondary: 65C30.

Citation: Dimitra C. Antonopoulou, Marina Bitsaki, Georgia Karali. The multi-dimensional stochastic Stefan financial model for a portfolio of assets. Discrete & Continuous Dynamical Systems - B, doi: 10.3934/dcdsb.2021118

References

N. D. Alikakos, P. W. Bates and X. Chen, Convergence of the Cahn-Hilliard equation to the Hele-Shaw model, Arch. Rational Mech. Anal., 128 (1994), 165-205. doi: 10.1007/BF00375025.
N. D. Alikakos and G. Fusco, Ostwald ripening for dilute systems under quasistationary dynamics, Comm. Math. Phys., 238 (2003), 429-479. doi: 10.1007/s00220-003-0833-5.
N. D. Alikakos, G. Fusco and G. Karali, The effect of the geometry of the particle distribution in Ostwald ripening, Comm. Math. Phys., 238 (2003), 481-488. doi: 10.1007/s00220-003-0834-4.
N. D. Alikakos, G. Fusco and G. Karali, Ostwald ripening in two dimensions: the rigorous derivation of the equations from Mullins-Sekerka dynamics, J. Differential Equations, 205 (2004), 1-49.
doi: 10.1016/j.jde.2004.05.008. Google Scholar A. Altarovici, J. Muhle-Karbe and H. M. Soner, Asymptotics for fixed transaction costs, Finance Stoch., 19 (2015), 363-414. doi: 10.1007/s00780-015-0261-3. Google Scholar D. C. Antonopoulou, D. Blömker and G. D. Karali, The sharp interface limit for the stochastic Cahn-Hilliard equation, Ann. Inst. Henri Poincaré Probab. Stat., 54 (2018), 280-298. doi: 10.1214/16-AIHP804. Google Scholar D. C. Antonopoulou, G. D. Karali and A. N. K. Yip, On the parabolic Stefan problem for Ostwald ripening with kinetic undercooling and inhomogeneous driving force, J. Differential Equations, 252 (2012), 4679-4718. doi: 10.1016/j.jde.2012.01.016. Google Scholar British Pound v US Dollar Data, https://www.poundsterlinglive.com.,, Google Scholar X. Chen, The Hele-Shaw problem and area-preserving curve shortening motions, Arch. Rational Mech. Anal., 123 (1993), 117-151. doi: 10.1007/BF00695274. Google Scholar X. Chen, Global asymptotic limit of solutions of the Cahn-Hilliard equation, Journal of Differential Geometry, 44 (1996), 262-311. Google Scholar X. Chen, X. Hong and F. Yi, Existence, uniqueness and regularity of classical solutions of Mullins-Sekerka problem, Comm. Partial Differential Equations, 21 (1996), 1705-1727. doi: 10.1080/03605309608821243. Google Scholar X. Chen and M. Dai, Characterization of optimal strategy for multiasset investment and consumption with transaction costs, SIAM J. Financial Math., 4 (2013), 857-883. doi: 10.1137/120898991. Google Scholar X. Chen and F. Reitich, Local existence and uniqueness of solutions of the Stefan problem with surface tension and kinetic undercooling, J. Math. Anal. Appl., 164 (1992), 350-362. doi: 10.1016/0022-247X(92)90119-X. Google Scholar R. Cont and A. de Larrard, Price dynamics in a Markovian limit order market, SIAM J. Financial. Math., 4 (2013), 1-25. doi: 10.1137/110856605. Google Scholar R. Cont, S. Stoikov and R. Talreja, A stochastic model for order book dynamics, Oper. 
Res., 58 (2010), 549-563. doi: 10.1287/opre.1090.0780. Google Scholar E. Ekström, Selected Problems in Financial Mathematics, PhD Thesis, Uppsala Universitet, Sweden, 2004. Google Scholar L. C. Evans, H. M. Soner and P. E. Souganidis, Phase transitions and generalized motion by mean curvature, Comm. Pure Appl. Math., 45 (1992), 1097-1123. doi: 10.1002/cpa.3160450903. Google Scholar T. Funaki, Singular limit for stochastic reaction-diffusion equation and generation of random interfaces, Acta Math. Sin. (Engl. Ser.), 15 (1999), 407-438. doi: 10.1007/BF02650735. Google Scholar M. D. Gould, M. A. Porter, S. Williams, M. McDonald, D. J. Fenn and S. D. Howison, Limit order books, Quant. Finance, 13 (2013), 1709-1742. doi: 10.1080/14697688.2013.803148. Google Scholar V. Henderson, Prospect theory, liquidation, and the disposition effect, Management Science, 58 (2012), 445-460. Google Scholar T. Lybek and A. Sarr, Measuring Liquidity in Financial Markets, International Monetary Fund, work-in-progress, No. 02/232, 2002. Google Scholar H. M. Markowitz, Portfolio selection: Efficient diversification of investments, John Wiley and Sons, Inc., New York, 1959. Google Scholar R. C. Merton, Lifetime portfolio selection under uncertainty: The continuous-time case, Review of Economics and Statistics, 51 (1969), 247-257. doi: 10.2307/1926560. Google Scholar M. Müller, Stochastic Stefan-type problem under first-order boundary conditions, Ann. Appl. Probab., 28 (2018), 2335-2369. doi: 10.1214/17-AAP1359. Google Scholar B. Niethammer, Derivation of the LSW-theory for Ostwald ripening by homogenization methods, Arch. Rational Mech. Anal., 147 (1999), 119-178. doi: 10.1007/s002050050147. Google Scholar B. Niethammer, The LSW model for Ostwald ripening with kinetic undercooling, Proc. Roy. Soc. Edinburgh Sect. A, 130 (2000), 1337-1361. doi: 10.1017/S0308210500000718. Google Scholar W. Ostwald, Blocking of Ostwald ripening allowing long-term stabilization, Z. Phys. Chem., 37 (1901), 385 pp. 
Google Scholar C. Parlour and D. Seppi, Handbook of Financial Intermediation & Banking, North-Holland (imprint of Elsevier), Amsterdam, eds. A. Boot and A. Thakor, 2008. Google Scholar Z. Zheng, Stochastic Stefan problems: Existence, uniqueness, and modeling of market limit orders, PhD Thesis, University of Illinois at Urbana-Champaign, 2012. Google Scholar G. Zimmerman, 2 Portfolio Protection Strategies That Don't Work - and 2 That Do, Advisors Voices, 2016. https://www.nerdwallet.com/blog/investing/2-portfolio-protection-strategies-dont-work/ Google Scholar Figure 1. Solid phase $ \mathcal{D}(0) $ of $ I = 3 $ initial circular domains (discs) in $ \mathbb{R}^2 $, where $ \mathbb{R}^2-\mathcal{D}(0) $ consists the initial liquid phase, and $ \Gamma(0) = \Gamma_1(0)\cup\Gamma_2(0)\cup\Gamma_3(0) $ Figure Options Download as PowerPoint slide Figure 2. Radii dynamics of $ 4 $ balls at the solid phase at the left, and radii dynamics of $ 100 $ balls at the solid phase at the right Figure 3. Radii dynamics of $ 2 $ balls at the solid phase Figure 4. Radius dynamics of one ball at the solid phase with relatively large spread at the left, and radius dynamics of one ball at the solid phase with relatively small spread at the right Figure 5. 100 realizations of $ R(t) $, for $ t\in[0,15] $, with first order approximation Figure 6. 100 realizations of $ R(t) $, for $ t = 15 $ (first order approximation) Figure 7. 100 realizations of $ R(t) $, for $ t\in[0,15] $, with second order approximation Figure 8. 100 realizations of $ R(t) $, for $ t = 15 $ (second order approximation) Table 1. 
A sample of 5 quotes for asset 1

  Time $t_j$   $A_1(t_j)$   $B_1(t_j)$   $spr_1(t_j)$   $\frac{A_1(t_j)+B_1(t_j)}{2}$
  9:00         30.25        29.75        0.5            30
  9:02         30.75        29.50        1.25           30.125
  9:06         31.50        29.00        2.50           30.25
  Sum          158.5        146.25       12.25          152.375

  $\bar{spr}_1 = 12.25/5 = 2.45$, $lspra_1 = \ln(158.5)-\ln(146.25) = 0.080437$, $x_{c1} = \ln(152.375/5) = 3.417$

For asset 2 (column sums only):

  Sum          76.75        74.25        2.50           75.50

  $\bar{spr}_2 = 2.50/5 = 0.5$, $lspra_2 = \ln(76.75)-\ln(74.25) = 0.03312$, $x_{c2} = \ln(75.50/5) = 2.715$

For asset 3 (column sums only):

  Sum          110.5        95           15.50          102.75

  $\bar{spr}_3 = 15.50/5 = 3.1$, $lspra_3 = \ln(110.5)-\ln(95) = 0.15114$, $x_{c3} = \ln(102.75/5) = 3.023$

Table 4. Number of shares sold, and liquidity coefficient

  Asset   $w_i$   $a_i = w_i/\bar{spr}_i$   $w_i/w_{\rm tot}$    $a_i w_i/w_{\rm tot}$
  1       550     550/2.45 = 224.49         550/1600 = 0.34375   77.168
  2       750     750/0.5 = 1500            750/1600 = 0.46875   703.125
  3       300     300/3.1 = 96.774          300/1600 = 0.1875    18.145
  Sum     1600                                                   $\alpha_{\rm in} = 798.438$

Table 5. Number of shares sold, and liquidity coefficient in logarithmic scale

  Asset   $w_i$   $w_i/lspra_i$             $w_i/w_{\rm tot}$    $\frac{w_i}{lspra_i}\frac{w_i}{w_{\rm tot}}$
  1       550     550/0.080437 = 6837.64    550/1600 = 0.34375   2350.438
  2       750     750/0.03312 = 22644.92    750/1600 = 0.46875   10614.806
  3       300     300/0.15114 = 1984.91     300/1600 = 0.1875    372.170
  Sum     1600                                                   $\alpha = 13337.414$
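The liquidity coefficients reported in Tables 4 and 5 can be re-derived with a short script. This is our cross-check, not part of the published analysis; the variable names (`w`, `spr_bar`, `lspra`) are ours, and the data are taken directly from the tables.

```python
# Re-derive alpha_in (Table 4) and alpha (Table 5) from the per-asset data.
w = [550, 750, 300]                    # shares sold per asset
spr_bar = [2.45, 0.5, 3.1]             # average spreads (Table 4)
lspra = [0.080437, 0.03312, 0.15114]   # average log-spreads (Table 5)
w_tot = sum(w)                         # 1600

# Weighted sums: sum_i (w_i / spread_i) * (w_i / w_tot)
alpha_in = sum((wi / s) * (wi / w_tot) for wi, s in zip(w, spr_bar))
alpha = sum((wi / s) * (wi / w_tot) for wi, s in zip(w, lspra))

print(alpha_in)  # ~798.44, matching Table 4's 798.438 (tables round intermediate entries)
print(alpha)     # ~13337.42, matching Table 5's 13337.414
```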
What is the largest integer $n$ such that $$(1 + 2 + 3 + \cdots+ n)^2 < 1^3 + 2^3 + \cdots+ 7^3?$$ Recall that $$(1 + 2 + 3 + \ldots + n)^2 = 1^3 + 2^3 + 3^3 +\ldots + n^3.$$ Thus we have that for $n\geq 7$, $(1 + 2 + 3 + \ldots + n)^2 = 1^3 + 2^3 + 3^3 +\ldots + n^3 \geq 1^3 + 2^3 +\ldots + 7^3$, while $(1 + 2 + 3 + \ldots + 6)^2 = 1^3 + 2^3 + 3^3 +\ldots + 6^3$, which is less than the desired sum. So our answer is $\boxed{6}.$
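The bound can also be double-checked numerically; this short script is our addition, not part of the original solution, and simply confirms the answer by direct computation.

```python
# (1 + 2 + ... + n)^2 via the triangular-number formula
def lhs(n):
    return (n * (n + 1) // 2) ** 2

# 1^3 + 2^3 + ... + 7^3
rhs = sum(k ** 3 for k in range(1, 8))  # 784

# largest n with (1 + ... + n)^2 < 1^3 + ... + 7^3
largest = max(n for n in range(1, 20) if lhs(n) < rhs)
print(largest)  # 6
```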
A soccer team has $22$ available players. A fixed set of $11$ players starts the game, while the other $11$ are available as substitutes. During the game, the coach may make as many as $3$ substitutions, where any one of the $11$ players in the game is replaced by one of the substitutes. No player removed from the game may reenter the game, although a substitute entering the game may be replaced later. No two substitutions can happen at the same time. The players involved and the order of the substitutions matter. Let $n$ be the number of ways the coach can make substitutions during the game (including the possibility of making no substitutions). Find the remainder when $n$ is divided by $1000$. There are $0$ to $3$ substitutions, and the number of ways to make exactly $n$ of them is defined recursively: after $n-1$ substitutions, $12-n$ fresh substitutes remain available, and any of the $11$ players then on the field can be ejected. The case for $0$ subs is $1$, so the formula for $n$ subs is $a_n=11(12-n)a_{n-1}$ with $a_0=1$. Summing from $0$ to $3$ gives $1+11^2+11^{3}\cdot 10+11^{4}\cdot 10\cdot 9$. Notice that $10+9\cdot11\cdot10=10+990=1000$. Then, rearrange the sum into $1+11^2+11^3\cdot (10+11\cdot10\cdot9)= 1+11^2+11^3\cdot (1000)$. When taking modulo $1000$, the last term vanishes. What is left is $1+11^2=\boxed{122}$.
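A brute-force confirmation of the recursion and the final remainder (our addition, mirroring the formula $a_n = 11(12-n)a_{n-1}$ from the solution):

```python
# a[n] = number of ways to make exactly n ordered substitutions
a = [1]
for n in range(1, 4):
    a.append(11 * (12 - n) * a[-1])

total = sum(a)       # 1 + 121 + 13310 + 1317690 = 1331122
print(total % 1000)  # 122
```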
Doubling time and half-life of exponential growth and decay
Due date: Sept. 25, 2020, 11:59 p.m.

A population of sea lions is decreasing at a rate of 5% per year. If the population continues to decline at this rate, to what fraction of the original population size will the population decline after 10 years? (Keep at least 4 significant digits.)

If the population is decreasing at a rate of 5% per year, then by what number do we need to multiply the population size each year? We didn't tell you how many sea lions there were to start out with, so apparently, your answer shouldn't depend on that value. If you like, you can define a variable to represent the initial population size (for example $p_0$); then, you can multiply that initial population size by a certain number once for each year that passes. However, since we are asking for what fraction of the original population size is left (for example, what fraction of the original $p_0$), you need to divide by that original population size to get your answer. In the end, the variable you chose for the initial population size should drop out.

To what fraction of the original population size will the population decline after $n$ years? (Online, enter exponentiation using ^, so enter $a^b$ as a^b.) Same procedure as for the previous part, except that you have to multiply by that number $n$ times.

To find how long it will take for the population to decline by one-half, follow these steps.
Set the expression from part (b) equal to one-half: ___ $= \frac{1}{2}$
Take the logarithm of both sides of that equation. (Online, you can use either ln or log for logarithm; both are interpreted as logarithm base $e$. In this case, it doesn't matter what base logarithm you use. Also, don't simplify your answers yet, but leave them as the logarithm of your previous answers.)
Using the log of a power rule, bring down the exponent from the left-hand side in front of the logarithm.
Solve for the value of $n$. The result is a ratio of logarithms.
$n=$ ___ $\approx$ ___ (In the first blank, write the ratio of logarithms. In the second blank, give a decimal approximation with at least 4 significant digits.)

To repeat, how long will it take for the population to decline by one-half? ___ ___ (The second blank is for a unit.)

You can use a similar procedure to find out how long it will take for the population to decline to one-tenth its original size. Set the expression from part (b) equal to one-tenth: ___ $= \frac{1}{10}$ (Online, you can use either ln or log for logarithm; both are interpreted as logarithm base $e$. In this case, it doesn't matter what base logarithm you use.) To repeat, how long will it take for the population to decline to one-tenth of its original size? ___ ___

If the current population size is 100,000, how long will it take for the population to drop down to 5,000 sea lions? ___ ___ (Keep at least 4 significant digits. Second blank is for a unit.) To get to 5,000 sea lions, you need to get down to what fraction of the original population? The rest is the same as the previous problems.

Bacteria are growing in a beaker so that the population size increases by 14.87% every minute. If $b_t$ is the bacteria population size in minute $t$, set up a dynamical system model that describes the evolution of the population size. $b_{t+1} - b_t =$ ___, for $t=0,1,2,3 \ldots$

How long does it take the population size to double? $T_{\text{double}} = $ ___ ___ Your answer will look a lot prettier if you round to four significant digits. (In this case, this means round to the nearest thousandth, as there should be one digit to the left of the decimal.) The second blank is for a unit. Hint: first rewrite the dynamical system into function iteration form. Then, solve the system for $b_t$. Then, find the condition for doubling. Alternatively, use the formula for doubling time.

If the population continues to grow at this rate, by what factor does the population size increase in one hour? In two hours? In four hours? You have two ways to solve this. One way is to go back to the solution of the dynamical system and plug in the number of minutes elapsed for each interval. A different way is to calculate how many times the population size doubled in each interval and use that to infer by what factor the population size increased.

An experiment is begun at midnight with just a few bacteria so that the fraction of the beaker that the bacteria occupy is approximately $0.00000005959 = 5.959 \times 10^{-8}$. With this initial condition, the bacteria completely fill the beaker after two hours, at 2 AM. At what time was the beaker half full? The beaker was half full at ___. Write your answer in the form hh:mm AM/PM. Round your answer to the nearest minute. There are two ways to answer this problem. One is the hard way, which is to start your calculation at midnight. The second way is to completely ignore the information you were given at midnight.

Imagine the researchers realized before 2 AM that the bacteria were about to overflow the beaker. They found three more empty beakers of the same size as the original beaker so that they had a total of four beakers to hold the bacteria. At what time did the bacteria fill all four beakers? Thinking about the doubling time could be helpful.

The polymerase chain reaction is a means of making multiple copies of a DNA segment from only a minute amount of original DNA. The procedure consists of a sequence of multiple cycles. During the course of one cycle, each DNA segment present is duplicated. Suppose you begin with 1 picogram = 0.000000000001 g of DNA. If $d_n$ is the amount of DNA in grams after $n$ cycles, write a discrete dynamical system with initial condition from which the amount of DNA present at the end of each cycle can be computed. $d_{n+1} = $ ___ for $n=0,1,2,3,\ldots$, $d_0 =$ ___

How many grams of DNA would be present after 30 cycles? ___ grams. (Keep at least 3 significant digits in your answer.)

In the first days of life, the cells in a human embryo divide into two cells approximately every day. After fertilization, the new life consists of a single cell. If the number of cells continued to double every day, how many weeks would it take the embryo to grow to the size of a human adult, containing approximately 100 trillion ($10^{14}$) cells? (Keep at least four significant digits in your response.)

Suppose after someone gets lead poisoning, no further lead is introduced into the bloodstream so that the amount of lead in the bloodstream decreases by 11% per week. Let $p_t$ be the amount of lead, measured in μg/dl (micrograms per deciliter), in the bloodstream $t$ weeks after the lead exposure. (See the dynamical system exploration page for more on the lead decay model.)

If we write a dynamical system describing the lead decay in difference form,
\begin{align*} p_{t+1} - p_t &= a p_t\\ p_0 &= p_0, \end{align*}
what is the value of the parameter $a$? $a=$ ___

If we write a dynamical system describing the lead decay in function iteration form,
\begin{align*} p_{t+1} &= b p_t\\ p_0 &= p_0, \end{align*}
what is the value of the parameter $b$? $b=$ ___

In general, what is the relationship between $a$ and $b$? $b =$ ___

If the initial lead concentration is 64 μg/dl, how long does it take to drop to 32 μg/dl? ___ weeks. To 10 μg/dl (the standard elevated blood lead level for adults)? ___ weeks. To 5 μg/dl (the standard elevated blood lead level for children)? ___ weeks.

Does the time required for the lead to drop to half its initial concentration depend on the value of the initial lead concentration? (yes or no)
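The logarithm computations these exercises call for can be sketched in a few lines. This sketch is ours: the rates (5% yearly decay, 14.87% per-minute growth) come from the problem statements above, and the rounding choices are ours.

```python
import math

# Sea lions: p_{t+1} = 0.95 * p_t, so (0.95)^n = target fraction
half_life = math.log(1 / 2) / math.log(0.95)          # years to reach one-half
tenth_life = math.log(1 / 10) / math.log(0.95)        # years to reach one-tenth
to_5000 = math.log(5000 / 100000) / math.log(0.95)    # 100,000 -> 5,000 sea lions

# Bacteria: b_{t+1} = 1.1487 * b_t, so (1.1487)^T = 2 at the doubling time
doubling = math.log(2) / math.log(1.1487)             # minutes to double

print(round(half_life, 2))   # ~13.51 years
print(round(tenth_life, 2))  # ~44.89 years
print(round(to_5000, 2))     # ~58.40 years
print(round(doubling, 3))    # ~5.0 minutes
```

Since the bacterial doubling time comes out to 5 minutes, the one-hour growth factor is $2^{12} = 4096$, which matches the "count the doublings" hint in the exercise.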
Thermodynamics and economic feasibility of acetone production from syngas using the thermophilic production host Moorella thermoacetica

Stephanie Redl (ORCID: orcid.org/0000-0002-7297-4379), Sumesh Sukumara, Tom Ploeger, Liang Wu, Torbjørn Ølshøj Jensen, Alex Toftgaard Nielsen & Henk Noorman

Biotechnology for Biofuels, volume 10, Article number: 150 (2017)

Syngas fermentation is a promising option for the production of biocommodities due to its abundance and compatibility with anaerobic fermentation. Using thermophilic production strains in a syngas fermentation process allows recovery of products with low boiling point from the off-gas via condensation. In this study we analyzed the production of acetone from syngas with the hypothetical production host derived from Moorella thermoacetica in a bubble column reactor at 60 °C with respect to thermodynamic and economic feasibility. We determined the cost of syngas production from basic oxygen furnace (BOF) process gas, from natural gas, and from corn stover and identified BOF gas as an economically interesting source for syngas. Taking gas–liquid mass transfer limitations into account, we applied a thermodynamics approach to derive the CO to acetone conversion rate under the process conditions. We estimated variable costs of production of 389 $/t acetone for a representative production scenario from BOF gas with costs for syngas as the main contributor. In comparison, the variable costs of production from natural gas- and corn stover-derived syngas were determined to be higher due to the higher feedstock costs (1724 and 2878 $/t acetone, respectively). We applied an approach of combining thermodynamic and economic assessment to analyze a hypothetical bioprocess in which the volatile product acetone is produced from syngas with a thermophilic microorganism. Our model allowed us to identify process metrics and quantify the variable production costs for different scenarios.
Economical production of bulk chemicals is challenging, making rigorous thermodynamic/economic modeling critical before undertaking an experimental program and as an ongoing guide during the program. We intend this study to give an incentive to apply the demonstrated approach to other bioproduction processes.

Syngas fermentation for the production of fuels and chemicals has received increasing attention during the last years [1] and is on the way to commercialization [2]. The fermentation of syngas to various biochemicals is based on the use of acetogenic bacteria that can metabolize carbon monoxide, carbon dioxide, and hydrogen [1]. Syngas fermentation processes on their way to commercialization are typically based on carbon monoxide-rich waste gases derived from industry [3]. Another potential source of syngas is reformed natural gas or biogas [4]. The use of gasified, lignin-rich waste biomass would broaden the spectrum of feedstock used for syngas fermentation tremendously, and help replace fossil carbon. Furthermore, biomass contains a lignin mass fraction of up to 44.5% (for woody biomass) [5]. Lignin is recalcitrant to enzymatic hydrolysis, and its aromatic constituents are not readily consumable by microbes. Alternatively, the lignin fraction could be converted to syngas for biological conversion. However, the production of syngas from biomass would add an additional cost factor to the production process.

A multitude of acetogenic bacteria have been described to date [6]. Moorella thermoacetica was initially used to elucidate the Wood–Ljungdahl pathway (WLP) that enables acetogens to generate energy by fixation of CO2 or CO with acetate as the main product [7]. Here, CO can serve as carbon source and electron donor (following Eq. 1), while CO2 as carbon source requires another electron donor such as H2 (following Eq. 2) [7].
$$\begin{aligned}& 4 {\text{CO}} + 2 {\text{H}}_{2} {\text{O }} \to {\text{CH}}_{3} {\text{COOH}} + 2 {\text{CO}}_{2} ;\\ & \quad \Delta_{r} G^{0} = - 196\,{\text{kJ}}/{\text{mol,}} \end{aligned}$$ $$\begin{aligned}& 2{\text{CO}}_{2} + 4{\text{H}}_{2} \to {\text{CH}}_{3} {\text{COOH}} + 2{\text{H}}_{2} {\text{O}};\\ & \quad \Delta_{r} G^{0} = - 95\,{\text{kJ}}/{\text{mol}} \end{aligned}$$ The WLP is a well-described pathway [8]. During autotrophic growth, there is no net ATP generated via substrate-level phosphorylation. Energy is solely conserved in chemiosmotic processes [9]. The spectrum of enzymes involved in the electron transport chain as well as the type of cation used to generate the electrochemical gradient differs among acetogens. Although M. thermoacetica is relatively well studied, the exact mechanisms of autotrophic energy conservation have not yet been elucidated and different mechanisms have recently been proposed [10,11,12]. The product range of M. thermoacetica is limited to acetate but could be broadened by the introduction of heterologous pathways. Although the development of basic tools enabling genetic engineering has been published [13,14,15], heterologous expression of industrially relevant product pathways has not been reported for M. thermoacetica. An interesting heterologous product candidate is for example acetone. Since M. thermoacetica grows at an elevated temperature (optimum 55–60 °C [7]), products with low boiling point such as acetone (boiling point 56 °C [16]) would allow for easy, inexpensive product recovery through gas stripping. Acetone is used industrially as a solvent and as precursor of plastics and resins [17] and has an annual production of more than 7 million tons with a market growth of 3–4% per year [18]. The acetone market price reached a value below 1 $/kg in 2015 [19]. The US market size for acetone is around 1.4·106 t/year (assuming 90% of capacity) [20]. Acetone could be produced in M. 
thermoacetica by introducing a heterologous acetone pathway, such as the one found in Clostridium acetobutylicum [21]. Using an engineered strain of M. thermoacetica in which the acetone pathway is expressed, it would be possible to convert syngas into acetone at an elevated fermentation temperature. Heterologous acetone production was recently reported in a mesophilic acetogen [22]. The production of a biochemical such as acetone from a chosen feedstock via a heterologous pathway on a commercial level is dependent on the physiology of the production host, on process technology, and on economics. Biological conversion of syngas to acetone is only thermodynamically feasible if the substrate provides enough energy to cover the energy requirements for cell maintenance and growth [23,24,25]. Thus, metabolic pathways have to exist to harvest the energy provided by the substrate to generate net ATP. The profitability of the process is dependent on the costs of the substrate and processing costs, as well as the predicted costs to develop the technology. We have evaluated the process of acetone production by gas fermentation using the thermophilic production host M. thermoacetica in a multidisciplinary approach in which we combine the assessment of metabolic and economic feasibility. Techno-economic analysis of gas fermentation in scientific literature is sparse, and to our knowledge, no process analysis has been conducted for syngas fermentation with a thermophilic production strain. Few studies are published for production with mesophilic production strains [26,27,28]. Furthermore, this study exemplifies the potential of thermophiles for the large-scale production of biocommodities, especially those with a relatively low boiling point such as acetone, ethanol, i-propanol, isoprene, or methyl ethyl ketone [29]. 
In the present study we simulated the production of 30 kt/year acetone (<15% of the annual global growth) from syngas using the acetogen Moorella thermoacetica as hypothetical production host. Thermodynamic calculations and calculations regarding bioreactor design were executed with MS Excel.

Determination of the price for syngas

We have determined the variable cost of production of syngas derived from three different sources, namely industrial waste gases, reformed natural gas, and biomass. We implemented acid gas removal, gas reforming, and reverse water–gas shift reaction (rWGS) to determine, from these diverse sources, the cost of syngas with a composition of comparable CO content. An overview of the process steps and costs is shown in Table 1.

Table 1 Overview of syngas production costs

Industrial waste gas

The off-gas produced during the basic oxygen furnace (BOF) process of steelmaking is rich in CO, has a low content of contaminants, and is known as a suitable substrate for gas fermentation [30]. For this study, we assumed that basic oxygen furnace gas comes free of charge. According to Handler et al. [31], "steel mill exhaust gases are not currently utilized by any United States mills". The BOF gas with a CO content of 70 mol% (composition of the gas in Additional file 1: Table S1) undergoes acid gas removal [30] to increase the CO content to 81 mol%. This step leads to a price of 27 $/t CO, which equals 7.6·10^-4 $/mol CO (Additional file 1: Table S2).

Natural gas reforming has been around for several decades. The steam reforming process converts the methane feedstock present in the natural gas to syngas in the presence of steam. Auto thermal reforming (ATR) offers several advantages compared to traditional two-step reforming such as simplicity of design and operation as well as reduced preheating utility consumption [32]. Therefore, based on the values in the literature [32], we assumed that the syngas is generated in a 2:1 (H2:CO) ratio utilizing ATR.
Subsequently, the syngas exiting the reformer at 1050 °C and 25 bar pressure is cooled prior to being sent to the rWGS reactor, which is one of the most widely explored options [33]. This process was simulated in SuperPro Designer®. The process converts CO2 and H2 to CO under high temperature, based on kinetics obtained from literature [33]. Subsequently, the exiting gases are passed through a condenser at 3 °C to remove the large amount of water generated as a byproduct, while the gases are sent to the fermenter. Additional file 1: Tables S3 and S4 illustrate the breakdown of the operational costs contributing towards the process to achieve the desired conversion. We determined a cost of 298 $/t CO (0.0084 $/mol CO) for the production of syngas from natural gas, of which 146 $/t CO (0.0041 $/mol CO) arise from the cost for natural gas.

Biomass-derived waste gas

As a third source of gas we evaluated corn stover-derived syngas. Corn stover would be harvested within a 50-mile radius and transported to the feedstock storage. In our production scenario, 33% of the corn stover in the field is harvested, and the rest has to remain on the land in order to recover the nutrients and prevent excessive erosion [34]. Information regarding feedstock and logistics was obtained from Thompson and Tyner [35]. At the factory, the corn stover bales would be preprocessed (grinding and briquetting). Data related to preprocessing were obtained from Lin et al. [36]. The preprocessed biomass is gasified in a fluidized bed reactor at ca. 870 °C (low temperature gasification). The gasifier unit was selected to have a capacity of 2000 t corn stover briquettes per day. A mass fraction of 52% of the preprocessed feedstock was retained as syngas and as impurities. After removal of impurities, the obtained syngas has a composition of 30% CO, 2% H2, 53% CO2, and 15% H2O (by mass). Equipment details and the syngas composition were acquired from literature [37].
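The per-mole CO prices quoted in this section follow from the per-tonne prices via the molar mass of CO (≈28 g/mol). The cross-check below is ours, not part of the published analysis:

```python
M_CO = 28.01  # molar mass of CO in g/mol

def usd_per_mol(usd_per_tonne):
    # ($/t) * (g/mol) * (1 t / 1e6 g) = $/mol
    return usd_per_tonne * M_CO / 1e6

print(usd_per_mol(27))   # BOF-derived syngas:    ~7.6e-4 $/mol CO
print(usd_per_mol(298))  # natural-gas syngas:    ~0.0084 $/mol CO
print(usd_per_mol(536))  # corn-stover syngas:    ~0.015  $/mol CO
```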
Subsequently, a reverse water–gas shift (rWGS) reaction and a drying step were included to increase the CO content of the syngas. The rWGS reaction and water removal were simulated with SuperPro Designer® [38]. The composition of the rWGS-treated and dried syngas, which is comparable to the composition of the syngas derived from BOF gas and from natural gas, is shown in Additional file 1: Table S5. The costs related to syngas production are listed in detail in Additional file 1: Tables S6–S8. The price of corn stover briquettes ready for gasification was determined to be 139 $/t. The cost of gasification was determined to be 10 $/t of preprocessed feedstock. Taking gasification, cleaning, rWGS, and drying into consideration, 1 t of preprocessed feedstock is converted to syngas containing 10 kmol CO. The cost of the rWGS step was determined to be 0.16 $ to produce syngas containing 1 kmol CO. Therefore, the price for syngas was 0.015 $/mol CO (536 $/t CO). Thermodynamics and process reaction The process reaction describes the conversion of the carbon and nitrogen sources and other reactants to the products and cell mass, where the stoichiometric ratios are determined by the conservation of elements, electrical charge, and energy [24]. The rate of the process reaction (in C-mol/h) is dependent on the specific growth rate and maintenance energy requirements of the microorganism. To obtain the process reaction, firstly the catabolic reaction of product formation was set up, with ν i as the stoichiometric coefficient of each reactant i. Then, the Gibbs energy of the catabolic reaction at standard conditions (T = 25 °C, c l = 1 M), Δ r G 0, was determined using the Gibbs energy of formation, \(\Delta_{f} G_{i}^{0}\), of the reactants (Eq. 3). $$\Delta_{r} G^{0} \left[ {{\text{kJ}}/{\text{mol}}} \right] = \mathop \sum \nolimits \nu_{i} \cdot \Delta_{f} G_{i}^{0} .$$ Additionally, the reaction enthalpy Δ r H 0 at standard conditions was determined using Eq. 4.
$$\Delta_{r} H^{0} \left[ {{\text{kJ}}/{\text{mol}}} \right] = \mathop \sum \nolimits \nu_{i} \cdot \Delta_{f} H_{i}^{0} .$$ The values of Δ f G 0 and Δ f H 0 are listed in Additional file 1: Table S9. The Gibbs energy of the reaction, Δ r G 0, was corrected for the process temperature T [K] by applying the Gibbs–Helmholtz equation (Eq. 5). $$\begin{aligned}\Delta_{r} G^{T} \left[ {{\text{kJ}}/{\text{mol}}} \right] &= \Delta_{r} G^{0} \cdot \left( {T/298.15\,{\text{K}}} \right) + \Delta_{r} H^{0} \\ \quad& \times (1 - T/298.15\,{\text{K}}), \end{aligned}$$ Δ r G T was further corrected for the concentration of the gaseous substrate and the concentration of the products in the fermentation broth using Eq. 6 [39], with the concentration of each reactant i raised to the power of its stoichiometric coefficient ν i . $$\Delta_{r} G^{T,c} [{\text{kJ}}/{\text{mol}}] = \Delta_{r} G^{T} + R \cdot T \cdot \ln \left( {\mathop \prod \nolimits c_{i}^{{\nu_{i} }} } \right) \cdot 10^{ - 3} .$$ Subsequently, the Gibbs energy normalized to one mol of carbon source was determined by dividing Δ r G T,c by the stoichiometric coefficient ν of the carbon source. The energy released by the catabolic reaction is required for cell growth and maintenance. Hence, the anabolic reaction, describing cell mass formation, was set up, using C1H1.8O0.5N0.2 (M = 24.6 g/C-mol) as an approximation for the ash-free cell mass composition [24]. The energy requirement for autotrophic growth of 1 C-mol cell mass, a G , amounts to approximately 1000 kJ/C-mol [23]. Using this value, the rate of the catabolic reaction that supplies the energy required for the growth of 1 C-mol cell mass can be derived. The anabolic and catabolic reactions normalized to 1 C-mol of cell mass were combined to obtain the overall reaction of growth, with the stoichiometric coefficients \(\nu_{i}^{\text{growth}}\).
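The temperature correction above (Eqs. 3–5) can be checked numerically. The following minimal Python sketch (function name is ours, not from the article) applies the Gibbs–Helmholtz correction to the catabolic-reaction values that appear later in Box 1 (ΔrG0 = −322.8 kJ/mol, ΔrH0 = −475.5 kJ/mol) at the process temperature of 60 °C:

```python
# Gibbs-Helmholtz temperature correction (Eq. 5), a minimal sketch.
T_REF = 298.15  # standard temperature [K]

def gibbs_at_T(dG0, dH0, T):
    """Correct the standard Gibbs energy dG0 [kJ/mol] to temperature
    T [K] using the reaction enthalpy dH0 [kJ/mol] (Eq. 5)."""
    return dG0 * (T / T_REF) + dH0 * (1.0 - T / T_REF)

dG0 = -322.8   # kJ/mol, 8 CO + 3 H2O -> C3H6O + 5 CO2 (Box 1)
dH0 = -475.5   # kJ/mol
dG_T = gibbs_at_T(dG0, dH0, 333.15)   # 60 degC -> about -305 kJ/mol
```

This reproduces the Δ r G T of −305.0 kJ/mol quoted in Box 1.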
To determine the amount of substrate that provides the energy required to maintain the cell mass, an approximation for the maintenance energy requirement (m G) was needed. Tijhuis et al. provided data on the maintenance energy requirement for a large range of aerobic and anaerobic bacteria and concluded that the value of m G is mainly influenced by the process temperature, while the influence of carbon source and strain is negligible [40]. An approximation for the temperature dependency of m G for anaerobic bacteria according to Tijhuis et al. is shown in Eq. 7. $$m_{\text{G}} [{\text{kJ}}/{\text{C-mol}}/{\text{h}}] = 3.3 \cdot {\text{e}}^{{\left[ {\left( { - 69,400/R} \right) \cdot \left( {1/T - 1/298.15\,{\text{K}}} \right)} \right]}} .$$ In this way, the catabolic reaction providing enough energy to maintain 1 C-mol cell mass could be formulated (with the stoichiometric coefficients \(\nu_{i}^{\text{main}}\)). Finally, to obtain the process reaction, the cell mass-specific rates (q-rates) of production and consumption of every compound, including the heat released by the reaction, were determined by adding up the catabolic and anabolic sub-reactions (Eqs. 8, 9). $$q_{i} [{\text{mol}}/{\text{h}}] = \nu_{i}^{\text{main}} + \mu \cdot \nu_{i}^{\text{growth}} ,$$ $$q_{\text{heat}} [{\text{kJ}}/{\text{h}}] = \Delta H_{\text{main}} + \mu \cdot \Delta H_{\text{growth}} .$$ A bubble column reactor with a defined height of 30 m and a diameter of 6 m was chosen for the study (reactor volume of 848 m3). On the one hand, the reactor height should be maximized to reach a high substrate conversion [41], thereby reducing the number of reactors, and thus the capital cost, required to meet the desired production metrics. On the other hand, the reactor height was kept well below the practical limit of 40 m of conventional bioreactors [42].
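Returning to the maintenance term, Eq. 7 can be evaluated directly; the sketch below (our naming, with R = 8.3145 J/mol/K) reproduces the maintenance energy of roughly 62 kJ/C-mol/h that is used later in the article for the process temperature of 60 °C:

```python
import math

R = 8.3145  # gas constant [J/mol/K]

def maintenance_energy(T):
    """Maintenance energy requirement m_G [kJ/C-mol/h] for anaerobic
    bacteria as a function of temperature T [K] (Eq. 7, after
    Tijhuis et al.)."""
    return 3.3 * math.exp((-69400.0 / R) * (1.0 / T - 1.0 / 298.15))

m_G = maintenance_energy(333.15)  # 60 degC -> about 62 kJ/C-mol/h
```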
We chose a height to diameter ratio (aspect ratio) of 5, which is a typical value for bubble column reactors in an industrial setting [42]. The pressure in the top part of the reactor (p t) was set to atmospheric pressure (101,325 Pa). The pressure at the bottom of the reactor (p b) equals the sum of the pressure in the top part of the reactor (p t) and the hydrostatic pressure and is therefore a function of the broth volume: using the height of the ungassed liquid column (h), the broth density ρ (assumed to equal the density of water since the concentration of cell mass and other compounds is relatively low, as will be discussed below), and the gravitational acceleration g, p b was determined (Eq. 10). The back pressure exerted by the gas compressed into the reactor was neglected. $$p_{\text{b}} [{\text{Pa}}] = p_{\text{t}} + h \cdot \rho \cdot g.$$ The logarithmic mean pressure (p) in the reactor vessel was obtained using Eq. 11 [43]. $$p [{\text{Pa}}] = (p_{\text{b}} - p_{\text{t}} )/\ln (p_{\text{b}} /p_{\text{t}} ).$$ Gas–liquid mass transfer The average rate of gas flow (F av) was obtained using Eq. 12. $$F_{\text{av}} [{\text{m}}^{3} /{\text{h}}] = \left[ {\left( {R_{\text{in}} + R_{\text{out}} } \right) \cdot 0.5 \cdot R \cdot T} \right]/p .$$ The pressure-corrected average superficial gas velocity \(v_{\text{gs}}^{\text{c}}\) is dependent on the averaged volumetric gas flow rate through the broth column and the cross-sectional area of the reactor (Eq. 13) [43]. Parameters influencing the average superficial gas velocity \(v_{\text{gs}}^{\text{c}}\) (compare Additional file 1: Figure S2) were chosen such that \(v_{\text{gs}}^{\text{c}}\) did not exceed 0.15 m/s, which is a conventional value for bubble column reactors with a diameter of up to 10 m [44]. $$v_{\text{gs}}^{\text{c}} [{\text{m}}/{\text{s}}] = F_{\text{av}} /A/3600.$$ The gas has to be transferred across the gas–liquid interfacial area around the gas bubbles.
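The pressure relations (Eqs. 10 and 11) can be checked with a few lines of Python. The log-mean check below uses the bottom pressure of Box 2 (3.5·10^5 Pa); the function names are ours:

```python
import math

P_TOP = 101325.0  # pressure at the top of the reactor [Pa]
RHO = 1000.0      # broth density, assumed equal to water [kg/m3]
G = 9.81          # gravitational acceleration [m/s2]

def bottom_pressure(h):
    """Pressure at the reactor bottom [Pa] (Eq. 10) for an ungassed
    liquid column of height h [m]."""
    return P_TOP + h * RHO * G

def log_mean_pressure(p_b, p_t=P_TOP):
    """Logarithmic mean pressure in the vessel [Pa] (Eq. 11)."""
    return (p_b - p_t) / math.log(p_b / p_t)

# With the bottom pressure of Box 2 (3.5e5 Pa), the log-mean pressure
# comes out at roughly 2.0e5 Pa, as used there.
p_mean = log_mean_pressure(3.5e5)
```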
The liquid-phase mass transfer coefficient k L and the interfacial area a are both dependent on physical properties and on operation conditions, but are usually merged in their empirical cross-product, k L a [45]. The value of k L a was corrected for the process temperature using Eq. 14 and a temperature correction factor θ = 1.022 [46]. $$k_{\text{L}} a [1/{\text{s}}] = k_{\text{L}} a (20\,^\circ {\text{C}}) \cdot \theta^{{T - 293.15\,{\text{K}}}} .$$ The volumetric mass transfer coefficient k L a can be derived using Eq. 15 (derivation: see Additional file 1). $$k_{\text{L}} a (20\,^\circ {\text{C}}) [1/{\text{s}}] = 0.32 \cdot \left(D_{i} /D_{{{\text{O}}_{ 2} }} \right)^{0.5} \cdot \left( { v_{\text{gs}}^{\text{c}} } \right)^{0.7} .$$ The diffusion coefficient D was obtained by correcting the standard diffusion coefficient D 0 for the process temperature T, using the dynamic viscosity at 298.15 K, µ 0 (Eq. 16). $$D [{\text{cm}}^{2} /{\text{s}}] = \left( {T/298.15\,{\text{K}}} \right) \cdot \left( {\mu^{0} /\mu_{T} } \right) \cdot D^{0} .$$ The values for µ 0 at 298.15 K and µ T at the process temperature were obtained with the Gas Viscosity Calculator online tool [47]. The values for D 0 were obtained from [48] and are, as well as the values for µ 0, listed in Additional file 1: Table S10. The gradient between the concentration of a compound in the gas phase and in the liquid phase serves as the driving force for the gas to overcome the gas–liquid interface [45]. The rate with which the gas enters the liquid phase, the transfer rate (TR), was calculated according to [45], using the dissolved gas concentration at equilibrium (c*) and the average concentration in the liquid phase (c l) shown in Eq. 17. It was assumed that c l of CO equals 1% of c*, due to the constant uptake by the microorganisms. 
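The mass transfer relations (Eqs. 14, 15, and 17) can be sketched together with the Henry's law correction that is introduced just below (Eqs. 19–20). The Henry parameters for CO used here (H0 ≈ 9.7·10^−4 mol/(L·bar), k ≈ 1300 K) are typical literature values, not taken from the article's tables; with the CO mole fraction of 0.81 (BOF gas after acid gas removal) and the log-mean pressure of 2 bar from Box 2, they reproduce c*(CO) of about 1 mol/m3:

```python
import math

THETA = 1.022  # temperature correction factor for kLa (Eq. 14)

def kla_20C(v_gs, D_ratio=1.0):
    """kLa at 20 degC [1/s] (Eq. 15), from the superficial gas
    velocity v_gs [m/s] and the diffusivity ratio D_i/D_O2."""
    return 0.32 * D_ratio ** 0.5 * v_gs ** 0.7

def kla_at_T(v_gs, T, D_ratio=1.0):
    """kLa corrected to the process temperature T [K] (Eq. 14)."""
    return kla_20C(v_gs, D_ratio) * THETA ** (T - 293.15)

def henry_at_T(H0, k, T):
    """Temperature-corrected Henry constant H_T [mol/m3/bar]
    (Eq. 20 below), from H0 [mol/(L*bar)] at 298.15 K."""
    return H0 * math.exp(k * (1.0 / T - 1.0 / 298.15)) * 1e3

def transfer_rate(kla, c_star):
    """CO transfer rate [mol/m3/h] (Eq. 17) with c_l = 0.01 * c*;
    kla is given in 1/s, hence the factor 3600."""
    return kla * 3600.0 * 0.99 * c_star

# Assumed literature values for CO (not from the article's tables):
c_star_CO = henry_at_T(9.7e-4, 1300.0, 333.15) * 0.81 * 2.0
eps = 0.6 * 0.15 ** 0.7   # gas holdup at v_gs = 0.15 m/s (Eq. 21)
TR_CO = transfer_rate(kla_at_T(0.15, 333.15), c_star_CO)
```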
$${\text{TR}} [{\text{mol}}/{\text{m}}^{3} /{\text{h}}] = k_{\text{L}} a \cdot \left( {c^{*} - c_{\text{l}} } \right) = k_{\text{L}} a \cdot 0.99 \cdot c^{*} .$$ The concentration of CO2 in the liquid phase was calculated under the assumption that in a steady state the volumetric production rate of CO2 by the cell mass, which follows the process reaction and V liq, equals the transport of CO2 from the liquid phase to the gas phase, according to Eq. 18. $$R_{{{\text{CO}}_{ 2} }} [{\text{mol}}/{\text{m}}^{3} /{\text{h}}] = k_{\text{L}} a \cdot (c^{*} - c_{\text{l}} ).$$ The dissolved gas concentration at equilibrium (c*) is dependent on the solubility of the gas (expressed in Henry's constant). The value of c* was calculated with the mol-fraction of the incoming gas y, the temperature-corrected Henry's law constant H T [49], and the logarithmic mean pressure (p) using Eq. 19. $$c^{*} [{\text{mol}}/{\text{m}}^{3} ] = H_{\text{T}} \cdot y \cdot p.$$ To obtain the temperature-corrected Henry's law constant H T, the constant for solubility in water at standard temperature (H 0) was corrected for the process temperature T using the correction factor k (Eq. 20) [49]. The values of H 0 and k are listed in Additional file 1: Table S10. $$H_{\text{T}} [{\text{mol}}/{\text{m}}^{3} /{\text{bar}}] = H^{0} \cdot {\text{e}}^{{[k \cdot (\left( {1/T} \right) - (1/298.15 K))]}} \cdot 10^{3} .$$ The gas holdup of the reactor (ε) describes the average volume fraction of the gas in the reactor and was calculated using the superficial gas velocity \(v_{\text{gs}}^{\text{c}}\) according to [50] (Eq. 21). We assumed that the headspace volume is negligible. $$\varepsilon = 0.6 \cdot (v_{\text{gs}}^{\text{c}} )^{0.7} .$$ The inflow of fresh syngas into the reactor needs to be compressed. The power required to compress the gas was calculated using Eq. 22 (isentropic gas compression) [51]. We assumed an efficiency of 70%, which is the lowest value for isentropic efficiencies [51]. 
$$\begin{aligned} P[W] &= \frac{\gamma }{\gamma - 1} \cdot p_{1} \cdot V_{1} \cdot \left[ {\left( {\frac{{p_{2} }}{{p_{1} }}} \right)^{{\left( {\gamma - 1} \right)/\gamma }} - 1} \right]\\ & \quad \times \left( {100/70} \right).\end{aligned}$$ The syngas stream from the reforming unit enters the compressor at atmospheric pressure (p 1 = 101,325 Pa). The gas is introduced at the bottom of the reactors. Therefore, the discharge pressure p 2 equals p b, the pressure at the bottom of the reactor. The ratio of the specific heat capacity at constant pressure (c p) to that at constant volume (c v) is designated as γ (Eq. 23) [51]. $$\gamma = \frac{{c_{\text{p}} }}{{c_{\text{v}} }}.$$ The specific heat capacities at constant pressure (c p) and at constant volume (c v) for the gas mixtures were determined using Eqs. 24 and 25 [52]. $$c_{\text{p}} = \mathop \sum \nolimits y_{i} c_{{{\text{p}},i}} ,$$ $$c_{\text{v}} = \mathop \sum \nolimits y_{i} c_{{{\text{v}},i}} .$$ The values of c p,i and c v,i are listed in Additional file 1: Table S11. In order to increase the overall conversion efficiency, a part of the off-gas from the downstream processing unit is recycled to the reactor. Compression of the recycled gas is described in the product recovery section. Product recovery Off-gases from the fermenter comprise CO2, N2, H2O, acetone, as well as unused CO and H2. Process simulators (AspenPlus® [52] and SuperPro Designer® [38]) were used to simulate and validate the costs pertaining to the product recovery and to estimate the energy consumed by various process configurations [38]. The first step in the product recovery scheme was to separate the acetone–water mixture from the gases in the outlet of the fermenter. In order to achieve the desired separation, a condenser was simulated at 283 K and 22 atm (a compressor and a cooler precede the condenser to achieve these conditions).
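Stepping back to the gas-compression relations (Eqs. 22–25), a minimal sketch is given below. The inlet volume flow of 1 m3/s, the use of a single pure-CO "mixture", and the heat capacities (29.1 and 20.8 J/mol/K, i.e. γ ≈ 1.4) are illustrative assumptions, not values from the article:

```python
def gamma_mix(y, cp, cv):
    """Heat-capacity ratio of a gas mixture (Eqs. 23-25) from mole
    fractions y and component heat capacities [J/mol/K] (dicts keyed
    by component name)."""
    cp_mix = sum(y[i] * cp[i] for i in y)
    cv_mix = sum(y[i] * cv[i] for i in y)
    return cp_mix / cv_mix

def compressor_power(p1, p2, V1, gamma, efficiency=0.70):
    """Isentropic compression power [W] (Eq. 22) for an inlet volume
    flow V1 [m3/s] from pressure p1 to p2 [Pa], divided by the
    isentropic efficiency (70% in the article)."""
    ideal = (gamma / (gamma - 1.0)) * p1 * V1 * (
        (p2 / p1) ** ((gamma - 1.0) / gamma) - 1.0)
    return ideal / efficiency

g = gamma_mix({'CO': 1.0}, {'CO': 29.1}, {'CO': 20.8})  # ~1.4
# Compressing from atmospheric pressure to the bottom pressure of
# Box 2 (3.5e5 Pa) takes roughly 215 kW per m3/s of feed.
P = compressor_power(101325.0, 3.5e5, 1.0, g)
```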
Based on the simulated schemes, all CO, N2, and H2 are removed from the top of the condenser as a gas, while water and acetone are recovered from the bottom as liquid condensate. However, a fraction of the CO2 is dissolved along with the condensate. The vapor stream from the condenser at 22 atm needs to pass through a turbine, followed by a heater, to match the feed conditions to the fermenter. Although the overall scheme is not heat integrated, the heat for the heater is recovered from the preceding cooling operation (the cooler before the condenser) rather than being supplied by fresh utility. Since the three components (acetone, water, and dissolved CO2) present in the mixture can be separated by exploiting their relative volatilities, distillation was selected for the subsequent purification. In this study, the evaluated schemes were run to achieve high product concentrations with minimal losses. Simulations were performed on configurations with varying conditions (see Additional file 2). To achieve a higher level of purity, a process scheme with two distillation columns was employed. The first distillation column was used to remove most of the CO2 from the liquid mixture, while the second was used to recover the product acetone with high purity (greater than 99.1%) as the distillate fraction. Additional file 1: Table S12 summarizes the process configurations and the preliminary design choices made to achieve this separation. Additional file 1: Figure S1 shows the process flow diagram of the downstream processing unit. Heat balance The net rate of heat generation by fermentation was set up (Eq. 26), taking into consideration the heat released by the process reaction, the heat generated during compression of the fresh syngas, and the cooling effect of acetone and water evaporation. The heating/cooling requirements for condensation and distillation from the off-gas were accounted for in the simulation of downstream processing, as described above.
$$\Delta H_{\text{net}} = \Delta H_{r} + \Delta H_{\text{gas}}^{\text{comp}} + \Delta H_{\text{acetone}}^{\text{evap}} + \Delta H_{\text{water}}^{\text{evap}} .$$ The contribution of the compression of the fresh gas and the recycled gas to the net heat balance was calculated using Eq. 27 [34]. $$\Delta H^{\text{comp}} \,[{\text{kJ}}/{\text{h}}] = R_{i} \cdot c_{\text{v}} \cdot \left( {T - T_{2} } \right).$$ The specific molar heat capacity at constant volume, c v, was determined using Eq. 25. The temperature of the compressed gas (T 2) was determined with Eq. 28 [51]. $$T_{2} = T_{1} \cdot (p_{2} /p_{1} )^{{\left( {\gamma - 1} \right)/\gamma }} .$$ The fresh syngas has a temperature of T 1 = 297 K. The extent of the cooling effect for compounds entering the vapor phase (ΔH evap) was calculated for water and acetone. ΔH evap was determined by multiplying the rate of acetone or water evaporation (in mol/h), respectively, by the heat of vaporization \(\Delta H_{i}^{\text{vap}}\) at 60 °C (Eq. 29). For \(\Delta H_{i}^{\text{vap}}\) values see Additional file 1: Table S13. The amount of acetone entering the vapor phase per hour equaled the hourly acetone production rate. $$\Delta H^{\text{evap}} \,[{\text{kJ}}/{\text{h}}] = R_{i}^{\text{vap}} \cdot \Delta H_{i}^{\text{vap}} .$$ The rate of water evaporation was determined using Raoult's law (Eq. 30) [53]. The value of \(p_{{{\text{H}}_{ 2} {\text{O}}}}^{\text{vap}}\) is listed in Additional file 1: Table S13. $$R_{{{\text{H}}_{ 2} {\text{O}}}}^{\text{vap}} = R_{\text{total}}^{\text{out}} \cdot \left( {p_{{{\text{H}}_{ 2} {\text{O}}}}^{\text{vap}} /p_{\text{t}} } \right).$$ After summing up the aforementioned values (Eq. 26), the net heat generated by fermentation, ΔH net (in kJ/h), was used to calculate the hourly cooling water requirement R chill (Eq. 31) using the molar heat capacity of water c p and the temperature difference ΔT between the process temperature and the temperature of the chilled water.
$$R_{\text{chill}} [{\text{mol}}/{\text{h}}] = \Delta H_{\text{net}} /(c_{\text{p}} \cdot \Delta T).$$ Product concentration Since a steady-state system was assumed, the acetone concentration in the fermentation broth was calculated under the assumption that the rate of production (R p) equals the rate of acetone leaving the reactor with the off-gas (F out). Equation 32 was used to obtain the partial pressure of acetone (p acetone) in the off-gas. $$F_{\text{out}} \cdot (p_{\text{acetone}} /p) \cdot \left( {n/V} \right) = F_{\text{out}} \cdot (p_{\text{acetone}} /p) \cdot (p/(R \cdot T)) = R_{\text{p}} .$$ Using the temperature-corrected Henry's law constant of acetone, H T,acetone, the acetone concentration in the fermentation broth (c acetone) could be derived from its partial pressure p acetone (Eq. 33). $$c_{\text{acetone}} \, [ {\text{mol}}/{\text{m}}^{3} ] = p_{\text{acetone}} \cdot H_{{T,{\text{acetone}}}} .$$ Cell mass concentration and productivity The amount of cell mass (in C-mol) follows from the specific product formation rate (q p) and the total acetone production rate (R p) (Eq. 34). $$n_{\text{CM}} [{\text{C-mol}}] = R_{\text{p}} /q_{\text{p}} .$$ Subsequently, the cell mass concentration c CM could be determined (Eq. 35). $$c_{\text{CM}} [{\text{C-mol/m}}^{3} ] = n_{\text{CM}} /V_{\text{liq}} = n_{\text{CM}} /\left( {\left( {1 - \varepsilon } \right) \cdot V_{\text{reactor}} } \right).$$ Determination of the variable production costs When translating utilities into costs, the calculations were based on an electricity cost of 0.08 $/kWh, which is the average industrial electricity price in the state of Indiana in 2014 [54]. The cost for chilled water (4 °C) of 0.05 $/m3 was derived from the SuperPro Designer® database. This study has been based on a hypothetical facility located in the Midwest of the US.
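The product- and cell-mass-concentration relations (Eqs. 32–35) can be sketched as small functions. Note that Eq. 32 rearranges to p_acetone = R_p·R·T/F_out when F_out is the volumetric off-gas flow; the function names and any numeric inputs below are illustrative, not article values:

```python
R_GAS = 8.3145  # gas constant [J/mol/K] = [Pa*m3/(mol*K)]

def acetone_partial_pressure(R_p, F_out, T):
    """Partial pressure of acetone in the off-gas [Pa], from Eq. 32
    rearranged (F_out: volumetric off-gas flow [m3/h], R_p: acetone
    production rate [mol/h], T: process temperature [K])."""
    return R_p * R_GAS * T / F_out

def acetone_concentration(p_acetone, H_T_acetone):
    """Acetone concentration in the broth [mol/m3] (Eq. 33);
    H_T_acetone is the temperature-corrected Henry constant of
    acetone in mol/(m3*Pa)."""
    return p_acetone * H_T_acetone

def cellmass_concentration(R_p, q_p, eps, V_reactor):
    """Cell mass concentration [C-mol/m3] (Eqs. 34-35) from the total
    production rate R_p, the specific production rate q_p, the gas
    holdup eps and the reactor volume V_reactor [m3]."""
    n_cm = R_p / q_p                          # Eq. 34
    return n_cm / ((1.0 - eps) * V_reactor)   # Eq. 35
```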
The syngas is fed into a bubble column reactor with a height of 30 m and a diameter of 6 m, in which the production strain converts the gaseous substrate into acetone as the sole product. Acetone leaves the reactor with the off-gas and is recovered in subsequent condensation and distillation steps (Fig. 1). The annual production was set to 30 kt/year; with 330 days of plant operation per year, the production rate has to be at least 3.79·10^3 kg/h in order to reach the desired production metrics. The study was conducted in a multi-level approach consisting of the following three parts: bacterial physiology, bioreactor design, and cost analysis (Fig. 2). Those parts were implemented such that the output of the thermodynamic calculations is directly connected with the reactor design and cost estimations and vice versa. Fig. 1 Process overview for the biological production of acetone from syngas. The fresh CO-rich gas is mixed with recycled gas and introduced into the reactor at the flow rate R in. The recycled gas leaves the condensation unit at high pressure and is passed through a turbine (T) to adjust the pressure and to generate electricity, while the fresh syngas requires compression (C). The bubble column reactor has a height of 30 m and a diameter of 6 m. CO entering the liquid phase is assumed to be completely converted to acetone by the production strain Moorella thermoacetica. Acetone leaves the reactor with the off-gas; acetone and evaporated water are condensed and then separated in a distillation step. The water from the product recovery is recycled to the reactor. Fig. 2 Study approach. The presented model to estimate the variable costs of acetone production from CO with M. thermoacetica can be broken down into 3 parts. Thermodynamics: assuming an energy requirement of 62 kJ/C-mol/h for maintenance, 1000 kJ/C-mol for growth, and a specific growth rate of 0.10 h−1, the process reaction was established.
The process reaction, which describes the rate of conversion of CO, H2O, and the nitrogen source to CO2, cell mass, and acetone, depends on the concentration of the reactants. The concentration of the gases and of acetone in the liquid was determined by taking gas–liquid mass transfer limitations into account. Bioreactor: the reactor dimensions (30 m height, 6 m diameter), the gas inflow rate R in, and the composition of the syngas were fixed. The gas transfer rate into the liquid under the chosen process conditions was determined depending on the ratio of fresh and recycled gas. The gas transfer rate determines the amount of substrate that is available to the cell mass and was used as input in the process reaction. For the thermodynamic calculations and the calculations on gas–liquid mass transfer, the process temperature of 60 °C was taken into account. Cost analysis: the production rate of the whole plant was set to 30 kt/year and eventually determined the sizing of the plant as well as the variable costs of production. Prior to the model implementation, we studied the metabolic pathways of M. thermoacetica to analyze which components of the syngas can serve as substrate for acetone production and the theoretical conversion yield, as described in more detail below. We determined the cost for three different syngas sources and rejected those that are, based on the feedstock unit cost and the theoretical yield, not economically viable. In the first part of the model (Fig. 2), we applied the principles of anaerobic product formation, maintenance, and growth to derive the substrate conversion rate. In the second part, the bioreactor design was taken into account to determine the amount of substrate that is available to the cell mass. We identified parameters related to fermentation and plant sizing and estimated, in the third part of the study, the variable costs of production. ATP yield for production of acetone CO, CO2, and H2 are the main components of syngas. M.
thermoacetica can grow autotrophically with CO as carbon source and electron donor, or with CO2 as carbon source and H2 as electron donor [7]. Whether CO and H2/CO2 can serve as substrates for the production of acetone depends on the net ATP production of the conversion. Figure 3 shows an overview of the pathways from H2/CO2 or CO, respectively, to acetyl-CoA and acetone. Based on the mechanism of energy generation in M. thermoacetica proposed by Schuchmann and Müller [11], no ATP would be produced per mol acetone for growth on H2 and CO2. However, for growth on CO as carbon and energy source, 1 mol ATP would be gained per mol of acetone. Hence, as there is no net gain of ATP when CO2 serves as carbon source with H2 as electron donor, we assumed that only CO can serve as substrate for the production of acetone. Alternative scenarios, which would allow the utilization of H2/CO2 alongside CO, are addressed in the discussion section. When CO serves as the only carbon source, 1 mol acetone is produced from 8 mol CO; the theoretical carbon yield is therefore 0.125 mol acetone/mol CO. Fig. 3 ATP generation for acetone production as the sole end product. According to the mechanism of energy conservation for autotrophic growth in M. thermoacetica, 1 mol ATP and 2 mol each of NADH and NADPH are required in the Wood–Ljungdahl pathway (WLP) for the fixation and conversion of CO2 to acetyl-CoA. When CO2 serves as carbon source, reduced ferredoxin is required to reduce CO2 to CO. This mol of reduced ferredoxin is additionally available to the cell when CO serves as electron donor and carbon source, which explains the ATP generation when CO serves as substrate.
acac acetoacetate, acac-CoA acetoacetyl-CoA, ac-CoA acetyl-CoA, ac-P acetyl phosphate, ATP adenosine triphosphate, CODH/ACS CO dehydrogenase/acetyl-CoA synthase, ECH membrane-associated [NiFe]-hydrogenase, Fd ferredoxin (oxidized form), Fd 2− ferredoxin (reduced form), HydABC electron-bifurcating ferredoxin- and NAD-dependent [FeFe]-hydrogenase, NAD + nicotinamide adenine dinucleotide (oxidized form), NADH nicotinamide adenine dinucleotide (reduced form), NADP + nicotinamide adenine dinucleotide phosphate (oxidized form), NADPH nicotinamide adenine dinucleotide phosphate (reduced form), NfnAB electron-bifurcating transhydrogenase Syngas sources The CO-rich gas feed can be derived from various sources. We estimated the cost of syngas with a CO content of 33–38 mol/m3 derived from industrial waste gas, natural gas, and biomass. The theoretical conversion yield was used to identify syngas sources that have the potential to be utilized for an economically viable biological production of acetone. As shown in Table 1, we estimated a cost of 7.6·10^−4 $/mol CO (27 $/t CO) for off-gas from a BOF process in the steelmaking industry after acid gas removal. With a carbon yield of 0.125 mol acetone/mol CO, the cost for the substrate would equal 0.11 $/kg acetone, which is 10–25% of the recent acetone selling price. For syngas derived from natural gas (0.0084 $/mol CO or 298 $/t CO), a conversion of CO to acetone at the maximum theoretical yield of 0.125 mol acetone/mol CO would lead to a substrate cost of 1.16 $/kg acetone, which is above the recent acetone selling price. For the production of syngas with a high CO content from corn stover, multiple process steps are required. The price for this syngas was determined to be 0.015 $/mol CO (536 $/t CO). Taking into account the theoretical conversion yield, the substrate-related cost of 2.1 $/kg acetone would make the process economically uninteresting.
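The substrate-cost figures above follow directly from the theoretical yield; a quick check (using the molar mass of acetone, 58.08 g/mol, and 8 mol CO per mol acetone) can be sketched as:

```python
M_ACETONE = 58.08      # molar mass of acetone [g/mol]
CO_PER_ACETONE = 8.0   # mol CO per mol acetone (yield 0.125)

def substrate_cost_per_kg(co_price):
    """Substrate cost [$ per kg acetone] from the CO price
    [$ per mol CO] at the maximum theoretical yield."""
    mol_acetone_per_kg = 1000.0 / M_ACETONE
    return co_price * CO_PER_ACETONE * mol_acetone_per_kg

cost_bof = substrate_cost_per_kg(7.6e-4)   # BOF gas     -> ~0.1 $/kg
cost_ng = substrate_cost_per_kg(0.0084)    # natural gas -> ~1.16 $/kg
cost_bio = substrate_cost_per_kg(0.015)    # corn stover -> ~2.1 $/kg
```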
Therefore, only syngas derived from BOF gas has the potential to be economically viable. Thermodynamics and bacterial physiology For the simulated scenario, the process temperature was set to 60 °C, which is within the optimal range for M. thermoacetica (55–60 °C) [7]. Growth profiles of M. thermoacetica on CO have previously been published by Kerby and Zeikus [55]; we extracted data from Fig. 2 of that publication (using WebPlotDigitizer [56]) and determined a specific growth rate of around 0.10 h−1, which we used for this study. The Gibbs free energy released by the reaction of CO to acetone amounts to −323 kJ/mol under standard conditions (Eq. 36). The standard molar Gibbs energy of formation Δ f G 0 of the single reactants is listed in Additional file 1: Table S9. $$8 {\text{CO}} + 3 {\text{H}}_{ 2} {\text{O}} \to 1 {\text{C}}_{ 3} {\text{H}}_{ 6} {\text{O}} + 5 {\text{CO}}_{ 2} ;\quad \Delta_{r} G^{0} = - 322.8\,{\text{kJ}}/{\text{mol}} .$$ The Gibbs energy of the reaction was corrected for the process temperature of 60 °C (using Eq. 5) to obtain Δ r G T = −305.0 kJ/mol. Δ r G T was subsequently corrected for the concentration of the reactants (using Eq. 6) to obtain Δ r G T,c, the Gibbs energy of the reaction at process conditions. The concentration of the reactants changes with the chosen fermentation parameters such as gas flow and gas recycle rate. However, due to the low CO but high CO2 concentration in the fermentation broth, the absolute value of Δ r G T,c is lower than that of Δ r G T. The Gibbs energy Δ r G T,c released during product formation is used by the cell mass for maintenance and cell growth [24]. The rate of substrate conversion for product and cell mass formation under the respective process conditions is eventually summarized in the process reaction. Box 1 exemplifies how the process reaction can be derived. Box 1. Process reaction (CO to acetone) Catabolic reaction (Eqs.
3, 4); in mol: \(-8\,{\text{CO}} - 3\,{\text{H}}_{2}{\text{O}} + 1\,{\text{C}}_{3}{\text{H}}_{6}{\text{O}} + 5\,{\text{CO}}_{2};\) $$\Delta_{r} G^{0} = -322.8\,{\text{kJ}}/{\text{mol}};\quad \Delta_{r} H^{0} = -475.5\,{\text{kJ}}/{\text{mol}}$$ Correction for process temperature T = 333.15 K (Eq. 5): $$\begin{aligned} \Delta_{r} G^{T} &= \Delta_{r} G^{0} \cdot \left( {T/298.15\,{\text{K}}} \right) + \Delta_{r} H^{0} \cdot (1 - T/298.15\,{\text{K}}) \\ &= -322.8\,{\text{kJ/mol}} \cdot \left( {333.15\,{\text{K}}/298.15\,{\text{K}}} \right) + \left( {-475.5\,{\text{kJ/mol}}} \right) \cdot \left( {1 - 333.15\,{\text{K}}/298.15\,{\text{K}}} \right) \\ &= -305.0\,{\text{kJ/mol}} \end{aligned}$$ Correction for the concentration of the reactants according to Eq. 6; for \(c_{\text{acetone}}\) = 0.50 M; \(c_{\text{CO}}\) = 0.001 M; \(c_{{{\text{CO}}_{2}}}\) = 0.003 M: $$\begin{aligned} \Delta_{r} G^{T,c} &= \Delta_{r} G^{T} + R \cdot T \cdot \ln \left( {c_{i}^{{\nu_{i}}}} \right) \cdot 10^{-3} \\ &= -305.0\,{\text{kJ/mol}} + 8.3145\,{\text{J/K/mol}} \cdot 333.15\,{\text{K}} \cdot \ln \left( {0.50^{1} \cdot 0.001^{-8} \cdot 0.003^{5}} \right) \cdot 10^{-3} \\ &= -269.7\,{\text{kJ/mol}} \end{aligned}$$ Gibbs free energy per mol substrate: \(\Delta G_{\text{CO}} = (-269.7\,{\text{kJ}}/{\text{mol}})/8 = -33.71\,{\text{kJ}}/{\text{mol}},\) i.e. 33.71 kJ available per mol CO. Anabolic reaction (1); in mol: \(-2.1\,{\text{CO}} - 0.60\,{\text{H}}_{2}{\text{O}} - 0.20\,{\text{NH}}_{4}^{+} + 1.0\,{\text{CH}}_{1.8}{\text{O}}_{0.5}{\text{N}}_{0.2} + 1.1\,{\text{CO}}_{2} + 0.20\,{\text{H}}^{+}.\) Maintenance energy requirement: m G = 62 kJ/C-mol/h; ratio of maintenance energy requirement and Gibbs free energy per mol substrate: mG/ΔG CO = 1.8; maintenance reaction (2): catabolic reaction to maintain 1 mol of cell mass per hour; in mol/h: \(-1.8\,{\text{CO}} - 0.68\,{\text{H}}_{2}{\text{O}} + 0.23\,{\text{C}}_{3}{\text{H}}_{6}{\text{O}} + 1.13\,{\text{CO}}_{2} + 1.1\cdot 10^{2}\,{\text{kJ}}/{\text{h}}.\) Growth energy requirement: a G = 1000 kJ/C-mol; ratio of growth energy requirement and Gibbs free energy per mol substrate: aG/ΔG CO = 29.66. Catabolic reaction to provide energy to grow 1 C-mol of cell mass (3); in mol: \(-29.66\,{\text{CO}} - 11.12\,{\text{H}}_{2}{\text{O}} + 3.71\,{\text{C}}_{3}{\text{H}}_{6}{\text{O}} + 18.54\,{\text{CO}}_{2} + 1759\,{\text{kJ}}.\) Growth reaction: combination of the anabolic (1) and catabolic reaction (3) for growth of 1 mol cell mass (4); in mol: \(-32\,{\text{CO}} - 12\,{\text{H}}_{2}{\text{O}} - 0.20\,{\text{NH}}_{4}^{+} + 1.0\,{\text{CH}}_{1.8}{\text{O}}_{0.5}{\text{N}}_{0.2} + 3.71\,{\text{C}}_{3}{\text{H}}_{6}{\text{O}} + 19.64\,{\text{CO}}_{2} + 0.20\,{\text{H}}^{+} + 1.8\cdot 10^{2}\,{\text{kJ}}.\) Process reaction (with μ = 0.10 h−1) according to Eq. 8: combination of the maintenance reaction (2) and growth reaction (4); q i -rates in mol/h: \(\begin{aligned} &(-1.8 - 32\cdot 0.10)\,{\text{CO}} + (-0.68 - 12\cdot 0.10)\,{\text{H}}_{2}{\text{O}} + (-0.2\cdot 0.10)\,{\text{NH}}_{4}^{+} + 1\cdot 0.10\,{\text{CH}}_{1.8}{\text{O}}_{0.5}{\text{N}}_{0.2} \\ &+ (0.23 + 0.10\cdot 3.71)\,{\text{C}}_{3}{\text{H}}_{6}{\text{O}} + (1.13 + 19.64\cdot 0.10)\,{\text{CO}}_{2} + (0.2\cdot 0.10)\,{\text{H}}^{+} + (1.4\cdot 10^{2} + 1.8\cdot 10^{2}\cdot 0.10)\,{\text{kJ}}/{\text{h}} \\ &= -5.0\,{\text{CO}} - 1.9\,{\text{H}}_{2}{\text{O}} - 0.02\,{\text{NH}}_{4}^{+} + 0.10\,{\text{CH}}_{1.8}{\text{O}}_{0.5}{\text{N}}_{0.2} + 0.60\,{\text{C}}_{3}{\text{H}}_{6}{\text{O}} + 3.1\,{\text{CO}}_{2} + 0.02\,{\text{H}}^{+} + 1.6\cdot 10^{2}\,{\text{kJ}}/{\text{h}}. \end{aligned}\) Process reaction per mol of acetone; in mol/h: \(-8.3\,{\text{CO}} - 3.2\,{\text{H}}_{2}{\text{O}} - 0.03\,{\text{NH}}_{4}^{+} + 0.17\,{\text{CH}}_{1.8}{\text{O}}_{0.5}{\text{N}}_{0.2} + 1.0\,{\text{C}}_{3}{\text{H}}_{6}{\text{O}} + 5.2\,{\text{CO}}_{2} + 0.03\,{\text{H}}^{+} + 2.7\cdot 10^{2}\,{\text{kJ}}/{\text{h}}.\)

Bioreactor considerations

The given reactor size, the operating pressure and temperature, and the syngas flow rate determine the transfer capacity of the gases into the broth. The amount of gas available to the cells as substrate is restricted by the low solubility of the gases. Hence, the gas–liquid mass transfer most likely becomes the rate-limiting step of the syngas-to-acetone conversion. As shown in Fig. 1, the gas is compressed into the bioreactor at the bottom of the reactor, at molar flow rate R in (in mol/h) and pressure p b. The k L a for CO was determined (using Eqs. 12–15), and the gas transfer rate of CO into the liquid was calculated using Eq. 17 under the assumption that the CO concentration is kept low by the constant uptake by the production host; it was therefore estimated to be 1% of c*(CO). The concentration of CO2 in the liquid phase was calculated using Eq. 18, with the CO2 production rate taken from the process reaction. The rate at which CO enters the liquid phase was obtained by multiplying the CO transfer rate TR(CO) with the liquid volume V liq. Box 2 shows an example of how the transfer rate of CO is calculated. The gas leaving the bioreactor consists of gas which was not absorbed into the liquid phase and of CO2, which is produced by M. thermoacetica during the conversion of CO to acetone. Additionally, the off-gas contains the produced acetone and water. Acetone and water are removed from the off-gas in a condensation step, and acetone is separated from the water in a subsequent distillation step. We accounted for the loss of product when determining the number of reactors required to meet the desired hourly production (8% of the product is lost in the downstream processes).
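The transfer-rate chain just described (Eqs. 12–17) can be collected into a short script; the correlation constants, diffusivities, and inputs are the ones used in the worked example of Box 2 below, and the temperature-correction exponent follows that numerical example.

```python
# Sketch of the CO gas-liquid transfer chain (Eqs. 12-17), using the inputs
# of Box 2; correlation constants and diffusivities are taken from Box 2.
R_GAS = 8.314   # gas constant, (m3*Pa)/(K*mol)
T = 333.15      # process temperature, K

def average_gas_flow(R_in, R_out, p):
    """Eq. 12: pressure-corrected average volumetric gas flow, in m3/h."""
    return (R_in + R_out) * 0.5 * R_GAS * T / p

def superficial_velocity(F_av, A):
    """Eq. 13: superficial gas velocity, in m/s (F_av in m3/h, A in m2)."""
    return F_av / A / 3600.0

def kla_co(v_gs, D_co=2.08e-5, D_o2=2.15e-5, theta=1.022):
    """Eqs. 14-15: volumetric mass transfer coefficient for CO, in 1/s.

    The temperature-correction exponent (T - 298.15 K) follows the
    numerical example in Box 2.
    """
    return 0.32 * (D_co / D_o2) * v_gs**0.7 * theta**(T - 298.15)

def transfer_rate(kla_per_h, c_star, c_liq_fraction=0.01):
    """Eq. 17: TR(CO) in mol/m3/h; c_liq is assumed to be 1% of c*(CO)."""
    return kla_per_h * (1.0 - c_liq_fraction) * c_star

F_av = average_gas_flow(R_in=8e5, R_out=7e5, p=2e5)   # ~10.4e3 m3/h
v_gs = superficial_velocity(F_av, A=28.0)             # ~0.103 m/s
k_la = kla_co(v_gs)                                   # ~0.135 1/s (~486 1/h)
TR_CO = transfer_rate(k_la * 3600.0, c_star=1.0)      # ~481 mol/m3/h
```

Multiplying TR(CO) by the broth volume V liq then gives R liq(CO), the molar rate at which CO enters the liquid phase.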
Additional product losses which occur in the steps from the purified to the final shipped products, for example during packaging, were neglected. The product recovery was simulated as described in the methods section. In our simulation, the water separated from the acetone was recycled to the reactor. The off-gas from the condensation/distillation step, consisting of H2, CO2, and CO, can be mixed with fresh syngas and recycled to the reactor. The choice of the recycle rate (as a percentage of R in) is a trade-off between the production rate and the utility costs for gas compression on the one hand, and the costs for fresh syngas on the other, and will be addressed in the next section. The gas transfer rate (TR) depends on two terms: the volumetric mass transfer coefficient k L a and the concentration of the gas in the liquid, c liq. The k L a-term depends on the average superficial gas velocity, determined by the average gas flow rate and pressure. The c liq-term, however, depends on the partial pressure of the gas going into the reactor (c g), which is in turn determined by the gas transfer rate TR if the gas is, at least partly, recycled. Therefore, the composition of the gas injected into the bioreactor changes with every recycling round and converges to a steady state for a set recycle rate. Additional file 1: Figure S2 illustrates how the fermentation parameters and the terms related to the gas–liquid mass transfer influence each other. Depending on the rate of gas recycling, the off-gas, which will be purged, contains a certain amount of CO. Since CO is considered a pollutant [57], the CO emission of the production process has to be limited. The cost of measures such as flaring [58] was not taken into account in this study.

Box 2. Calculation of the CO transfer rate

For R in = 8·10⁵ mol/h; R out = 7·10⁵ mol/h; p b = 3.5·10⁵ Pa; p = 2·10⁵ Pa; A = 28 m²; c*(CO) = 1 mol/m³. Calculation of the pressure-corrected gas flow (Eq.
12): $$F_{\text{av}}\,[{\text{m}}^{3}/{\text{h}}] = \left[ {\left( {R_{\text{in}} + R_{\text{out}}} \right) \cdot 0.5 \cdot R \cdot T} \right]/p = 10.4\cdot 10^{3}\,{\text{m}}^{3}/{\text{h}}$$ Calculation of the superficial gas velocity (Eq. 13): $$v_{\text{gs}}^{\text{c}} = F_{\text{av}}/A = 10.4\cdot 10^{3}\,{\text{m}}^{3}/{\text{h}} / 28\,{\text{m}}^{2} = 371\,{\text{m}}/{\text{h}} = 0.103\,{\text{m}}/{\text{s}}$$ Calculation of the volumetric mass transfer coefficient (Eqs. 14, 15): $$\begin{aligned} k_{\text{L}} a &= 0.32 \cdot (D_{\text{CO}}/D_{{{\text{O}}_{2}}}) \cdot \left( {v_{\text{gs}}^{\text{c}}} \right)^{0.7} \cdot \theta^{{T - 298.15\,{\text{K}}}} \\ &= 0.32 \cdot \left( {2.08\cdot 10^{-5}/2.15\cdot 10^{-5}} \right) \cdot 0.103^{0.7} \cdot 1.022^{333.15 - 298.15} \\ &= 0.135\,{\text{s}}^{-1} = 486\,{\text{h}}^{-1} \end{aligned}$$ Calculation of the CO transfer rate (Eq. 17): $${\text{TR(CO)}} = k_{\text{L}} a \cdot \left( {c^{*} - c_{\text{l}}} \right) = k_{\text{L}} a \cdot 0.99 \cdot c^{*} = 481\,{\text{mol/m}}^{3}/{\text{h}}$$

Parameters for plant optimization

Because the system is considered in steady state, all the CO which enters the liquid phase, R liq(CO), will be converted by the cell mass. In our fermentation set-up, the reactor size (30 m height, 6 m diameter) and the composition of the syngas are fixed. Additionally, the pressure-corrected superficial gas velocity \(v_{\text{gs}}^{\text{c}}\) was kept below 0.15 m/s. The molar flow rate of the gas into the reactor, R in, and the ratio of recycled gas, R rec, could be varied. However, several optimization constraints restrict the choice of R in and R rec in an industrial setting:

Concentration of acetone in liquid

The concentration of acetone in the fermentation broth was determined using Eqs.
32 and 33, assuming steady state: acetone leaves the reactor with the outflowing gas stream at the same rate as it is produced by the cell mass. Two factors have an effect on the acetone concentration in the fermentation broth: firstly, the acetone concentration is positively correlated with the production rate, and the production rate decreases with increasing R rec values. Secondly, the acetone concentration decreases with higher gas outflow rates (when R in is high), due to the gas-stripping effect. Hence, the acetone concentration can be kept low when both R in and R rec are high. Tests in our lab showed that M. thermoacetica strain ATCC 39073 can tolerate acetone concentrations of up to 30 g/l without its growth behavior being affected (unpublished data).

Number of reactors required to meet the desired production

The more CO is available to the cell mass, the more acetone is produced. This can be achieved by a high gas inflow (R in high) and a low recycle rate (R rec low). With increasing acetone production per reactor, fewer reactors are required to achieve the desired acetone production.

Variable costs of production

Increasing the acetone production by raising the flow of fresh syngas comes at a cost: the variable costs for feedstock and gasification rise. Additionally, it has to be taken into account that increasing the gas recycle rate (R rec/R in high) leads to more efficient utilization of the substrate. However, a high gas recycling rate increases the number of reactors required to meet the desired production metrics. The variable costs of syngas production and fermentation are crucial optimization parameters in the process design. The costs can be categorized into pre-fermentation costs (that is, syngas production) and fermentation-related costs. As described above, we determined the cost of syngas derived from BOF gas, natural gas, and corn stover.
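The reactor-count trade-off above reduces to simple integer arithmetic. A minimal sketch, using the per-reactor output (2058 kg/h of final product) and the production target (3.79·10³ kg/h) reported for the scenario analyzed below:

```python
import math

def reactors_required(target_kg_h, per_reactor_kg_h):
    """Number of identical reactors needed to meet a target hourly output."""
    return math.ceil(target_kg_h / per_reactor_kg_h)

# Values from the scenario analysis below: 2058 kg/h final product per
# reactor versus a desired plant output of 3.79e3 kg/h.
n = reactors_required(target_kg_h=3.79e3, per_reactor_kg_h=2058.0)
# n == 2
```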
Only BOF-derived syngas, with a cost of 7.6·10⁻⁴ $/mol CO, is, based on the theoretical conversion yield, an interesting syngas source to date. As fermentation-related variable costs, we took into account the costs for chilled water, the power requirements for gas compression, and product recovery. Other fermentation-related costs, such as media sterilization, disposal of fermentation residue, and media components, were not taken into account. To determine the requirement for chilled water, the heat balance of the reaction was set up, and the rate of chilled water was determined using Eq. 31 and translated into costs (0.05 $/m³ chilled water). Box 3 contains examples of how the heat balance was set up and how the cooling requirements can be determined. The power requirements for gas compression were calculated using Eq. 22. Box 4 exemplifies how those power requirements are determined. The power requirements for product recovery (condensation and distillation) were retrieved from simulations with SuperPro Designer® and AspenPlus® and converted into costs assuming 0.08 $/kWh. Further details on the selection of the downstream process scheme are described in Additional file 2.

Box 3. Heat balance and calculation of requirements for chilled water

Net heat balance: \(\Delta H_{\text{net}} = \Delta H_{r} + \Delta H_{\text{gas}}^{\text{comp}} + \Delta H_{\text{acetone}}^{\text{evap}} + \Delta H_{\text{water}}^{\text{evap}}.\) Heat released by the cell mass per reactor (obtained from the process reaction), e.g. \(\Delta H_{r} = -1.5\cdot 10^{4}\,{\text{MJ}}/{\text{h}};\) rate of heat generated by gas compression: \(R_{\text{gas}} = 4\cdot 10^{5}\,{\text{mol/h}}\;(T_{2} = 430\,{\text{K}};\;c_{v} = 2.14\cdot 10^{-2}\,{\text{kJ/(mol}}\cdot{\text{K)));}}\) $$\Delta H_{\text{gas}}^{\text{comp}} = 4\cdot 10^{5}\,{\text{mol}}/{\text{h}} \cdot 2.14\cdot 10^{-2}\,{\text{kJ}}/({\text{mol}}\cdot{\text{K}}) \cdot (333\,{\text{K}} - 430\,{\text{K}}) = -8.3\cdot 10^{2}\,{\text{MJ}}/{\text{h}}.$$ Rate of acetone evaporation equals the acetone production rate, e.g. \(R_{\text{acetone}}^{\text{vap}} = 3.6\cdot 10^{4}\,{\text{mol}}/{\text{h}}.\) Rate of water evaporation, e.g. \(R_{{{\text{H}}_{2}{\text{O}}}}^{\text{vap}} = 1.1\cdot 10^{5}\,{\text{mol}}/{\text{h}}.\) Using the heat of vaporization for water and acetone at 60 °C (Additional file 1: Table S13): $$\Delta H^{\text{vap}}({\text{H}}_{2}{\text{O}}) = 42.6\,{\text{kJ}}/{\text{mol}};\quad \Delta H^{\text{vap}}({\text{acetone}}) = 29.0\,{\text{kJ}}/{\text{mol}}$$ $$\Delta H_{\text{acetone}}^{\text{evap}} = R_{\text{acetone}}^{\text{vap}} \cdot \Delta H^{\text{vap}}({\text{acetone}}) = 3.6\cdot 10^{4}\,{\text{mol}}/{\text{h}} \cdot 29.0\,{\text{kJ}}/{\text{mol}} = 1.0\cdot 10^{3}\,{\text{MJ}}/{\text{h}}$$ $$\Delta H_{\text{water}}^{\text{evap}} = R_{\text{water}}^{\text{vap}} \cdot \Delta H^{\text{vap}}({\text{water}}) = 1.1\cdot 10^{5}\,{\text{mol}}/{\text{h}} \cdot 42.6\,{\text{kJ}}/{\text{mol}} = 4.7\cdot 10^{3}\,{\text{MJ}}/{\text{h}}$$ Calculation of the net heat balance: $$\begin{aligned} \Delta H_{\text{net}} &= \Delta H_{r} + \Delta H_{\text{gas}}^{\text{comp}} + \Delta H_{\text{acetone}}^{\text{evap}} + \Delta H_{\text{water}}^{\text{evap}} \\ &= (-1.5\cdot 10^{4} - 8.3\cdot 10^{2} + 1.0\cdot 10^{3} + 4.7\cdot 10^{3})\,{\text{MJ}}/{\text{h}} = -10^{4}\,{\text{MJ}}/{\text{h}} \end{aligned}$$ Calculation of the required amount of cooling water (Eq. 30): $$R_{\text{chill}} = |\Delta H_{\text{net}}|/(c_{p} \cdot \Delta T) = (10^{10}\,{\text{J}}/{\text{h}})/((71.19\,{\text{J}}/{\text{mol}}/{\text{K}}) \cdot (333 - 277)\,{\text{K}}) = 2.5\cdot 10^{6}\,{\text{mol}}/{\text{h}} = 45\,{\text{m}}^{3}/{\text{h}}$$ Costs for cooling water: 45 m³/h · 0.05 $/m³ ≈ 2.3 $/h

Box 4. Calculation of power requirements for gas compression

Power requirement to compress the syngas into the reactor (Eq. 21): $$P[{\text{W}}] = \frac{\gamma}{\gamma - 1} \cdot p_{1} \cdot V_{1} \cdot \left[ {\left( {\frac{{p_{2}}}{{p_{1}}}} \right)^{(\gamma - 1)/\gamma} - 1} \right] \cdot (100/70)$$ p 1 = 1.0·10⁵ Pa; p 2 = p b = 3.5·10⁵ Pa; compression of 7·10³ m³/h; syngas composition e.g.: CO (81 mol%), CO2 (0 mol%), H2 (2 mol%), N2 (17 mol%) $$\gamma_{\text{gas}} = \frac{{c_{\text{p}}^{\text{gas}}}}{{c_{\text{v}}^{\text{gas}}}} = \frac{{\sum {y_{i} c_{{{\text{p}},i}}}}}{{\sum {y_{i} c_{{{\text{v}},i}}}}} = 1.40$$ $$\begin{aligned} P[{\text{W}}] &= \frac{{\gamma_{\text{gas}}}}{{\gamma_{\text{gas}} - 1}} \cdot p_{1} \cdot V_{1} \cdot \left[ {\left( {\frac{{p_{2}}}{{p_{1}}}} \right)^{{(\gamma_{\text{gas}} - 1)/\gamma_{\text{gas}}}} - 1} \right] \cdot (100/70) \\ &= 3.5 \cdot 1.0\cdot 10^{5}\,{\text{Pa}} \cdot \frac{{7\cdot 10^{3}}}{3600}\,{\text{m}}^{3}/{\text{s}} \cdot \left[ {\left( {\frac{{3.5\cdot 10^{5}\,{\text{Pa}}}}{{1.0\cdot 10^{5}\,{\text{Pa}}}}} \right)^{0.29} - 1} \right] \cdot (100/70) = 426\,{\text{kW}} \end{aligned}$$

Analysis of a fermentation scenario

We tested process scenarios with BOF gas-derived syngas. R rec/R in combinations were varied to find a process set-up at which the above-mentioned parameters of acetone concentration, plant sizing (number of reactors), and variable costs are within a reasonable range. Here we present the outcome of a production scenario in which the gas flow rate into the reactor (R in) was set to 6·10⁵ mol/h.
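Box 4's adiabatic-compression estimate can be reproduced programmatically (a sketch, not a design tool). Using the exact exponent (γ − 1)/γ ≈ 0.2857 instead of the rounded 0.29 gives ≈ 418 kW rather than the boxed 426 kW; the factor 100/70 is the assumed 70% compressor efficiency.

```python
def compression_power(p1, p2, flow_m3_h, gamma, efficiency=0.70):
    """Adiabatic compression power (Eq. 21 of Box 4), in W.

    p1, p2 in Pa; flow_m3_h is the inlet volumetric flow in m3/h;
    efficiency is the assumed compressor efficiency (the 100/70 factor).
    """
    V1 = flow_m3_h / 3600.0   # inlet flow in m3/s
    ideal = (gamma / (gamma - 1.0)) * p1 * V1 * \
        ((p2 / p1) ** ((gamma - 1.0) / gamma) - 1.0)
    return ideal / efficiency

# Box 4 inputs: 7e3 m3/h of syngas (gamma = 1.40) compressed from
# 1.0e5 Pa to the bottom pressure p_b = 3.5e5 Pa.
P = compression_power(p1=1.0e5, p2=3.5e5, flow_m3_h=7e3, gamma=1.40)
# ~4.2e5 W (418 kW with the exact exponent; the rounded 0.29 gives 426 kW)
```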
At this gas flow rate, the superficial gas velocity \(v_{\text{gs}}^{\text{c}}\) (corrected for the average gas flow in the reactor) equals 0.082 m/s. We tested different R rec/R in combinations and their influence on the process parameters. In a scenario where the gas compressed into the reactor contains 20 mol% recycled gas (R rec = 1.2·10⁵ mol/h), the acetone concentration in the broth (21 g/l) stays below the toxicity limit. Our model predicts an hourly biological acetone production rate of 2225 kg/h (concentration of cell mass: 1.3 g/l; productivity: 2.29 g/g/h). Under the given process conditions, the reactor off-gas has an acetone content of 6 mol%. We simulated the acetone recovery by condensation and distillation with SuperPro Designer® and AspenPlus®. Additional file 1: Table S14 shows the composition of the off-gas obtained at the top outlet of the fermenter, which is received by the downstream operations as feedstock. The purity of the final product is 99%, and we determined a maximal product loss of 8%. Accounting for the product recovery loss, 2058 kg of final product would be produced per hour in the analyzed scenario. To reach the desired production metrics of 3.79·10³ kg/h, two reactors would be required. For this scenario we determined variable production costs of 0.389 $/kg acetone. The contributions to the costs are: 34.1% for the gaseous substrate, 0.3% for chilled water, 21.5% for gas compression, and 44.1% for downstream processing. The utilities for downstream processing are listed in detail in Additional file 1: Tables S15 and S16. In the presented scenario, the CO-to-acetone conversion reaches 74% of the theoretical carbon yield. To increase the yield, a higher gas recycle rate could be implemented. However, increasing the gas recycle rate would not be beneficial for the number of reactors required to meet the desired production metrics.
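The cost contributions quoted above can be turned into absolute figures with a few lines; the 0.389 $/kg total and the percentage shares are taken directly from the text:

```python
# Variable production cost breakdown; total and shares as given in the text.
total_cost = 0.389  # $/kg acetone
shares_pct = {
    "gaseous substrate": 34.1,
    "chilled water": 0.3,
    "gas compression": 21.5,
    "downstream processing": 44.1,
}
cost_per_kg = {item: total_cost * pct / 100.0 for item, pct in shares_pct.items()}
# shares sum to 100%; e.g. gaseous substrate: ~0.133 $/kg acetone
```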
Economic feasibility of acetone production from syngas

In this study, we have analyzed the conversion of syngas to acetone using the hypothetical thermophilic production strain Moorella thermoacetica with regard to thermodynamic considerations of the bacterial physiology, to bioreactor design limitations, and to economic feasibility. We have estimated the costs for syngas with a CO content above 80 mol% derived from three different sources, and only BOF gas was identified as an interesting syngas source from an economic perspective. We therefore determined the other main variable production costs (gas compression, downstream processing, and chilled water) for a representative production process. Together with the costs for the gaseous substrate, these variable production costs sum to 389 $/t. As mentioned before, off-gas is not utilized in US steel mills to date. Therefore, we assume that BOF gas comes free of charge. In Europe, however, only 25% of the BOF gas is flared, and the rest is utilized for the generation of electricity and heat [32]. We tested a scenario in which the presented acetone production process would be implemented at a site where BOF gas is not underutilized, that is, where compensation for the feedstock is required. Assuming an additional cost of 0.0036 $/mol CO for BOF gas (see Additional file 1) would increase the variable production cost to 1018 $/t acetone, which would not lead to a profitable process to date. Alternative sources for syngas besides those analyzed in this study can be considered. Biogas, for example, is another source of CH4-rich gas which could be reformed to a CO-rich syngas. However, biogas has a significant fraction of CO2 [59]. Therefore, an additional acid gas removal step would be required to reach the gas composition of natural gas before reforming. This would add an additional cost to the already high syngas production costs from natural gas of 298 $/t CO.
This makes syngas derived from biogas less interesting as a CO source for the production process in this study.

Approach of this study

Utilization of H2/CO2

We assumed that the production organism M. thermoacetica would convert only CO to acetone, since no pathway exists to generate net ATP from the conversion of H2/CO2 to acetone [60]. Although no net ATP is generated, alternative metabolic reactions would allow H2/CO2 to serve as substrate: firstly, acetate could be generated as a byproduct. The second alternative would require that the conversion of CO to acetone deliver the energy required for cell maintenance and growth. The latter scenario could be realized by metabolic engineering strategies to ensure metabolization of H2/CO2 with net ATP generation. However, shifting the composition of the biomass-derived syngas towards CO using the rWGS reaction is a minor contributor to the overall production costs, meaning the benefit of engineered H2/CO2 utilization would be small. At the same time, the conversion of CO results in the production of a certain amount of CO2: for the production of acetone, 0.625 mol CO2 is produced per mol of converted CO. CO2 dilutes the off-gas considerably, thereby making the gas recycling less effective. An option would be the removal of CO2 from the off-gas; several techniques for CO2 capture from gases have been described [61].

Thermodynamics approach

The approach of using the principles of thermodynamics to estimate the conversion rate has to be applied with caution for acetogenic bacteria. The metabolism of acetogens is known to perform close to thermodynamic limits [11], and process conditions (reactant concentration, pressure, temperature) might have a disproportionately high impact on the estimated free energy of the product reaction. Therefore, erroneous assumptions can have a significant impact on the outcome of the study.
The thermodynamics approach is based on the energy requirements for cell maintenance, and no accurate values have been reported in the literature for M. thermoacetica. In the metabolic model published in 2015, a maintenance requirement of 0.12 mmol ATP/g/h was used [12]. With around 46.2 kJ of energy conserved per mol ATP for homoacetogenic bacteria [62], that would equal 5.5·10⁻³ kJ/g/h (0.14 kJ/C-mol/h assuming 24.6 g/C-mol), which seems a surprisingly low value compared to the 62 kJ/C-mol/h used in this study. The non-growth-associated maintenance ATP requirement for E. coli, for comparison, is reported to be 8.39 mmol ATP/g/h [63]. Acquiring more accurate values for the maintenance energy requirement from experimental data would increase the accuracy of our model. Since suboptimal culturing conditions increase the maintenance energy requirement [40], it is relevant to retrieve the data under fermentation conditions that resemble an industrial set-up. From the data generated with our model, the CO uptake rates can be determined. The CO uptake rate is around 323 mmol CO/g/h for the production scenario presented. This value is relatively high when compared to CO uptake rates described for acetogens in the literature [12, 64, 65]. A possible reason is a difference in the growth rate. In this study, we assumed a growth rate of 0.1 h−1 (as published by Kerby and Zeikus [55]). When assuming a growth rate of 0.01 h−1 (as described by Islam et al. for growth on CO [12]), the uptake rate predicted with our model decreases to 141 mmol CO/g/h. Another reason for potentially overestimating the CO uptake rate can be the maintenance energy requirement, which might be lower than the estimated 62 kJ/C-mol/h (as discussed above). When lowering the maintenance energy requirement to 20 kJ/C-mol/h (with µ = 0.01 h−1), the average uptake rate decreases to 59 mmol CO/g/h.
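The unit conversion behind this comparison (an ATP-based maintenance requirement expressed in kJ/C-mol/h) is a two-step multiplication; the constants (46.2 kJ/mol ATP, 24.6 g/C-mol) are those cited above:

```python
def maintenance_kj_per_cmol_h(m_atp_mmol_g_h, kj_per_mol_atp=46.2,
                              g_per_cmol=24.6):
    """Convert a maintenance requirement in mmol ATP/g/h to kJ/C-mol/h."""
    kj_per_g_h = m_atp_mmol_g_h * 1e-3 * kj_per_mol_atp  # kJ/g/h
    return kj_per_g_h * g_per_cmol                       # kJ/C-mol/h

m = maintenance_kj_per_cmol_h(0.12)  # value used in the 2015 metabolic model
# ~0.14 kJ/C-mol/h, versus the 62 kJ/C-mol/h assumed in this study
```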
Additionally, it is reported that high concentrations of dissolved CO are inhibitory for acetogens, and that, above a certain gas supply rate, the process is biologically limited instead of gas-transfer limited [65]. However, changes to our model which result in lower CO uptake rates have only a minor impact on the outcome of our analysis regarding production cost and plant sizing. In 2015, Chen et al. published a spatiotemporal metabolic model for bubble column reactors with the acetogen C. ljungdahlii, in which model iHN637 was integrated [66], and a similar approach could be applied to perform an economic analysis for the process presented in this study. However, integration of model iAI558 of M. thermoacetica, in which for example a novel mechanism of energy conservation was implemented [12], would have based the study on different assumptions regarding the metabolism of the production strain. Future implementation of an updated version of iAI558 including the acetone pathway would nonetheless be possible.

Reactor design

Traditionally, continuous stirred-tank reactors (CSTR) are employed in syngas fermentation. Stirring breaks up the gas bubbles and thereby increases the interfacial area and the gas retention time [67]. However, stirring increases the power usage. An alternative suitable for industrial applications is the bubble column reactor [67], which we chose for this study. More sophisticated bioreactor set-ups that increase the gas–liquid mass transfer could further improve the yield. This could, for example, be achieved with microbubble dispersion stirred-tank reactors. Microbubbles, which have an average diameter of only 50 µm compared to the normal 3–5 mm bubble diameter, offer a significantly higher gas–liquid interfacial area [68], but the generation of microbubbles will also require extra energy and costs. Biofilm reactors are another option, and can result in an increased interfacial area between substrate and the production host. M.
thermoacetica is reported to be capable of forming thin biofilms [69]. To determine the requirement for chilled water, only fermentation-related processes (heat generated by the cell mass, evaporating water and acetone, heat released during adiabatic compression) were taken into account. Other energy requirements, which arise, for example, during syngas production or product recovery, were accounted for when determining the utilities. Additionally, the variable costs of production which do not occur continuously, such as sterilization, costs for media components, and disposal of the acetone loss, were omitted. However, this study is intended to serve as a preliminary feasibility analysis, with a focus on the variable costs of production as the main criterion for an economically viable process. In a more elaborate model, an overall integrated heat balance, a more comprehensive overview of the variable production costs, as well as fixed operating costs and capital costs could be implemented. In this study, we have analyzed the feasibility of acetone production from syngas from three different sources using the thermophilic acetogen M. thermoacetica as a hypothetical production host with regard to metabolic and economic aspects. Syngas contains H2, CO2, and CO as potential substrates. However, when acetone is the sole end product, ATP is only generated when CO is used as substrate. We have determined the costs for syngas with a CO content higher than 81 mol% from BOF gas, from natural gas, and from biomass. We identified syngas derived from BOF gas as the only syngas source to date which is economically promising for the production of acetone.
For different fermentation scenarios with varying gas feed and gas recycle rates, we analyzed the variable cost of production and the cost contribution of the single process steps, the number of reactors required to produce at the desired rate of 30 kt/year, the efficiency of the gas utilization, and parameters related to cell mass and productivity. This was done by setting up the process reaction, which describes the rate of acetone formation from CO under the process conditions. The amount of available substrate was determined by the rate of CO transferred into the fermentation broth, in turn depending on the chosen process parameters. We presented data for a representative fermentation scenario in which 6·10⁵ mol/h of gas, containing 4.8·10⁵ mol/h of syngas derived from BOF gas and 1.2·10⁵ mol/h of recycled off-gas, is fed into a bubble column and converted to acetone by M. thermoacetica at 60 °C. The variable production costs, comprising the costs for syngas, gas compression, chilled water, and product recovery, were determined to be 389 $/t, with the cost for syngas as the main contributor. Here, we have illustrated an application of the thermodynamics approach, in which the rate of acetone production is derived from the Gibbs energy of product formation, the maintenance and growth energy requirements, and the growth rate, for the formation of a volatile compound from a gaseous substrate. As the approach is based on certain assumptions, such as the maintenance energy requirement, experimental data would increase the accuracy of our model. Since the heterologous expression of the acetone pathway in M. thermoacetica has not been reported so far, the study is based on a hypothetical production strain. We hope that further development of the genetic toolbox for M. thermoacetica or similar thermophilic acetogens will soon make heterologous acetone pathway expression possible, since this will enable experimental studies at reactor scale.
This study exemplifies the importance of a metabolic feasibility analysis and we encourage other researchers to apply the presented approach to other bioproduction scenarios in order to estimate the economic viability of the process and to obtain insights into potential bottlenecks. A: cross-sectional area of reactor; in m2; a G : energy requirement for growth of 1 mol cell mass; in kJ/C-mol; ATP: adenosine triphosphate; BOF: basic oxygen furnace; c CM: concentration of cell mass; in C-mol/m3; c g: concentration in gas phase; c i : concentration of compound i; c liq: concentration in liquid phase; c p: specific molar heat capacity at constant pressure; in kJ/mol/K; c v: specific molar heat capacity at constant volume; in kJ/mol/K; c*: dissolved gas concentration at equilibrium; in mol/m3; D 0: standard diffusion coefficient; in cm2/s; D i : diffusion coefficient; in m2/h; F av: average gas flow rate; in m3/h; F out : flow rate of gas leaving the reactor; in m3/h; g: gravitational constant; in m/s2; h: height; in m; H 0: Henry's law solubility constant at standard temperature 298.15 K; in mol/kg/bar; H T: Henry's law solubility constant corrected for temperature T; in mol/m3/Pa; k: temperature correction factor for Henry's law constant; k L a: volumetric mass transfer coefficient; in s−1; m G: maintenance energy requirement for 1 mol cell mass; in kJ/C-mol/h; n CM: molar amount of cell mass; in C-mol; p: logarithmic mean pressure in the reactor vessel; in Pa; p b: pressure at bottom of reactor; in Pa; p i : partial pressure of compound i; in Pa; \(p_{i}^{\text{vap}}\): vapor pressure; in Pa; p t : pressure at top of reactor; in Pa; q heat: cell mass-specific rate of heat production; q i : cell mass-specific rate of production or consumption of reactant i; R: gas constant; 8.314 (m3 Pa)/K/mol; R chill: rate of chilled water; in mol/h; R gas: rate of syngas inflow; in mol/h; R in: rate of gas inflow; in mol/h; R liq: rate of transfer from gas to liquid phase; in 
mol/h; R out: rate of gas outflow; in mol/h; R p: rate of production; in mol/h; R rec: rate of recycled gas inflow; in mol/h; rWGS: reverse water–gas shift reaction; T: temperature; TR: gas transfer rate; in mol/m3/h; \(v_{\text{gs}}^{\text{c}}\): pressure-corrected superficial gas velocity; V liq: volume broth; in m3; V reactor: volume reactor; in m3; WLP: Wood–Ljungdahl pathway; y: mol-fraction of the gas; Δ f G i 0 : Gibbs energy of formation of compound i at standard conditions (T = 298.15 K); in kJ/mol; Δ f H i 0 : heat formation of compound i at standard conditions (T = 298.15 K); in kJ/mol; ΔH comp: rate of heat released by gas compression; in kJ/h; ΔH evap: rate of vaporization heat; in kJ/h; ΔH growth: rate of heat released by growth reaction; in kJ/h; ΔH main: rate of heat released by maintenance reaction; in kJ/h; \(\Delta H_{i}^{\text{vap}}\): heat of vaporization for compounds i; in kJ/mol; Δ r G 0: Gibbs energy of a reaction at standard conditions (T = 298.15 K and c i = 1 M); in kJ/mol; Δ r G T: Gibbs energy of a reaction at process temperature T; in kJ/mol; Δ r G T,c: Gibbs energy of a reaction corrected for process temperature and concentration of reactants; in kJ/mol; Δ r H 0: enthalpy of reaction at standard conditions; in kJ/mol. ε: holdup of reactor; γ: ratio of specific molar heat capacity at constant pressure and at constant volume; µ: growth rate; in h−1; µ 0: dynamic viscosity at 298.15 K; ν i : stoichiometric coefficient of reactant i; \(v_{i}^{\text{growth}}\): stoichiometric coefficient of reactant i in growth reaction;\(v_{i}^{\text{main}}\): stoichiometric coefficient of reactant i in maintenance reaction; ρ: density; in kg/m3; θ: correction factor to calculate k L a at process temperature. atm: atmosphere; J: Joule; K: Kelvin; m: meter; M: "molar", 1 M = 1 mol/liter; °C: degree Celsius; Pa: Pascal; t: tons, 103 kg. Latif H, Zeidan AA, Nielsen AT, Zengler K. 
Trash to treasure: production of biofuels and commodity chemicals via syngas fermenting microorganisms. Curr Opin Biotech. 2014;27:79–87. Schiel-Bengelsdorf B, Dürre P. Pathway engineering and synthetic biology using acetogens. FEBS Lett. 2012;586(15):2191–8. LanzaTech. Low carbon fuel project achieves breakthrough. http://www.lanzatech.com/low-carbon-fuel-project-achieves-breakthrough/. Accessed Nov 2016. Dürre P, Eikmanns BJ. C1-carbon sources for chemical and fuel production by microbial gas fermentation. Curr Opin Biotech. 2015;35:63–72. Vassilev SV, Baxter D, Andersen LK, Vassileva CG, Morgan TJ. An overview of the organic and inorganic phase composition of biomass. Fuel. 2012;94:1–33. Drake HL, Küsel K, Matthies C. Acetogenic prokaryotes. In: Rosenberg E, DeLong EF, Lory S, Stackebrandt E, Thompson F, editors. The prokaryotes: prokaryotic physiology and biochemistry. Berlin: Springer; 2013. p. 3–60. Drake HL, Daniel SL. Physiology of the thermophilic acetogen Moorella thermoacetica. Res Microbiol. 2004;155(10):869–83. Ragsdale SW, Pierce E. Acetogenesis and the Wood–Ljungdahl pathway of CO2 fixation. BBA Proteins Proteom. 2008;1784(12):1873–98. Drake HL, Gößner AS, Daniel SL. Old acetogens, new light. Ann NY Acad Sci. 2008;1125(1):100–28. Mock J, Wang S, Huang H, Kahnt J, Thauer RK. Evidence for a hexaheteromeric methylenetetrahydrofolate reductase in Moorella thermoacetica. J Bacteriol. 2014;196(18):3303–14. Schuchmann K, Müller V. Autotrophy at the thermodynamic limit of life: a model for energy conservation in acetogenic bacteria. Nat Rev Microbiol. 2014;12:809–21. Islam MA, Zengler K, Edwards EA, Mahadevan R, Stephanopoulos G. Investigating Moorella thermoacetica metabolism with a genome-scale constraint-based metabolic model. Integr Biol. 2015;7:869. Kita A, Iwasaki Y, Sakai S, Okuto S, Takaoka K, Suzuki T, et al.
Development of genetic transformation and heterologous expression system in carboxydotrophic thermophilic acetogen Moorella thermoacetica. J Biosci Bioeng. 2013;115(4):347–52. Kita A, Iwasaki Y, Yano S, Nakashimada Y, Hoshino T, Murakami K. Isolation of thermophilic acetogens and transformation of them with the pyrF and kan r genes. Biosci Biotech Bioch. 2013;77:301. Iwasaki Y, Kita A, Sakai S, Takaoka K, Yano S, Tajima T, et al. Engineering of a functional thermostable kanamycin resistance marker for use in Moorella thermoacetica ATCC 39073. FEMS Microbiol Lett. 2013;343:8–12. Buckingham J, editor. Dictionary of organic compounds. 6th ed. London: Chapman and Hall; 1996. Burridge E. Acetone. ICIS Chem Bus. 2007;272(20):21. Rajeev M. Pandia. The phenol-acetone value chain: prospects and opportunities. http://www.platts.com/IM.Platts.Content/ProductsServices/ConferenceandEvents/2013/ga001/presentations/26Sept_16.25_%20Rajeev%20Pandia.pdf. Accessed May 2016. S&P Global Platts. Acetone: European spot price rise; US export pricing stable; Asian price unchanged. http://www.platts.com/news-feature/2015/petrochemicals/global-solvents-overview/index. Accessed Jun 2016. ICIS. US chemical profile: acetone. http://www.icis.com/resources/news/2010/01/11/9323851/us-chemical-profile-acetone/. Accessed Nov 2016. Lütke-Eversloh T, Bahl H. Metabolic engineering of Clostridium acetobutylicum: recent advances to improve butanol production. Curr Opin Biotech. 2011;22(5):634–47. Hoffmeister S, Gerdom M, Bengelsdorf FR, Linder S, Flüchter S, Öztürk H, et al. Acetone production with metabolically engineered strains of Acetobacterium woodii. Metab Eng. 2016;36:37–47. Heijnen JJ, van Dijken JP. In search of a thermodynamic description of biomass yields for the chemotrophic growth of microorganisms. Biotech Bioeng. 1992;39(8):833–58. Heijnen JJ. Impact of thermodynamic principles in systems biology. In: Wittmann C, Krull R, editors. Biosystems engineering II. Berlin: Springer; 2010. p. 
139–62. Heijnen JJ. Bioenergetics of microbial growth. In: Flickinger MC, Drew SW, editors. Encyclopedia of bioprocess technology. Hoboken: Wiley; 2002. p. 267–290. Spath PL, Dayton DC. Preliminary screening-technical and economic assessment of synthesis gas to fuels and chemicals with emphasis on the potential for biomass-derived syngas: (NREL/TP-510-34929); 2003. Piccolo C, Bezzo F. A techno-economic comparison between two technologies for bioethanol production from lignocellulose. Biomass Bioenerg. 2009;33(3):478–91. Choi D, Chipman DC, Bents SC, Brown RC. A techno-economic analysis of polyhydroxyalkanoate and hydrogen production from syngas fermentation of gasified biomass. Appl Biochem Biotech. 2010;160(4):1032–46. Linstrom PJ, Mallard WG, editors. NIST chemistry WebBook: NIST standard reference database number 69. Gaithersburg MD, 20899. Molitor B, Richter H, Martin ME, Jensen RO, Juminaga A, Mihalcea C, et al. Carbon recovery by fermentation of CO-rich off gases–turning steel mills into biorefineries. Bioresour Technol. 2016;215:386–96. Handler RM, Shonnard DR, Griffing EM, Lai A, Palou-Rivera I. Life cycle assessments of ethanol production via gas fermentation: anticipated greenhouse gas emissions for cellulosic and waste gas feedstocks. Ind Eng Chem Res. 2015;55(12):3253–61. Pei P, Korom SF, Ling K, Nasah J. Cost comparison of syngas production from natural gas conversion and underground coal gasification. Mitig adapt strategies glob chang 2016;21(4): 629–643. Bustamante Felipe, Enick Robert, Rothenberger Kurt, Howard Bret, Cugini Anthony, Ciocco Michael, et al. Kinetic study of the reverse water gas shift reaction in high-temperature, high pressure homogenous systems. Fuel Chem Div Prepr. 2002;47(2):663. Towler G, Sinnott RK. Chemical engineering design: principles, practice and economics of plant and process design. Amsterdam: Elsevier; 2012. Thompson JL, Tyner WE. Corn stover for Bioenergy Production: Cost estimates and farmer supply responses. 
Purdue University. 2011. https://www.extension.purdue.edu/extmedia/EC/RE-3-W.pdf. Accessed Sept 2016. Lin T, Rodríguez LF, Davis S, Khanna M, Shastri Y, Grift T, et al. Biomass feedstock preprocessing and long-distance transportation logistics. Glob Change Biol Bioenergy. 2015;8:160–70. Swanson RM, Platon A, Satrio JA, Brown RC. Techno-economic analysis of biomass-to-liquids production based on gasification. Fuel. 2010;89:S11–9. SuperPro Designer®: Intelligen, Inc., Scotch Plains, NJ, USA. Alberty RA. Thermodynamics of biochemical reactions. New York: Wiley; 2005. Tijhuis L, van Loosdrecht MC, Heijnen JJ. A thermodynamically based correlation for maintenance Gibbs energy requirements in aerobic and anaerobic chemotrophic growth. Biotech Bioeng. 1993;42(4):509–19. Kantarci N, Borak F, Ulgen KO. Bubble column reactors. Process Biochem. 2005;40(7):2263–83. Kadic E, Heindel TJ. An introduction to bioreactor hydrodynamics and gas-liquid mass transfer. Hoboken: John Wiley & Sons; 2014. Blanch HW, Clark DS. Biochemical Engineering. 2nd ed. New York: CRC Press; 1995. van Baten JM, Krishna R. Scale effects on the hydrodynamics of bubble columns operating in the heterogeneous flow regime. Chem Eng Res Des. 2004;82(8):1043–53. Kadic E, Heindel TJ. An introduction to bioreactor hydrodynamics and gas–liquid mass transfer. New York: Wiley; 2014. Heijnen JJ, Van't Riet K. Mass transfer, mixing and heat transfer phenomena in low viscosity bubble column reactors. Chem Eng J. 1984;28(2):B21–42. LMNO Engineering, Research, and Software, Ltd. Gas viscosity calculator. http://www.lmnoeng.com/Flow/GasViscosity.php. Cussler EL. Diffusion, mass transfer in fluid systems. 2nd ed. New York: Cambridge University Press; 1997. Sander R. Compilation of Henry's law constants for water as solvent. Atmos Chem Phys. 2015;15:4399–4981. Van't Riet K, Tramper J. Basic bioreactor design. Boca Raton: CRC Press; 1991. Peters MS, Timmerhaus KD. Plant design and economics for chemical engineers. 2nd ed. 
New York: McGraw-Hill International Editions: Chemical & Petroleum Engineering Series; 1991. Aspen Plus® V8.6: Aspen technology; Bedford, Massachusetts, USA. Shavit A, Gutfinger C. Thermodynamics: from concepts to applications. Boca Raton: CRC Press; 2008. U.S. Energy Information Administration. Electric sales, revenue, and average price: industrial sector. http://www.eia.gov/electricity/sales_revenue_price/pdf/table8.pdf. Kerby R, Zeikus JG. Growth of Clostridium thermoaceticum on H2/CO2 or CO as energy source. Curr Microbiol. 1983;8(1):27–30. Rohatgi A. WebPlotDigitizer. http://arohatgi.info/WebPlotDigitizer. U. S. Environmental Protection Agency. Review of national ambient air quality standards for carbon monoxide. Final rule. https://www.gpo.gov/fdsys/pkg/FR-2011-08-31/pdf/2011-21359.pdf. Joyner WM. Volume I: Stationary-point and area sources. In: Supplement A to compilation of air-pollutant emission factors.1986. https://www3.epa.gov/ttn/chief/ap42/oldeditions/4th_edition/ap42_4thed_suppa_oct1986.pdf. Accessed Jan 2017. Weiland P. Biogas production: current state and perspectives. Appl Microbiol Biot. 2010;85(4):849–60. Bertsch J, Müller V. Bioenergetic constraints for conversion of syngas to biofuels in acetogenic bacteria. Biotechnol Biofuels. 2015;8(1):1. Li B, Duan Y, Luebke D, Morreale B. Advances in CO2 capture technology: a patent review. Appl Energ. 2013;102:1439–47. Cueto-Rojas HF, van Maris AJ, Wahl SA, Heijnen JJ. Thermodynamics-based design of microbial cell factories for anaerobic product formation. Trends Biotechnol. 2015;33(9):534–46. Feist AM, Henry CS, Reed JL, Krummenacker M, Joyce AR, Karp PD, et al. A genome-scale metabolic reconstruction for Escherichia coli K-12 MG1655 that accounts for 1260 ORFs and thermodynamic information. Mol Syst Biol. 2007;3(1):121. Chen J, Gomez JA, Höffner K, Barton PI, Henson MA. Metabolic modeling of synthesis gas fermentation in bubble column reactors. Biotechnol Biofuels. 2015;8(1):1–12. 
SR, SS, TP, LW, TØJ, ATN, and HN contributed to the design of the study. SR, SS, TP, and LW acquired and interpreted data. SR wrote the manuscript. All authors read and approved the final manuscript.

The authors thank Kalpana Samant, Zheng Zhao, Carolina Villa-Sanin, Christian Lieven, and Kai Zhuang for valuable discussion and contributions to the study. This work was supported by the Novo Nordisk Foundation. Additionally, this work was supported by the European Union Seventh Framework Programme (ITN FP7/2012/317058) to SR.

Author affiliations: The Novo Nordisk Foundation Center for Biosustainability, Technical University of Denmark, Kongens Lyngby, Denmark (Stephanie Redl, Sumesh Sukumara, Torbjørn Ølshøj Jensen, Alex Toftgaard Nielsen); DSM Biotechnology Center, PO Box 1, 2600 MA, Delft, The Netherlands (Tom Ploeger, Liang Wu, Henk Noorman); Department of Biotechnology, Technical University Delft, Delft, The Netherlands (Henk Noorman).

Correspondence to Stephanie Redl.

Additional file 1.
Additional information, including Tables S1–S16; Figures S1 and S2; the derivation of Eq. 15; and the calculation of electricity generation with BOF gas.

Additional file 2. Selection of the downstream process scheme.

Keywords: syngas fermentation; syngas; biomass gasification; basic oxygen furnace; techno-economic evaluation; thermophilic fermentation; biochemical production; corn stover.
Volume 20, Number 3 (2014), 1372–1403.

Approximating class approach for empirical processes of dependent sequences indexed by functions

Herold Dehling, Olivier Durieu, and Marco Tusche

Abstract. We study weak convergence of empirical processes of dependent data $(X_{i})_{i\geq0}$, indexed by classes of functions. Our results are especially suitable for data arising from dynamical systems and Markov chains, where the central limit theorem for partial sums of observables is commonly derived via the spectral gap technique. We are specifically interested in situations where the index class $\mathcal{F}$ is different from the class of functions $f$ for which we have good properties of the observables $(f(X_{i}))_{i\geq0}$. We introduce a new bracketing number to measure the size of the index class $\mathcal{F}$ which fits this setting. Our results apply to the empirical process of data $(X_{i})_{i\geq0}$ satisfying a multiple mixing condition. This includes dynamical systems and Markov chains, if the Perron–Frobenius operator or the Markov operator has a spectral gap, but also extends beyond this class, for example, to ergodic torus automorphisms.

Bernoulli, Volume 20, Number 3 (2014), 1372–1403. First available in Project Euclid: 11 June 2014. Permanent link: https://projecteuclid.org/euclid.bj/1402488943. DOI: 10.3150/13-BEJ525. MR3217447.

Keywords: empirical processes indexed by classes of functions; dependent data; Markov chains; dynamical systems; ergodic torus automorphism; weak convergence.

Citation: Dehling, Herold; Durieu, Olivier; Tusche, Marco. Approximating class approach for empirical processes of dependent sequences indexed by functions. Bernoulli 20 (2014), no. 3, 1372–1403. doi:10.3150/13-BEJ525.
Markov chains on a measurable state space

A Markov chain on a measurable state space is a discrete-time, time-homogeneous Markov chain whose state space is a measurable space.

History

The definition of Markov chains has evolved during the 20th century. In 1953 the term Markov chain was used for stochastic processes with a discrete or continuous index set, living on a countable or finite state space; see Doob[1] or Chung.[2] Since the late 20th century it has become more common to consider a Markov chain as a stochastic process with a discrete index set, living on a measurable state space.[3][4][5]

Definition

Denote by $(E,\Sigma )$ a measurable space and by $p$ a Markov kernel with source and target $(E,\Sigma )$. A stochastic process $(X_{n})_{n\in \mathbb {N} }$ on $(\Omega ,{\mathcal {F}},\mathbb {P} )$ is called a time-homogeneous Markov chain with Markov kernel $p$ and start distribution $\mu $ if $\mathbb {P} [X_{0}\in A_{0},X_{1}\in A_{1},\dots ,X_{n}\in A_{n}]=\int _{A_{0}}\dots \int _{A_{n-1}}p(y_{n-1},A_{n})\,p(y_{n-2},dy_{n-1})\dots p(y_{0},dy_{1})\,\mu (dy_{0})$ is satisfied for every $n\in \mathbb {N} $ and all $A_{0},\dots ,A_{n}\in \Sigma $. For any Markov kernel and any probability measure one can construct an associated Markov chain.[4]

Remark about Markov kernel integration

For any measure $\mu \colon \Sigma \to [0,\infty ]$ and any $\mu $-integrable function $f\colon E\to \mathbb {R} \cup \{\infty ,-\infty \}$, we write the Lebesgue integral as $\int _{E}f(x)\,\mu (dx)$.
For the measure $\nu _{x}\colon \Sigma \to [0,\infty ]$ defined by $\nu _{x}(A):=p(x,A)$ we use the notation $\int _{E}f(y)\,p(x,dy):=\int _{E}f(y)\,\nu _{x}(dy).$

Basic properties

Starting in a single point. If $\mu $ is a Dirac measure in $x$, we denote, for a Markov kernel $p$ with starting distribution $\mu $, the associated Markov chain as $(X_{n})_{n\in \mathbb {N} }$ on $(\Omega ,{\mathcal {F}},\mathbb {P} _{x})$, and write the expectation value as $\mathbb {E} _{x}[X]=\int _{\Omega }X(\omega )\,\mathbb {P} _{x}(d\omega )$ for a $\mathbb {P} _{x}$-integrable function $X$. By definition, we then have $\mathbb {P} _{x}[X_{0}=x]=1$. For any measurable function $f\colon E\to [0,\infty ]$ we have the relation[4] $\int _{E}f(y)\,p(x,dy)=\mathbb {E} _{x}[f(X_{1})].$

Family of Markov kernels. For a Markov kernel $p$ with starting distribution $\mu $, one can introduce a family of Markov kernels $(p_{n})_{n\in \mathbb {N} }$ by $p_{n+1}(x,A):=\int _{E}p_{n}(y,A)\,p(x,dy)$ for $n\in \mathbb {N} ,\,n\geq 1$, with $p_{1}:=p$. For the associated Markov chain $(X_{n})_{n\in \mathbb {N} }$ according to $p$ and $\mu $, one obtains $\mathbb {P} [X_{0}\in A,\,X_{n}\in B]=\int _{A}p_{n}(x,B)\,\mu (dx)$.

Stationary measure. A probability measure $\mu $ is called a stationary measure of a Markov kernel $p$ if $\int _{A}\mu (dx)=\int _{E}p(x,A)\,\mu (dx)$ holds for every $A\in \Sigma $. If $(X_{n})_{n\in \mathbb {N} }$ on $(\Omega ,{\mathcal {F}},\mathbb {P} )$ denotes the Markov chain according to a Markov kernel $p$ with stationary measure $\mu $, and the distribution of $X_{0}$ is $\mu $, then all $X_{n}$ have the same probability distribution, namely $\mathbb {P} [X_{n}\in A]=\mu (A)$ for every $A\in \Sigma $.

Reversibility. A Markov kernel $p$ is called reversible according to a probability measure $\mu $ if $\int _{A}p(x,B)\,\mu (dx)=\int _{B}p(x,A)\,\mu (dx)$ holds for all $A,B\in \Sigma $.
Replacing $A=E$ shows that if $p$ is reversible according to $\mu $, then $\mu $ must be a stationary measure of $p$. See also • Harris chain • Subshift of finite type References 1. Joseph L. Doob: Stochastic Processes. New York: John Wiley & Sons, 1953. 2. Kai L. Chung: Markov Chains with Stationary Transition Probabilities. Second edition. Berlin: Springer-Verlag, 1974. 3. Sean Meyn and Richard L. Tweedie: Markov Chains and Stochastic Stability. 2nd edition, 2009. 4. Daniel Revuz: Markov Chains. 2nd edition, 1984. 5. Rick Durrett: Probability: Theory and Examples. Fourth edition, 2005.
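On a finite state space, a Markov kernel is just a row-stochastic matrix and the stationarity and reversibility conditions above become linear-algebra identities. A minimal numerical check follows; the three-state chain and its measure are invented purely for illustration:

```python
import numpy as np

# Finite state space E = {0, 1, 2}: the kernel p(x, .) is row x of a
# row-stochastic matrix P, and a measure mu is a probability vector.
P = np.array([[0.50, 0.50, 0.00],
              [0.25, 0.50, 0.25],
              [0.00, 0.50, 0.50]])

# Candidate stationary measure for this birth-death-style chain.
mu = np.array([0.25, 0.50, 0.25])

# Stationarity: mu(A) = sum_x p(x, A) mu(x), i.e. mu = mu @ P.
assert np.allclose(mu @ P, mu)

# Reversibility (detailed balance): mu(x) p(x, y) = mu(y) p(y, x) for all x, y,
# which is exactly the symmetry of the matrix diag(mu) @ P.
flow = mu[:, None] * P
assert np.allclose(flow, flow.T)

# n-step kernels: p_{n+1}(x, A) = integral of p_n(y, A) p(x, dy) becomes P @ P_n.
P2 = P @ P
assert np.allclose(P2.sum(axis=1), 1.0)  # still a Markov kernel

print("stationarity and detailed balance verified")
```

Because detailed balance for every pair $(x,y)$ is the symmetry of $\operatorname{diag}(\mu)P$, a single transpose comparison suffices; and as the article notes, reversibility implies stationarity, which the first assertion confirms for this example.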
How to deliver materials necessary for terraforming?

A lifeless moon of a gas giant (which orbits a K-type star [4200 K] at about 0.7 AU) is chosen to become home to a space colony. Colonists plan to transform it from a barren rock into a garden of Eden. Since this moon lacks water, nitrogen, and $CO_2$, these need to be mined in the asteroid belt and on other gas giant moons and then delivered to the terraforming site. What is the most effective way to deliver these materials? The colonists are looking for a delivery method that would provide results (such as an atmosphere and free-flowing water) within years or decades. If that is absolutely impossible, they can go into suspended animation and wake up in shifts to monitor the progress. It would also be nice to avoid:

- changes in the orbit or rotation of the moon;
- significant damage to the moon's surface;
- creation of a debris cloud around the moon (the colonists are strongly opposed to space littering);
- loss of already-delivered materials (as seen in the case of comet or asteroid bombardment).

Technological level

The colonists have access to the following technologies:

- fully automated and robotised asteroid mining;
- space travel at 1/10 of the speed of light;
- terraforming technologies (however, only one project has been completed successfully by the time of their departure);
- genetic engineering;
- suspended animation.

Technologies that are envisioned by scientists of today but cannot be built because of technical difficulties (materials, money, political will) are fine. However, something like teleportation is not possible unless it can be explained by existing science.

science-based space-colonization terraforming

Olga

By the looks of your conditions you're opposed to direct ballistic delivery. – Separatrix Dec 26 '17 at 20:03

@Separatrix, if ballistic delivery is indeed the only feasible way to do it, I will live with it. But other methods will be preferable.
$\endgroup$ – Olga Dec 26 '17 at 20:06 $\begingroup$ One important read when considering terraforming projects is the wiki on approaches to terraforming Venus. It's gives good insight into various problems with such an undertaking. en.m.wikipedia.org/wiki/Terraforming_of_Venus $\endgroup$ – Stephan Dec 26 '17 at 20:19 $\begingroup$ since most of the material you want to deliver directly to the atmosphere bombardment really has an advantage, it vaporizes the material at the same time it delivers it. of course drops do not need to be random you can use the bombardment to sculpt the surface to suit your needs. $\endgroup$ – John Dec 26 '17 at 21:15 $\begingroup$ I was thinking with ridiculously convoluted gravity assists but the time frame doesn't permit that. How is can travel at 1/10 of the speed of light not the answer? Also, your "terraforming technologies" had better include knowing how to start a dead planet's dynamo. In the given time frame, that'd be more of a handwave then the engine. $\endgroup$ – Mazura Dec 26 '17 at 23:41 Provenance of terraforming materials. You mention getting things from the asteroid belt. There may be an easier way. Since you can move cargo fast (0.1c) you can afford to get the cargo from farther away. Asteroids are generally rocky with possibly some ices on top. Moons of Saturn are generally icy with some rocks in the middle. There are a lot of moons of Saturn (and the moons of Uranus and Neptune are probably good targets as well). Collectively, they have far more ammonia and water than you could ever use to terraform a planet. So why not simply drag a few small to mediums sized moons of a gas giant into orbit around your planet and prepare to send them down? Something not mentioned is the need to refine the materials. If you want the proper elements to be added to your moon in a matter of decades, then you have to be careful about what you add. Like any good culinary creation, you must measure your ingredients carefully. 
Your ingredients are bits of moons and/or whole moons. So how do you measure them? You have to melt them. You can use fractional distillation to melt away the various compounds. If you slowly cook (heat) a comet, all the carbon monoxide will melt first (68 K), then methane (~91 K), ammonia (195 K), carbon dioxide (217 K) and finally water (273 K). All those temperatures are pretty far away from each other, so simply melt the ice ball slowly, and then separate the solid bits from the liquid at each step. Now you have a set of liquid or slushy balls in space. If you were smart, you would do this far from the sun, so the carbon monoxide and methane will refreeze for you before transport. You now have a bunch of ice balls of reasonably pure compounds, ready to go smash into your planet!

In the comments you say you want a planet with about 0.75 of Earth's radius and mass, and 0.7 of Earth's gravity. That doesn't work exactly, but going with some numbers that more or less fit the bill, let us assume your moon has a radius 0.9 times Earth's and a density 0.8 times Earth's, giving a surface gravity 0.72 times Earth's. The mass ends up being 0.58 of Earth's. Since surface area is proportional to the radius squared, we will need about 80% of the Earth's atmosphere, oceans, and biological matter. An atmosphere will need 20% oxygen and 80% inert gas; nitrogen is the most common inert gas and should do nicely. The requirements for our moon will be $3.3\times10^{18}$ kg of nitrogen and $2.1\times10^{17}$ kg of oxygen. The ocean will need $1.1\times10^{21}$ kg of water (though this could vary widely, depending on how wet you want the planet). Lastly, the biosphere will need at least $1\times10^{12}$ kg of carbon. To provide these ingredients, we can add three compounds primarily. Ammonia can be used to generate atmospheric nitrogen; carbon dioxide can be transformed into atmospheric oxygen; and water is just water.
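The scaling argument above is easy to verify: surface gravity goes as density times radius, mass as density times radius cubed, and surface area as radius squared. A quick sketch using the Earth-relative ratios assumed in the text:

```python
# Moon parameters relative to Earth, as assumed in the text.
r_ratio = 0.9      # radius
rho_ratio = 0.8    # density

g_ratio = rho_ratio * r_ratio       # surface gravity ~ density * radius
m_ratio = rho_ratio * r_ratio**3    # mass ~ density * radius^3
area_ratio = r_ratio**2             # surface area ~ radius^2

print(round(g_ratio, 2))     # 0.72 of Earth's surface gravity
print(round(m_ratio, 2))     # 0.58 of Earth's mass
print(round(area_ratio, 2))  # 0.81, hence "about 80%" of Earth's inventories
```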
At a ratio of two ammonia per one diatomic nitrogen and one carbon dioxide per one diatomic oxygen, our shopping list is roughly:

- $1\times10^{21}$ kg water
- $4\times10^{18}$ kg ammonia
- $2\times10^{17}$ kg carbon dioxide

The great thing about these ingredients is that they are three of the most common compounds in the outer solar system. They also provide plenty of surplus material for making a biosphere: carbon dioxide has extra carbon and ammonia has extra hydrogen. No need to add methane; there are plenty of fossil fuels to go around!

How to not make a mess

The next challenge is to not make too big of a mess when you deliver your materials. Here are the various factors you outlined.

How not to significantly damage the moon's surface

Without an atmosphere, your moon will likely have a surface covered in fine regolith similar to what covers Luna and Mars. If this surface is hit by impacts from space, the dust will end up mostly settled back into the surface. So from this perspective, there isn't too much damage done by hitting the planet with space snowballs; the holes will be filled by dust (relatively) soon after impact. Lunar regolith has a density about 2/3 that of lunar surface rocks (and Earth rocks), so the holes will be filled with a material that is reasonably solid.

Newton's depth approximation for impacts is $$D\approx L\frac{\rho_i}{\rho_p}$$ where $L$ is the length (or diameter, if spherical) of the projectile and $\rho_i$ and $\rho_p$ are the densities of the impactor and planet, respectively. Note that this approximation nowhere includes the velocity of the impactor. Let us assume the planet has a crust density similar to Earth's (2500 kg/m$^3$), while the delivered volatiles, such as CO$_2$, water, and ammonia, each have densities less than 1000 kg/m$^3$. Assuming we want to limit the impact depth to 200 m so we don't make craters too large, we can throw objects up to 500 m in diameter at the surface without making too much of a mess.
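Both the shopping-list conversion and the crater-depth bound above are one-line calculations. A sketch (the target masses and densities come from the text; the molar masses are standard):

```python
# Shopping list: 2 NH3 supply one N2, 1 CO2 supplies one O2 (molar masses in g/mol).
need_N2, need_O2 = 3.3e18, 2.1e17        # kg, from the text
need_NH3 = need_N2 * (2 * 17.0) / 28.0   # ~4.0e18 kg, matching the list
need_CO2 = need_O2 * 44.0 / 32.0         # ~2.9e17 kg (rounded to 2e17 in the list)

# Newton's impact-depth approximation: D ~ L * rho_impactor / rho_planet.
def impact_depth(length_m, rho_impactor, rho_planet=2500.0):
    """Penetration depth in meters; note it does not depend on impact velocity."""
    return length_m * rho_impactor / rho_planet

# A 500 m ball of volatile ice (<= 1000 kg/m^3) stays within the 200 m budget:
print(impact_depth(500.0, 1000.0))  # 200.0
```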
How not to make a debris cloud

Putting stuff back into space will both anger your space-junk-OCD Chief Engineer and represent a loss of materials. We don't want to do that. How can we avoid it? First, we have to figure out the escape velocity of our planet. From this post, we see that surface gravity is proportional to both radius and density. As calculated above, we have radius 0.9 Earth's and density 0.8 Earth's, to get surface gravity 0.72 of Earth's. Mass ends up being 0.58 of Earth's. Escape velocity is calculated here as $\sqrt{2gr}$, where $g$ is gravity and $r$ is radius. Given the factors above, the escape velocity of your moon is 0.8 of Earth's, or 9000 m/s. To ensure nothing goes into space, we will make the average ejecta velocity from our impact craters no more than 4000 m/s. In this post, I perform calculations on the height of an ejecta plume. This model calculates ejection velocity as a function of distance from the impact site. We want the ejecta velocity at the edge of the impactor to be less than 4000 m/s. If you work out the equation, you find that the maximum ejecta speed is proportional only to the impact velocity, and not to the mass or radius of the impactor (although the density of the impactor is very important). Ultimately, the relationship is $$4000 \text{ m/s} = 0.1313v_i.$$ Thus, for a 4000 m/s ejecta cap, the maximum impact speed must be about 30 km/s.

How not to eject volatile gasses into space

Of the gasses you are interested in, the two lightest and therefore most likely to escape are water (molar mass 18) and ammonia (molar mass 17). Therefore, we must figure out how to keep those gasses on the planet upon impact. First, let's look at the ejecta plume from the last problem. Using basic kinematics, a particle (of ammonia) ejected at 4000 m/s will reach a height of about 1100 km (Don't worry! I know that this is well into space, but without orbital velocity, it is coming back down!).
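The escape-velocity, impact-speed, and plume-apex numbers can be reproduced in a few lines (a sketch; the apex uses a constant-gravity approximation, which is why it matches the height but not exactly the rise time quoted next):

```python
from math import sqrt

g = 0.72 * 9.81                  # surface gravity of the moon, m/s^2
r = 0.9 * 6.371e6                # radius of the moon, m

v_esc = sqrt(2 * g * r)          # ~9000 m/s at the surface
v_impact_max = 4000 / 0.1313     # ~30 km/s, from max ejecta speed = 0.1313 * v_i

# ballistic apex of a 4000 m/s ejecta particle (constant-g approximation)
apex = 4000**2 / (2 * g)                    # ~1.1e6 m, i.e. ~1100 km
v_esc_apex = v_esc * sqrt(r / (r + apex))   # ~8200 m/s at the apex
```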
The time it takes to get all the way up there is about 400 seconds, and the escape velocity at this height is about 8200 m/s. Using the calculations in the answers to this post, we can figure out how hot an ammonia particle must be at this height to escape the moon's gravity. A particle must reach about 40000 K to escape under these conditions. Ouch! Now, individual particles are able to escape because the molecular distribution of kinetic energy has some variance to it. However, given that the escape velocity at the top of the ejecta blast is still about the same as the last linked post's calculated escape velocity necessary to hold gasses over geological time (8500 m/s at Earth's distance from the Sun), I think we can assume that very little of our gaseous ejecta will hit space.

How not to change the orbit and rotation of a moon

I had some more in-depth calculations here, but they are not really needed. As long as you have the technology to accelerate things to 0.1c, I assume you have sufficient space horsepower to aim your delivered payloads as you like. If that is the case, then you simply hit the moon from all directions, so the net force of the impacts is zero.

Find a suitable mid-size moon. Melt it. Separate the various compounds into chunks of no more than 250 m radius. Throw them at your planet at impact speeds of less than 30 km/s. Very little will escape into space. Profit!

– kingledion

The Chief Engineer is very pleased with your attention to his desire to keep space nice and clean. He is also very impressed with your suggestions. He wonders if strategic melting of frozen materials can be accomplished with mirrors? – Olga Dec 27 '17 at 1:18

@Olga Mirrors would likely work too. I suggested solar panel powered lasers since they give you fine control over wavelength and heat delivered.
You don't want to melt your moon all willy-nilly, you probably need to take years to heat it evenly and drive off the volatiles in just the right way so you can recover them. – kingledion Dec 27 '17 at 1:27

I would think that mining of the asteroid belt, either manned or automated, could be done to break up the chunks into smaller than SUV-sized pieces that could be launched at the moon. This would avoid any littering of rocket or other man-made materials. It would leave craters, but sizes this small wouldn't be as devastating as a full comet or asteroid. With it impacting the surface, it could help with dispersal somewhat, as well as creating friction heat to help bring up the temperature of a barren planetoid. Larger pieces could be used to make divots large enough to be a lake or reservoir, without the heavy machinery current methods require. Smaller pieces will avoid large blow-back out of the intended atmosphere. Heavily pounding the rocky surface will actually help pulverize it into more easily planted soil. There will likely need to be significant changes to the moon's surface for humans to live there, so why not do it with the pot-shots of delivering material before we move in? Running water will change the surface, as will plants and the new weather patterns. Also, adding mass in the form of air, water, etc. will change the orbit of the moon, so that is unavoidable, to a certain extent. We have changed the orbit of the Earth by creating lakes with dams and other water reservoirs. An advantage of orbital bombardment is that it helps judge the level of available atmosphere. As the atmosphere forms, more and more friction will be shown on the debris. Once it gets near Earth density, most of the sub-SUV-sized debris will never even hit the surface. This friction has the advantage of further dispersing the O2, N2, H2O, and other materials/minerals you are likely to need on the surface and in the atmosphere.
Using robotic miners would be faster than manned mining, but there could be a mixture of both, since the robots are likely to need maintenance. There's always the need for people to feed their families, so there's likely the "adventure seeker" that's willing to spend their time earning hazard pay for asteroid mining. After all, robots are expensive (they keep breaking) and humans are comparatively cheap (since there aren't enough jobs on Earth). There's no need to render the materials to a refined state, just into small enough chunks. There could be a need to prevent certain volatile materials/substances from getting to the moon, but with the vast volume you are looking to fill, small pockets of even chlorine gas aren't likely to matter. And if you ship it with some sodium, it might even help, as in making salt. There's a high likelihood of needing to use some sort of genetic modification of the micro- and macro-biological elements of the first stage of plants. The plants would need to be adapted to that exact environment. Not all plants can deal with the rocky, low-CO2, low-O2, low-temperature, low-gravity, low-moisture area you are talking about. These would likely need to also be high-yield plants and microbes that would output high levels of O2, N2, and lots of other things to be able to create an atmosphere in even 100 years. This flora would also need to be able to break into the rocky surface to get the required minerals they need. They may also need to be highly susceptible to a specific chemical or spray that would kill the fast-spreading biome, so more Earth-like tame plants could be brought in without fear of being killed off by the life forms adapted to the original planet. Even 1/10th of the speed of light is really fast. This would allow us to go from the Earth to the asteroid belt in hours or days, rather than the current months, so a manned expedition is well within range for this speed.
We would just need to make sure that we don't send any material into the moon at that speed. You could, however, have a large transport that collects from the miners, then shoots over to the moon, slows down to open its doors to offload/bombard the planet with the small fragments, then returns to the collection point. With 1/10th c, this could potentially be done at many points in the asteroid belt with nearly constant delivery to the moon. The Martian Way, by Isaac Asimov, did something slightly similar. It is about a Mars colony that was having a shortage of water and asked the Earth to supply it. An Earth politician advocated against giving them more, citing a shortage of supply, so the Mars colony found their own solution. They sent out a group in a large rocket to find a large, mostly ice asteroid to bring back. They ended up embedding the rocket into the asteroid and using it as a source of fuel to get home. They ended up with more than enough water for themselves, but had to expend a sizable portion of it to get it there and land it, rather than just crash it. https://en.wikipedia.org/wiki/The_Martian_Way

– computercarguy

Large scale projects like this need to consider the economics of moving all that material around the Solar System. You will need to apply energy to move it from whatever orbit it is currently in, and then, since you are opposed to ballistic impact, more energy to match the orbital speed of the target and deliver it at minimal speeds. Depending on where the materials are in relation to the target, you have several choices. If you are in a farther orbit from the material source than the local sun, you can use high performance solar sails to tow the materials into the appropriate orbits. The sail can accelerate to the target planet, then "tack" by turning the thrust vector against the direction of travel to match the orbital speed.
[Figure: solar sail accelerating to the target]
[Figure: solar sail decelerating to the target]

While the usual image of solar sails is vast, slow moving devices, high performance sails with accelerations of 1 mm/s$^2$ can move across the Solar System at impressive speeds; a one-way trip from Earth to Pluto at these speeds would only take 3 years (although that is a flypast). The real key is to set up a "pipeline" and send materials in a steady stream. While it may take 3 years for the first "package" to arrive, once the pipeline is filled, there is a steady stream of materials on the way. K. Eric Drexler pioneered the idea of thin film solar sails as far back as the 1970s. Using systems of mirrors at the target to reflect sunlight onto fast moving solar sails to assist slowing them down solves two issues: not only do you have finer control of incoming sails, but you can also use the solar energy when not controlling sails to provide energy to the surface, to assist in liquefying solids or turning liquid materials into gasses (an extreme case would be to focus solar energy onto the surface of Mars and boil oxygen from the iron oxide on the surface; this is obviously energy intensive and inefficient, but with sufficient energy you can do almost anything). Looking the other way, you could set up continental-sized mirrors or platoons of mirrors to accelerate solar sails from the far reaches of the Solar System to send cut-up pieces of comets back to the inner Solar System for your terraforming project. Given the weaker sunlight and vast distances, you might be looking at a decade before the first deliveries from the "pipeline" arrive, but once again, once the pipeline is filled, you have a steady stream of deliveries.
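The 3-year Earth-Pluto figure is consistent with constant-acceleration kinematics (a rough sketch: acceleration from rest over ~39 AU, ignoring the actual orbital mechanics):

```python
from math import sqrt

AU = 1.496e11                 # meters
a = 1e-3                      # sail acceleration, m/s^2
d = 39 * AU                   # rough Earth-Pluto distance for a flypast

t_years = sqrt(2 * d / a) / 3.156e7   # from d = a*t^2/2, starting at rest
```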
Without knowing important issues like the actual distances between the supply sources and targets, orbital velocities and so on, the answer is hand-waved, but the ever useful Atomic Rockets site has a lot of relevant information and equations to work with so you can calculate delivery times, velocity changes etc.

– Thucydides

I am still calculating distances, orbits, and velocities. So, unfortunately, I am unable to provide more detailed information at this time. However, your answer is incredibly helpful. – Olga Dec 27 '17 at 1:26
\begin{definition}[Definition:Conjugate Symmetric Mapping]
Let $\C$ be the field of complex numbers.
Let $\mathbb F$ be a subfield of $\C$.
Let $V$ be a vector space over $\mathbb F$.
Let $\langle \cdot, \cdot \rangle: V \times V \to \mathbb F$ be a mapping.
Then $\langle \cdot, \cdot \rangle: V \times V \to \mathbb F$ is '''conjugate symmetric''' if and only if:
:$\forall x, y \in V: \quad \langle x, y \rangle = \overline {\langle y, x \rangle}$
where $\overline {\langle y, x \rangle}$ denotes the complex conjugate of $\langle y, x \rangle$.
\end{definition}
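A quick numerical illustration of the definition, using the standard inner product on $\C^n$ (here taken conjugate-linear in the first slot; the opposite convention is equally common):

```python
def inner(x, y):
    # standard inner product on C^n, conjugate-linear in the first slot
    return sum(a.conjugate() * b for a, b in zip(x, y))

x = [1 + 2j, 3 - 1j]
y = [0.5j, 2 + 2j]
lhs = inner(x, y)
rhs = inner(y, x).conjugate()   # conjugate symmetry: <x, y> = conj(<y, x>)
```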
Use Stokes' Theorem to evaluate $\int_C \mathbf F \cdot d\mathbf r$, where $C$ is oriented counterclockwise as viewed from above. $\mathbf F(x, y, z) = (x + y^2)\mathbf i + (y + z^2)\mathbf j + (z + x^2)\mathbf k$, and $C$ is the triangle with vertices $(5, 0, 0)$, $(0, 5, 0)$, and $(0, 0, 5)$.

Stokes' theorem:

Stokes' theorem states that, given a closed curve $C$, the circulation of a vector field $\mathbf F$ around $C$ equals the flux of its curl through any surface $S$ having $C$ as its boundary, oriented according to the right-hand rule:
$$\int_C \mathbf F \cdot d\mathbf r = \iint_S (\operatorname{curl} \mathbf F) \cdot d\mathbf S.$$

Answer and Explanation:

Find the curl of the vector field $\mathbf F = P\,\mathbf i + Q\,\mathbf j + R\,\mathbf k$:
$$\operatorname{curl} \mathbf F = \begin{vmatrix} \mathbf i & \mathbf j & \mathbf k \\ \frac{\partial}{\partial x} & \frac{\partial}{\partial y} & \frac{\partial}{\partial z} \\ P & Q & R \end{vmatrix} = \left( \frac{\partial R}{\partial y} - \frac{\partial Q}{\partial z},\ \frac{\partial P}{\partial z} - \frac{\partial R}{\partial x},\ \frac{\partial Q}{\partial x} - \frac{\partial P}{\partial y} \right) = (-2z, -2x, -2y).$$

Find the equation of the plane through the given points $A(5, 0, 0)$, $B(0, 5, 0)$ and $C(0, 0, 5)$:
$$\det \begin{pmatrix} x - 5 & y & z \\ -5 & 5 & 0 \\ -5 & 0 & 5 \end{pmatrix} = 0 \quad\Longrightarrow\quad x + y + z = 5.$$

Apply Stokes' theorem, parametrizing the surface in rectangular coordinates:
$$\mathbf r(x, y) = (x,\ y,\ 5 - x - y), \qquad 0 \le x \le 5, \quad 0 \le y \le 5 - x.$$

Calculate the fundamental vector product:
$$\mathbf r_x \times \mathbf r_y = \begin{vmatrix} \mathbf i & \mathbf j & \mathbf k \\ 1 & 0 & -1 \\ 0 & 1 & -1 \end{vmatrix} = (1, 1, 1),$$
which points upward, consistent with the counterclockwise orientation of $C$ seen from above.

Calculate the integral. On the surface, $x + y + z = 5$, so
$$(\operatorname{curl} \mathbf F) \cdot (\mathbf r_x \times \mathbf r_y) = (-2z, -2x, -2y) \cdot (1, 1, 1) = -2(x + y + z) = -10.$$
Hence
$$\int_C \mathbf F \cdot d\mathbf r = \int_{0}^{5} \int_{0}^{5 - x} (-10) \, dy \, dx = \int_{0}^{5} (10x - 50) \, dx = \Big[ 5x^2 - 50x \Big]_{0}^{5} = -125.$$

The result of the integral is:
$$\Longrightarrow \boxed{-125}$$
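The circulation can also be checked numerically, integrating $\mathbf F \cdot d\mathbf r$ along the three edges of the triangle with plain midpoint-rule quadrature (no external libraries):

```python
def line_integral(p0, p1, steps=5000):
    # midpoint-rule approximation of the work integral of F along a segment
    total = 0.0
    for s in range(steps):
        t = (s + 0.5) / steps
        x, y, z = (p0[i] + t * (p1[i] - p0[i]) for i in range(3))
        F = (x + y**2, y + z**2, z + x**2)
        d = tuple((p1[i] - p0[i]) / steps for i in range(3))
        total += F[0] * d[0] + F[1] * d[1] + F[2] * d[2]
    return total

A, B, C = (5, 0, 0), (0, 5, 0), (0, 0, 5)
circulation = line_integral(A, B) + line_integral(B, C) + line_integral(C, A)
flux = -10 * (5 * 5 / 2)   # constant integrand -10 over a triangle of area 12.5
```

Both the direct line integral and the constant flux density over the triangle give $-125$.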
Pure and Applied Geophysics, January 2015, Volume 172, Issue 1, pp 23–31

The Negative Binomial Distribution as a Renewal Model for the Recurrence of Large Earthquakes

Alejandro Tejedor, Javier B. Gómez, Amalio F. Pacheco

The negative binomial distribution is presented as the waiting time distribution of a cyclic Markov model. This cycle simulates the seismic cycle in a fault. As an example, this model, which can describe recurrences with aperiodicities between 0 and 0.5, is used to fit the Parkfield, California earthquake series in the San Andreas Fault. The performance of the model in forecasting is expressed in terms of error diagrams and compared with other recurrence models from the literature.

Keywords: negative binomial distribution · renewal process · seismic cycle · earthquake forecasting

Appendix: Asymptotic Behavior of the Hazard Rate Function

Recall that the $N$-step Markov-cycle distribution, Eq. (7), collapses to an NBD when all transition probabilities are equal, $a = a_{1} = a_{2} = \cdots = a_{N}$:
$$P_{N,a} (n) = (1 - a)^{N} a^{n - N} \binom{n - 1}{N - 1} = \left( \frac{1 - a}{a} \right)^{N} a^{n} \frac{(n - 1) \cdots (n - N + 1)}{(N - 1)!}.$$
Using the definition of hazard rate for a discrete distribution, Eq. (15), we can write
$$h_{N,a} (n) = \frac{P_{N,a} (n)}{\sum_{i = n}^{\infty} P_{N,a} (i)} = \frac{a^{n} (n - 1) \cdots (n - N + 1)}{\sum_{i = n}^{\infty} a^{i} (i - 1) \cdots (i - N + 1)} = \frac{1}{\sum_{i = n}^{\infty} a^{i - n} \, \frac{i - 1}{n - 1} \cdots \frac{i - N + 1}{n - N + 1}}.$$
To proceed further, we make the following change of variable:
$$i - n = m.$$
With this change of variable, the hazard rate of the general, two-parameter NBD, Eq.
(20), can be written as
$$h_{N,a}^{ - 1} = \mathop \sum \limits_{m = 0}^{\infty } a^{m} \left( {1 + \frac{m}{n - 1}} \right) \ldots \left( {1 + \frac{m}{n - N + 1}} \right).$$
In the long-time limit, i.e., when $n$ tends to infinity, we have
$$\mathop {\lim }\limits_{n \to \infty } h_{N,a}^{ - 1} = \mathop \sum \limits_{m = 0}^{\infty } a^{m} \left( {1 \times 1 \times \cdots \times 1} \right) = \mathop \sum \limits_{m = 0}^{\infty } a^{m} = \frac{1}{1 - a}.$$
So, in the general, two-parameter NBD the asymptotic limit of the hazard rate is:
$$\mathop {\lim }\limits_{n \to \infty } h_{N,a} = 1 - a.$$
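The limit $h_{N,a}(n) \to 1 - a$ is easy to verify numerically; a small sketch using the pmf and hazard definitions as written above (the series for $1/h$ avoids the floating-point underflow of $a^n$ at large $n$):

```python
from math import comb

def nbd_pmf(n, N, a):
    # P_{N,a}(n) = C(n-1, N-1) (1-a)^N a^(n-N), defined for n >= N
    return comb(n - 1, N - 1) * (1 - a) ** N * a ** (n - N)

def hazard_from_pmf(n, N, a, tail=2000):
    # h(n) = P(n) / sum_{i >= n} P(i); the geometric tail is truncated
    return nbd_pmf(n, N, a) / sum(nbd_pmf(i, N, a) for i in range(n, n + tail))

def hazard_inverse_series(n, N, a, terms=2000):
    # the series for 1/h derived above: sum_m a^m prod_{k=1}^{N-1} (1 + m/(n-k))
    total = 0.0
    for m in range(terms):
        prod = 1.0
        for k in range(1, N):
            prod *= 1 + m / (n - k)
        total += a ** m * prod
    return total
```

For $N = 3$, $a = 0.7$ the hazard at $n = 5000$ already sits within $3\times10^{-4}$ of $1 - a = 0.3$.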
© Springer Basel 2014

1. Saint Anthony Falls Laboratory, Department of Civil Engineering, University of Minnesota, Minneapolis, USA
2. Department of Earth Sciences, University of Zaragoza, Zaragoza, Spain
3. Department of Theoretical Physics, University of Zaragoza, Zaragoza, Spain

Tejedor, A., Gómez, J.B. & Pacheco, A.F. Pure Appl. Geophys. (2015) 172: 23. https://doi.org/10.1007/s00024-014-0871-2. Received 14 January 2014; Revised 19 May 2014; Accepted 29 May 2014. Publisher: Springer Basel.
\begin{definition}[Definition:Relative Semantic Equivalence/Term]
Let $\mathcal F$ be a theory in the language of predicate logic.
Let $\tau_1, \tau_2$ be terms.
Then $\tau_1$ and $\tau_2$ are '''semantically equivalent with respect to $\mathcal F$''' if and only if:
:$\operatorname{val}_{\mathcal A} (\tau_1) [\sigma] = \operatorname{val}_{\mathcal A} (\tau_2) [\sigma]$
for all models $\mathcal A$ of $\mathcal F$ and assignments $\sigma$ for $\tau_1, \tau_2$ in $\mathcal A$.
Here $\operatorname{val}_{\mathcal A} (\tau_1) [\sigma]$ denotes the value of $\tau_1$ under $\sigma$.
\end{definition}
Methodology article

Identifying GPCR-drug interaction based on wordbook learning from sequences

Pu Wang, Xiaotong Huang, Wangren Qiu & Xuan Xiao (ORCID: orcid.org/0000-0003-1016-7544)

BMC Bioinformatics volume 21, Article number: 150 (2020)

G protein-coupled receptors (GPCRs) mediate a variety of important physiological functions, are closely related to many diseases, and constitute the most important target family of modern drugs. Therefore, research on GPCR analysis and GPCR ligand screening is a hotspot of new drug development. Accurately identifying GPCR-drug interactions is one of the key steps in designing GPCR-targeted drugs. However, it is prohibitively expensive to experimentally ascertain the interaction of GPCR-drug pairs on a large scale. Therefore, it is of great significance to predict the interaction of GPCR-drug pairs directly from the molecular sequences. With the accumulation of known GPCR-drug interaction data, it is feasible to develop sequence-based machine learning models for query GPCR-drug pairs.

In this paper, a new sequence-based method is proposed to identify GPCR-drug interactions. For GPCRs, we use a novel bag-of-words (BoW) model to extract sequence features, which can extract more pattern information from low order to high order while limiting the dimension of the feature space. For drug molecules, we use the discrete Fourier transform (DFT) to extract higher-order pattern information from the original molecular fingerprints. The feature vectors of the two kinds of molecules are concatenated and input into a simple prediction engine, the distance-weighted K-nearest-neighbor (DWKNN) classifier. This basic method is easy to enhance through ensemble learning.
Through testing on recently constructed GPCR-drug interaction datasets, it is found that the proposed methods generalize better than the existing sequence-based machine learning methods, even better than an unconventional method in which the prediction performance was further improved by a post-processing procedure (PPP).

The proposed methods are effective for GPCR-drug interaction prediction, and may also be potential methods for other target-drug interaction prediction, or protein-protein interaction prediction. In addition, the newly proposed feature extraction method for GPCR sequences is a modified version of the traditional BoW model and may be useful for solving problems of protein classification or attribute prediction. The source code of the proposed methods is freely available for academic research at https://github.com/wp3751/GPCR-Drug-Interaction.

As the largest family of human membrane proteins, GPCRs mediate multiple physiological processes such as neurotransmission, cellular metabolism, secretion, cellular differentiation, growth, inflammatory, and immune responses [1, 2]. As a result, these receptors have emerged as the most important drug targets in human pathophysiology [3, 4]. According to the report in [5], 475 drugs target 108 unique non-olfactory GPCRs, accounting for about 34% of all drugs approved by the US Food and Drug Administration (FDA). Furthermore, dozens of novel GPCR targets that are not yet modulated by approved drugs are now in clinical trials; these receptors are potentially novel targets for the treatment of various indications. GPCR-related drug discovery often relies on binding affinity identification. The traditional high-throughput screening (HTS) methods are receptor binding assays, such as the scintillation proximity assay (SPA) and time-resolved fluorescence resonance energy transfer (TR-FRET) technology [6].
However, with the development of computational methods for GPCR drug discovery, HTS can be aided by in silico modeling, including structure-based methods and sequence-based methods. The combination of in vitro and in silico methods will reduce both time and cost by reducing the number of candidate compounds to be experimentally tested. The structure-based approach plays an important role in drug discovery, especially for enzyme-targeted drugs [7, 8]. However, this approach is restrained in the development of GPCR-targeted drugs because it is very difficult to acquire reliable 3D structures of these receptors. With the breakthroughs in GPCR crystallography [9, 10], structure-based methods are potentially impactful for GPCR-targeted drug design [11,12,13]. For now, sequence-based methods may be an easy and efficient choice owing to machine learning technology and the accumulation of target-drug interaction data stored in KEGG [14], SuperTarget [15], DrugBank [16] and so on. Identifying target-drug interactions has become a hot topic in bioinformatics, and a great deal of effort has been made in this area, bringing up many effective methods [17,18,19,20,21,22,23,24,25,26]. Because of the importance and particularity of GPCRs (the available 3D structures are very limited), we focus this study on computational approaches for identifying GPCR-drug interactions based only on sequence information. Because two types of molecules are involved in the interaction between GPCRs and drugs, methods combining the chemical structure information of drugs with the sequence information of proteins are often used here. Yamanishi et al. [27] carried out a series of studies on the prediction of target-drug interaction networks, including GPCR-drug interactions.
The methods used in these studies are in fact based on sequence similarities, including the chemical structure similarities between compounds computed by SIMCOMP [28], the pharmacological effect similarities between compounds computed by the weighted cosine correlation coefficient, and the sequence similarities between proteins computed by a normalized version of Smith-Waterman scores [29]. Then, in the framework of supervised bipartite graph inference, compounds and proteins were mapped onto a unified feature space, in which the closer a compound and a protein were, the more likely the two objects were to interact with each other. In contrast, He et al. [30] studied GPCR-drug interactions based on functional groups and biological features. In this method, any drug was formulated as a 28D feature vector according to its chemical functional groups, and any protein was formulated as a 139D feature vector using the pseudo amino acid composition (PseAAC) [31, 32] method. Machine learning techniques such as feature selection and the nearest neighbor algorithm were then adopted to solve the interaction prediction problem. iGPCR-Drug [33] was also a sequence-based method specifically proposed for identifying GPCR-drug interactions. In this method, any drug was represented as a 2D fingerprint via a chemical toolbox called OpenBabel [34], and then the discrete Fourier transform (DFT) was used to extract 256D frequency features for each drug. Accordingly, any GPCR was formulated as a 22D feature vector through the PseAAC method. In such a way, any GPCR-drug pair, whether interactive or non-interactive, could be formulated as a 278D feature vector by combining the two types of feature vectors. Finally, these feature vectors were input into the fuzzy K-nearest-neighbor classifier for interaction recognition. Recently, Hu et al. [35] proposed a new sequence-based method for the prediction of GPCR-drug interactions.
In this method, the discrete wavelet transform (DWT) was utilized to extract the features of drugs based on their fingerprints, and any drug molecule was represented as a 128D feature vector. For GPCRs, pseudo position specific scoring matrix (PsePSSM) features were extracted to encode any GPCR as a 140D vector. With the combined 268D feature vectors as input, several classifiers were tested, including the optimized evidence-theoretic K nearest neighbor (OET-KNN), radial basis function networks (QuickRBF), the support vector machine (SVM), and random forest (RF). The experimental results showed that RF consistently performs better than the others. To reduce the false positive and false negative errors, the initial model was further improved with a drug-association-matrix-based PPP. Although this advanced model, characterized by the combination of a progressive feature extraction method (PsePSSM and DWT), an ensemble learning method (RF), and a post-processing procedure (PPP), was better than the foregoing ones, it seemed that its generalization ability was still limited, because the results of the independent test were much lower than those of cross-validation on the training dataset, especially when there was no PPP. So it is very meaningful to develop models with high generalization ability.

In this study, we propose a new powerful sequence-based method for identifying GPCR-drug interactions based on wordbook learning from sequences. For GPCRs, we encode the sequences by the physicochemical properties of amino acids, and then create a wordbook through clustering technology. Based on the wordbook, any GPCR is formulated as a feature vector containing the word frequencies.
The wordbook of drugs is easy to construct: DFT is carried out on the drugs' fingerprints, the amplitudes at different frequency points are taken as the words of the drug wordbook, and each drug is then formulated as a feature vector of the same dimension as the GPCR feature vector. In the joint feature space of GPCR-drug pairs, a very simple machine learning method, the distance-weighted K-nearest-neighbor classifier (DWKNN) [36], is employed for interaction prediction. This basic method can easily be enhanced through ensemble learning. An independent test on the benchmark dataset demonstrates the generalization ability of the proposed methods. Firstly, we fix the hyperparameter in the prediction engine DWKNN, and compare the prediction performance of different representations of GPCRs and drugs. Secondly, with the best feature representation established by experiments, we test the effect of the hyperparameter in DWKNN. Thirdly, we try to enhance the base model through ensemble learning. Finally, we compare the performance of the proposed methods with that of the previous ones through cross-validation and independent test.

Experimental datasets and performance measurement

Our experiments are carried out on two recently constructed datasets: D92M and Check390 [35], which are used for training and independent test respectively. D92M contains 92 unique GPCRs and 217 unique drugs, which constitute 635 interactive pairs and 1225 non-interactive pairs. D92M is in fact a refined dataset based on the original data used in [30], obtained by correcting the falsely labeled GPCR-drug pairs. Check390 consists of 130 interactive pairs and 260 non-interactive pairs that do not appear in the training dataset.
The metrics for performance evaluation used in our experiments include the Receiver Operating Characteristic curve (ROC), the Area Under an ROC Curve (AUC), Sensitivity (Sn), Specificity (Sp), Strength (Str, the average of Sensitivity and Specificity), Accuracy (Acc), and the Matthews correlation coefficient (MCC).

Effect of different physicochemical properties for encoding GPCRs

There are more than 500 amino acid indices in AAindex, a database of numerical indices representing various physicochemical and biochemical properties of amino acids and pairs of amino acids [37]. In this section, we test the effects of five common amino acid indices: hydropathy index (Entry: KYTJ820101), molecular weight (Entry: FASG760101), isoelectric point (Entry: ZIMJ680104), pK-N (Entry: FASG760104) and pK-C (Entry: FASG760105). The ROC curves of ten-fold cross-validation on D92M with different amino acid indices for encoding GPCRs are shown in Fig. 1. Because the hydropathy index has the biggest AUC, we choose it as the default amino acid index.

Fig. 1: ROC curves of ten-fold cross-validation on D92M with different amino acid indices for encoding GPCRs

Effect of different feature representations of drugs

Though DFT was successfully used in previous work [33], no contrast experiment was carried out there to prove the necessity of DFT. In this section, with the GPCR representation fixed, ten-fold cross-validation is carried out on D92M while representing the drugs with the primary molecular fingerprint (without DFT) and with frequency amplitudes (with DFT) respectively; the ROC curves are shown in Fig. 2. It is clear that the ROC curve with DFT is always above that without DFT. This is because DFT can extract more pattern information than the original structural description in the form of a molecular fingerprint.
Fig. 2: ROC curves of ten-fold cross-validation on D92M while representing drugs with the primary molecular fingerprint (without DFT) or frequency amplitudes (with DFT)

Effect of different feature representations of GPCRs

In this experiment, we compare the proposed BoW model with traditional ones such as amino acid composition (AAC), dipeptide composition (DPC), and their combination AAC + DPC. Figure 3 shows the ROC curves of different feature representations of GPCRs under ten-fold cross-validation on D92M. It can be seen that the performance of AAC is the worst. This is inevitable because the sequence order information is completely ignored. By taking adjacent-residue information into account, DPC obtains better results than AAC. The performance of AAC + DPC is only slightly better than that of DPC. The performance of the proposed method is significantly better than the others, because more sequence order information and physicochemical information are taken into account.

Fig. 3: ROC curves of ten-fold cross-validation on D92M with different feature representations of GPCRs

Effect of the hyperparameter in DWKNN

Figure 4 shows the AUC values obtained with different K values in DWKNN. As we can see, at the beginning, AUC improves significantly as the number of nearest neighbors increases. However, after K = 8, the values begin to oscillate, and at K = 13 the AUC reaches its maximum. So K = 13 is set as the default hyperparameter value in DWKNN when using the hydropathy index for GPCRs and DFT for drugs.

Fig. 4: AUCs of ten-fold cross-validation on D92M with different K values in DWKNN

Effect of the hyperparameter in the ensemble model

In Fig. 1, the ROC curves with different amino acid indices do not look very different from each other, so what will happen if all five indices are used? In this experiment, we try to enhance the base model through Bagging, the ensemble learning method used by RF [35].
In the proposed ensemble model, the number of prediction engines for each amino acid index (called Ne) is the only hyperparameter, and its impact on the ensemble model is displayed in Table 1, from which we can find three points. Firstly, compared with the best base learner (hydropathy index and a DWKNN engine with K = 13), nearly all the metrics are improved in the ensemble model (five amino acid indices and different DWKNN engines with random K values). Secondly, the Sn and Sp values of both the base learner and the ensemble models are not very biased although the dataset is imbalanced. Thirdly, for the proposed methods the MCC values and the maximum MCC values are not very different, so there is no need to adjust the threshold values to chase the maximum, which carries a risk of over-fitting on the training dataset. Because the biggest MCC is obtained when Ne = 4, we choose it as the default hyperparameter value in the ensemble model.

Table 1: Performance comparisons between the base learner and ensemble models on D92M over leave-one-out cross-validation. All the results are obtained by setting 0.5 as the default discrimination threshold to generate the prediction label, except the maximum MCC values, which are obtained by identifying the thresholds that maximize the values of MCC.

Comparison with other methods

To demonstrate the performance of the proposed methods for predicting GPCR-drug interactions, we test them on the training dataset D92M and the independent test dataset Check390 respectively, and compare them with several state-of-the-art methods, including iGPCR-Drug, OET-KNN, QuickRBF, SVM, RF and RF + PPP. iGPCR-Drug employs PseAAC features of GPCRs, DFT features of drugs, and an FKNN classifier. OET-KNN, QuickRBF, SVM, RF and RF + PPP employ PsePSSM features of GPCR sequences and DWT features of drug fingerprints. Besides classic machine learning modules, RF + PPP uses a PPP to improve the prediction performance.
The results of leave-one-out cross-validation on D92M are listed in Table 2. It should be noted that the results of the other six methods were reported in [35]. From this table we can see that the Sn values of the proposed methods are higher than those of the other methods, while the Sp, Acc and MCC values of the proposed methods are lower than the others. This may be because the prediction engine used in the proposed methods is relatively weak.

Table 2: Performance comparisons of different methods on D92M over leave-one-out cross-validation. The best results for each metric are in bold.

For machine learning models, generalization ability is best evaluated through an independent test. With D92M as the training dataset, the results of the independent test on Check390 are listed in Table 3, in which the results of the other six models are also from [35]. From this table we can find that the proposed methods almost always outperform the others across the five metrics, except OET-KNN, which achieves the highest value of Sp (84.2%) while having the lowest value of Sn (67.7%). Among the other models, when only the classic machine learning methods are considered, RF, characterized by advanced feature extraction and ensemble learning, has the maximum MCC (0.54), which is ~13% lower than that of the proposed base learner (0.61), and ~16% lower than that of the proposed ensemble model (0.63). By employing a complex PPP, RF + PPP gets significant performance gains. However, the proposed methods (without PPP) outperform it across all metrics. All these results demonstrate the effectiveness of the proposed methods.

Table 3: Performance comparisons of different methods on the independent test dataset Check390. The best results for each metric are in bold.

GPCRs are the most important drug targets. The accurate identification of GPCR-drug interactions is fundamental to the discovery of GPCR-related drugs.
Since the structure of a GPCR is difficult to obtain, sequence-based machine learning methods are particularly important as an initial screening step to select the most likely candidates from hundreds or even thousands of candidates for wet-lab experiments, thereby reducing the cost and time of experiments. In this paper, a new sequence-based method is proposed for the determination of GPCR-drug interactions. Thanks to the effective feature extraction method, good prediction performance is achieved even with a relatively weak classifier as the prediction engine. Based on this basic method, the prediction performance can be further improved through ensemble learning.

In this paper, a new feature extraction method for GPCR sequences is proposed, inspired by the fact that the traditional BoW models are weak in extracting long-fragment information (the feature dimension is too high) and ignore the physicochemical properties of amino acids. It is shown through comparison experiments that the proposed feature extraction method outperforms traditional BoW models such as AAC and DPC. When compared with other models using PseAAC and PsePSSM, it is also competitive. Although this method is used here for GPCR feature extraction, it is clearly a general peptide or protein feature extraction method that may be used to solve other target-drug interaction or protein classification problems. However, there are still three points that need to be further investigated. First, there are hundreds of amino acid indices, and how to find the most proper ones is a big challenge. Second, the creation of wordbooks relies on the clustering algorithm. In this study, only the simple C-means clustering algorithm is tried. More advanced clustering algorithms may create better wordbooks, so as to improve the performance of the model. Third, for fragments of different lengths, how many clustering centers should be selected to constitute the dictionary entries?
Intuitively, as the length of the fragments increases, there are more amino acid combinations, and the fragments should be grouped into more clusters. However, the theoretical optimum is difficult to obtain. In the future, we will do more in-depth research on this feature extraction method.

In this paper, we propose a new sequence-based method for GPCR-drug interaction prediction. The remarkable feature of this method is the use of a modified BoW model to represent GPCR sequences. Compared with traditional BoW models such as AAC, DPC, etc., this method can extract more pattern information from low order to high order while restricting the dimension of the feature space. In addition, the physicochemical properties of amino acids can be taken into account, so as to improve the representation ability. In terms of drug representation, we use the classical DFT transform. Compared with the original molecular fingerprint, DFT can extract more advanced pattern information, reduce the feature dimension and improve the prediction performance. The experimental results on the independent test dataset show that the proposed methods are better than the other sequence-based methods in generalization ability. It should be noted that the proposed methods were tested on only one pair of training and independent test datasets for GPCR-drug interaction, and they need to be evaluated on more datasets. We believe that although there are many effective feature extraction methods for amino acid sequences, such as AAC, DPC, PseAAC, PsePSSM, etc., the proposed new feature extraction method based on wordbook learning will also be useful for solving problems of target-drug interaction, protein-protein interaction, and protein attribute prediction.

In this section, we explain the proposed method in detail, including the GPCR representation, the drug representation and the prediction engine.
For sequence-based methods, the groundwork is to formulate the molecules with an effective mathematical expression that can truly reflect their innate relation with the label to be predicted [38]. When there is not enough data for automatic feature learning by neural network methods, the BoW [39, 40] model is a quality replacement that is very flexible, and it has been widely used in natural language processing and image processing. The first stage of BoW is to design the wordbook, for example by the n-gram method, which splits the sentences (or sequences) into words of length n; the set of unique words then constitutes the wordbook. This strategy has long been used in bioinformatics, for example AAC with n = 1 and DPC with n = 2. However, the power of AAC and DPC is very restricted, because the sequence order information is almost completely neglected. Increasing n takes more order information into account, but the size of the wordbook becomes too large; for example, when n = 3, there are 20^3 = 8000 unique words in the wordbook. This is a very high-dimensional and sparse representation that is harder to model for computational reasons (space and time complexity). Moreover, the physicochemical properties of the 20 native amino acids are also ignored, while these properties define protein structures and functions [41]. To address these problems, we propose a novel method to represent GPCRs with the BoW model, as described in the following passages.

Design wordbook for GPCRs

A GPCR sequence containing L amino acid residues is often formulated in the following format, with the N-terminus at the left and the C-terminus at the right:
$$ \mathrm{G} = \mathrm{R}_1 \mathrm{R}_2 \dots \mathrm{R}_L $$
Given a physicochemical property of amino acids, the primary sequence can be encoded as a numerical sequence as follows:
$$ \mathrm{G_E} = \mathrm{E}_1 \mathrm{E}_2 \dots \mathrm{E}_L $$
where Ei is the property value of amino acid residue Ri.
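The encoding step above can be sketched as follows. This is a minimal illustration of ours, not the authors' code; the property values are the Kyte-Doolittle hydropathy index (AAindex entry KYTJ820101), the default index used later in the paper.

```python
# Encode a primary sequence G = R1 R2 ... RL as GE = E1 E2 ... EL,
# where each Ei is the physicochemical property value of residue Ri.
# Property table: Kyte-Doolittle hydropathy (AAindex entry KYTJ820101).
KYTE_DOOLITTLE = {
    "A": 1.8, "R": -4.5, "N": -3.5, "D": -3.5, "C": 2.5,
    "Q": -3.5, "E": -3.5, "G": -0.4, "H": -3.2, "I": 4.5,
    "L": 3.8, "K": -3.9, "M": 1.9, "F": 2.8, "P": -1.6,
    "S": -0.8, "T": -0.7, "W": -0.9, "Y": -1.3, "V": 4.2,
}

def encode_sequence(seq, index=KYTE_DOOLITTLE):
    """Map each residue Ri of a primary sequence to its property value Ei."""
    return [index[r] for r in seq]

print(encode_sequence("MIV"))  # [1.9, 4.5, 4.2]
```

Any other AAindex entry (molecular weight, isoelectric point, pK-N, pK-C, ...) can be substituted by swapping the property table.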
As described above, if we directly apply the n-gram model to split the GPCR sequences into words and select the unique ones to construct a wordbook, the wordbook size may be too large. To construct a small wordbook, we merge similar words by clustering. Specifically, if all the words are clustered into C clusters, then the C clustering centers constitute a small wordbook, because C is always much less than the number of unique words. The C-means clustering [42] method is used in this study. The wordbook of GPCRs is created in the following steps: (1) Encode the GPCR sequences according to a physicochemical property of amino acids. (2) Split the encoded sequences into fragments with different window sizes. (3) Cluster the fragments of the same length respectively, and take the clustering centers as the words of the GPCR wordbook. The fragments can be sampled from the sequences in two modes. Sampling mode 1 is to sample as many fragments as possible when there are not enough sequences; in this case we move the window along each sequence from left to right with stride 1. Sampling mode 2 is to randomly sample a certain number of fragments from each sequence when there are sufficient sequences. Figure 5 illustrates the process of creating the GPCR wordbook.

Fig. 5: Flowchart of creating the GPCR wordbook

In theory, any physicochemical property can be used here as long as it plays a role in the interaction between GPCRs and drugs. Hydropathy is an important physicochemical property of amino acids and affects the structure, stability and basic properties of proteins. We use the hydropathy property reported in [43] to encode the GPCRs, and then randomly select 500 fragments of length 2 from each encoded GPCR sequence. If the length of a sequence does not meet this condition, then sampling mode 1 is used. With the first element as the X-axis and the second element as the Y-axis, all the sampled fragments can be shown in Fig.
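Steps (2) and (3) above can be sketched as follows. This is our own minimal sketch, not the authors' implementation: fragments are sampled with stride 1 (sampling mode 1), and a plain Lloyd-style C-means stands in for the clustering method; all function names are ours.

```python
# Wordbook creation sketch: split encoded sequences into length-l fragments
# (sampling mode 1, stride 1), then cluster fragments of the same length;
# the C cluster centers become the wordbook entries.
import random

def split_fragments(encoded, l):
    """All length-l windows of a numeric sequence, stride 1."""
    return [tuple(encoded[i:i + l]) for i in range(len(encoded) - l + 1)]

def c_means(points, C, iters=50, seed=0):
    """Simple Lloyd-style C-means; returns the C cluster centers."""
    rng = random.Random(seed)
    centers = rng.sample(points, C)
    for _ in range(iters):
        clusters = [[] for _ in range(C)]
        for p in points:
            # assign each fragment to its nearest center (squared Euclidean)
            j = min(range(C),
                    key=lambda c: sum((a - b) ** 2 for a, b in zip(p, centers[c])))
            clusters[j].append(p)
        # move each center to the mean of its cluster (keep it if empty)
        centers = [tuple(sum(dim) / len(cl) for dim in zip(*cl)) if cl else centers[j]
                   for j, cl in enumerate(clusters)]
    return centers
```

For real data one would use a library clustering routine; the point here is only that the C centers, not the raw n-grams, form the wordbook.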
6, from which we can see that the fragments have an obvious tendency to cluster. This indicates that it is reasonable to construct a smaller wordbook.

Fig. 6: Fragments of length 2 sampled from the GPCR sequences encoded by the hydropathy property. Sampled fragments belonging to the same cluster are drawn in the same color and shape. The black asterisks are the clustering centers.

Feature extraction from GPCRs

Based on the wordbook, any GPCR can be represented as a feature vector in the following steps: (1) Encode the GPCR primary sequence by the same physicochemical property used in the process of creating the wordbook. (2) Split the encoded sequence into fragments of length l with sampling mode 1. (3) Count the number of times each word appears in the sequence: if a fragment is closest to one word in the wordbook according to Euclidean distance, then we say that this word appears once. (4) Formulate the GPCR as a feature vector containing the occurrence frequency of each word as follows:
$$ \mathbf{G}(l, C_l) = \left[\, f_1^l \;\; f_2^l \;\; \cdots \;\; f_{C_l}^l \,\right] $$
where l is the word length, Cl is the number of length-l words in the wordbook, and \( f_i^l \) is the ratio between the number of occurrences of the ith word and the number of fragments. If we change the window size when splitting the sequence, then we can get more features so as to integrate more pattern information. In particular, it is difficult to cluster the fragments of length 1 because of numerical instability, so we just use AAC for G(1, C1).

Feature extraction from drugs

A molecular fingerprint is a way of encoding the structure of a molecule, and it has been widely used in chemical informatics. Because of its effectiveness in previous work [21, 33, 35], we also extract features from drugs based on their molecular fingerprints. A drug's MOL file is a chemical file that contains information about atoms and bonds, and it can be obtained from the KEGG database (https://www.genome.jp/kegg/) via the drug code.
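Steps (2)-(4) of the GPCR feature extraction can be sketched as follows (our own minimal sketch, not the authors' code): each length-l fragment votes for its nearest wordbook entry under Euclidean distance, and the counts are normalized by the number of fragments.

```python
# BoW feature extraction: map each length-l fragment of an encoded sequence
# to its nearest wordbook word (squared Euclidean distance) and return the
# normalized word-frequency vector G(l, C_l) = [f_1 ... f_{C_l}].

def bow_features(encoded, wordbook, l):
    frags = [tuple(encoded[i:i + l]) for i in range(len(encoded) - l + 1)]
    counts = [0] * len(wordbook)
    for f in frags:
        # index of the nearest word in the wordbook
        j = min(range(len(wordbook)),
                key=lambda c: sum((a - b) ** 2 for a, b in zip(f, wordbook[c])))
        counts[j] += 1
    n = len(frags)
    return [c / n for c in counts]
```

Concatenating `bow_features` for several window sizes l gives the multi-scale GPCR representation described above.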
This MOL file can then be converted to a molecular fingerprint through the OpenBabel software (http://openbabel.org/). There are multiple output formats, such as FP2, FP3, FP4 and MACCS. In this study, the FP2 format, which encodes a drug as a 256-character hexadecimal string, is used. If we regard the FP2 molecular fingerprint as a sequence of spaced samples by converting the hexadecimal characters '0'-'F' to the numbers 0-15, then we can apply DFT to this digital signal. DFT has been successfully used for the prediction of GPCR-drug interactions [33]. However, because the amplitude spectrum of a real-valued digital signal is symmetric, only the first 128 amplitudes are used here to make up the feature vector D:
$$ \mathbf{D} = \left[\, A_1 \;\; A_2 \;\; \cdots \;\; A_{128} \,\right] $$
where Ai is the ith amplitude divided by the sum of all 128 amplitudes.

Representation of GPCR-drug pairs

Now we can represent a GPCR-drug pair, denoted P, by concatenating the GPCR and drug feature vectors, i.e. P = G(l, Cl) ⊕ D. For simplicity, we set Cl = 10*l. Because it is difficult to cluster the length-1 fragments due to numerical instability, we just use the 20-dimensional AAC when l = 1. Moreover, to make the feature dimension of GPCRs equal to that of drugs, we set C4 = 58. That is to say, Cl = 20, 20, 30, and 58 for l = 1, 2, 3 and 4 respectively. In such a way, each pair is formulated as a 256D feature vector:
$$ \mathbf{P} = \left[\, f_1^1 \;\cdots\; f_{20}^1 \;\; f_1^2 \;\cdots\; f_{20}^2 \;\; f_1^3 \;\cdots\; f_{30}^3 \;\; f_1^4 \;\cdots\; f_{58}^4 \;\; A_1 \;\; A_2 \;\cdots\; A_{128} \,\right] $$

Prediction engine

We employ the DWKNN classifier as the prediction engine; it has only one hyperparameter, and its performance depends to a great extent on the feature representation. DWKNN is an improvement on the original KNN algorithm.
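The drug feature vector D can be sketched as follows (a minimal illustration of ours, assuming a 256-character FP2 hex string as input; a direct O(N^2) DFT is used for clarity where a library FFT would be used in practice):

```python
# Drug features: read the FP2 hex string as 256 integer samples (0-15),
# take the magnitudes of the first half of the DFT coefficients (the
# spectrum of a real signal is symmetric), and normalize them to sum to 1.
import cmath

def drug_features(fp2_hex):
    x = [int(ch, 16) for ch in fp2_hex]          # '0'-'F' -> 0-15
    N = len(x)                                   # 256 for FP2
    amps = []
    for k in range(N // 2):                      # first 128 amplitudes
        X_k = sum(x[n] * cmath.exp(-2j * cmath.pi * k * n / N) for n in range(N))
        amps.append(abs(X_k))
    total = sum(amps)
    return [a / total for a in amps]             # A_i normalized
```

The resulting 128D vector matches the dimension chosen for the GPCR features, so the concatenated pair vector P is 256D.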
Its basic idea is to weight the evidence of each neighbor according to its distance from the unknown sample: the smaller the distance, the larger the weight the neighbor receives. When an unknown sample x is to be classified, the K nearest neighbors of x in the training dataset, together with their class labels, are given by \( (\boldsymbol{x}_k^{\ast}, y_k^{\ast}), 1 \le k \le K \). Let the distances of these neighbors from x be denoted dk, ordered so that d1 ≤ d2 ≤ ⋯ ≤ dK; then the weight of the kth nearest neighbor is defined as
$$ w_k = \begin{cases} \dfrac{d_K - d_k}{d_K - d_1}, & d_K \ne d_1 \\ 1, & d_K = d_1 \end{cases} $$
The Euclidean distance is used here, and it is clear that the smaller the distance of a neighbor is, the larger its weight will be. With the weights of the neighbors, we can calculate the output for x as
$$ o = \sum_{y_k^{\ast} = 1} w_k \Big/ \sum_{k=1}^{K} w_k $$
where \( y_k^{\ast} = 1 \) indicates that the kth neighbor is a positive sample, i.e. an interactive pair in this study. This output varies from 0 to 1, and can be taken as the probability of interaction. The larger the output is, the more likely it is that the query GPCR-drug pair is interactive. We usually choose a discrimination threshold t to generate the prediction label; for example, when o > t, we say the query sample is positive (interactive), otherwise it is negative (non-interactive). This trick is very useful when the training dataset is imbalanced.

Framework of the proposed methods

Figure 7 shows the framework of the proposed basic method. For a query GPCR-drug pair, we create the 128D feature vectors for the GPCR (FeatureG) and the drug (FeatureD) respectively, and then concatenate them into a 256D feature vector. This process is the same as that used for creating the training samples.
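The DWKNN scoring rule above can be written directly (our own sketch; the function name and data layout are ours):

```python
# DWKNN: distance-weight the K nearest training pairs and return the
# interaction probability o in [0, 1].
import math

def dwknn_score(x, train, K):
    """train: list of (feature_vector, label) pairs, label 1 = interactive."""
    dists = sorted((math.dist(x, v), y) for v, y in train)[:K]  # d1 <= ... <= dK
    d1, dK = dists[0][0], dists[-1][0]
    if dK == d1:
        weights = [1.0] * len(dists)
    else:
        weights = [(dK - d) / (dK - d1) for d, _ in dists]
    # o = sum of weights of positive neighbors / sum of all weights
    return sum(w for w, (_, y) in zip(weights, dists) if y == 1) / sum(weights)
```

Comparing the returned score against a threshold t (0.5 by default) yields the interactive / non-interactive label.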
The concatenated vector is input into the prediction engine DWKNN with a fixed K value (for example, 13) to get an output, which is compared with a discrimination threshold (for example, 0.5) to generate the prediction label.

Fig. 7: Framework of the proposed basic method

Figure 8 shows the framework of the proposed ensemble method, which can be described in the following steps: (1) different wordbooks are created with different amino acid indices; (2) different kinds of FeatureG are extracted based on these wordbooks; (3) each kind of FeatureG is concatenated with FeatureD; (4) to make the base learners as diverse as possible, the concatenated features are randomly discarded with a probability of 0.05 (RD), and are then input into different DWKNN engines with random K values sampled from 1 to 15; (5) the final output of the ensemble model is the average of the outputs of all base learners. It should be noted that the number of base learners depends on the number of amino acid indices and the number of prediction engines for each amino acid index (called Ne). For example, if five amino acid indices are used and Ne = 2, then there will be 10 base learners in total. The proposed framework may be further improved by new techniques such as Optimal Bayesian Classification [44] and Bayesian Inverse Reinforcement Learning [45].
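Steps (4)-(5) of the ensemble can be sketched as follows. This is a simplified sketch of ours: `score_fn` stands in for any DWKNN-style base scorer with signature `score_fn(x, train, K)`, and the per-index wordbook construction of steps (1)-(3) is assumed to have produced the input features already.

```python
# Ensemble sketch: each base learner gets a random K in 1..15 and a random
# ~5% feature dropout (RD); the ensemble output is the mean of base outputs.
import random

def ensemble_score(x, train, score_fn, n_learners=10, seed=0):
    rng = random.Random(seed)
    outputs = []
    for _ in range(n_learners):
        K = rng.randint(1, 15)                       # random K per base learner
        keep = [i for i in range(len(x)) if rng.random() >= 0.05]  # drop ~5%
        x_sub = [x[i] for i in keep]
        train_sub = [([v[i] for i in keep], y) for v, y in train]
        outputs.append(score_fn(x_sub, train_sub, K))
    return sum(outputs) / len(outputs)               # step (5): average
```

With five amino acid indices, this loop would run once per index, giving 5 * Ne base learners in total.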
Fig. 8: Framework of the proposed ensemble method

The datasets used during the current study are available from http://202.119.84.36:3079/TargetGDrug/

AAC: Amino acid composition
AAindex: Amino acid index database
AUC: Area under an ROC curve
BoW: Bag-of-words
DFT: Discrete Fourier transform
DPC: Dipeptide composition
DWKNN: Distance-weighted K-nearest-neighbor
HTS: High-throughput screening
MCC: Matthews correlation coefficient
PseAAC: Pseudo amino acid composition
PsePSSM: Pseudo position specific scoring matrix
PPP: Post-processing procedure
RF: Random forest
ROC: Receiver operating characteristic
Sn: Sensitivity
Sp: Specificity
Str: Strength (the average of Sensitivity and Specificity)
SVM: Support vector machine

[1] Jacoby E, Bouhelal R, Gerspacher M, Seuwen K. The 7TM G-protein-coupled receptor target family. ChemMedChem. 2006;1(8):760–82.
[2] Katritch V, Cherezov V, Stevens RC. Structure-function of the G protein-coupled receptor superfamily. Annu Rev Pharmacol Toxicol. 2013;53:531–56.
[3] Insel PA, Tang CM, Hahntow I, Michel MC. Impact of GPCRs in clinical medicine: monogenic diseases, genetic variants and drug targets. Biochim Biophys Acta. 2007;1768(4):994–1005.
[4] Heilker R, Wolff M, Tautermann CS, Bieler M. G-protein-coupled receptor-focused drug discovery using a target class platform approach. Drug Discov Today. 2009;14(5):231–40.
[5] Hauser AS, Attwood MM, Rask-Andersen M, Schioth HB, Gloriam DE. Trends in GPCR drug discovery: new agents, targets and indications. Nat Rev Drug Discov. 2017;16(12):829–42.
[6] Zhang R, Xie X. Tools for GPCR drug discovery. Acta Pharmacol Sin. 2012;33(3):372–84.
[7] Wlodawer A, Vondrasek J. Inhibitors of HIV-1 protease: a major success of structure-assisted drug design. Annu Rev Biophys Biomol Struct. 1998;27:249–84.
[8] Capdeville R, Buchdunger E, Zimmermann J, Matter A. Glivec (STI571, imatinib), a rationally developed, targeted anticancer drug. Nat Rev Drug Discov. 2002;1(7):493–502.
[9] Piscitelli CL, Kean J, Graaf CD, Deupi X.
A molecular pharmacologist's guide to GPCR crystallography. Mol Pharmacol. 2015;88(3):536–51.
[10] Jazayeri A, Dias JM, Marshall FH. From G protein-coupled receptor structure resolution to rational drug design. J Biol Chem. 2015;290(32):19489–95.
[11] Cooke RM, Brown AJ, Marshall FH, Mason JS. Structures of G protein-coupled receptors reveal new opportunities for drug discovery. Drug Discov Today. 2015;20(11):1355–64.
[12] Tautermann CS, Gloriam DE. Editorial overview: New technologies: GPCR drug design and function-exploiting the current (of) structures. Curr Opin Pharmacol. 2016;30:vii–x.
[13] Manglik A, Lin H, Aryal DK, McCorvy JD, Dengler D, Corder G, Levit A, Kling RC, Bernat V, Hubner H, et al. Structure-based discovery of opioid analgesics with reduced side effects. Nature. 2016;537(7619):185–90.
[14] Kanehisa M, Goto S, Hattori M, Aoki-Kinoshita KF, Itoh M, Kawashima S, Katayama T, Araki M, Hirakawa M. From genomics to chemical genomics: new developments in KEGG. Nucleic Acids Res. 2006;34(Database issue):D354–7.
[15] Gunther S, Kuhn M, Dunkel M, Campillos M, Senger C, Petsalaki E, Ahmed J, Urdiales EG, Gewiess A, Jensen LJ, et al. SuperTarget and Matador: resources for exploring drug-target relationships. Nucleic Acids Res. 2008;36(Database issue):D919–22.
[16] Wishart DS, Knox C, Guo AC, Cheng D, Shrivastava S, Tzur D, Gautam B, Hassanali M. DrugBank: a knowledgebase for drugs, drug actions and drug targets. Nucleic Acids Res. 2008;36(Database issue):D901–6.
[17] Lee I, Nam H. Identification of drug-target interaction by a random walk with restart method on an interactome network. BMC Bioinformatics. 2018;19(Suppl 8):208.
[18] Xie L, He S, Song X, Bo X, Zhang Z. Deep learning-based transcriptome data classification for drug-target interaction prediction. BMC Genomics. 2018;19(Suppl 7):667.
[19] Yamanishi Y. Sparse modeling to analyze drug-target interaction networks. Methods Mol Biol. 2018;1807:181–93.
[20] Ding Y, Tang J, Guo F.
The computational models of drug-target interaction prediction. Protein Pept Lett. 2019;27(5):348–58.
[21] Li L, Koh CC, Reker D, Brown JB, Wang H, Lee NK, Liow HH, Dai H, Fan HM, Chen L, et al. Predicting protein-ligand interactions based on bow-pharmacological space and Bayesian additive regression trees. Sci Rep. 2019;9(1):7703.
[22] Sachdev K, Gupta MK. A comprehensive review of feature based methods for drug target interaction prediction. J Biomed Inform. 2019;93:103159.
[23] Yan XY, Zhang SW, He CR. Prediction of drug-target interaction by integrating diverse heterogeneous information source with multiple kernel learning and clustering methods. Comput Biol Chem. 2019;78:460–7.
[24] You J, McLeod RD, Hu P. Predicting drug-target interaction network using deep learning model. Comput Biol Chem. 2019;80:90–101.
[25] Zhang W, Lin W, Zhang D, Wang S, Shi J, Niu Y. Recent advances in the machine learning-based drug-target interaction prediction. Curr Drug Metab. 2019;20(3):194–202.
[26] Zhao Q, Yu H, Ji M, Zhao Y, Chen X. Computational model development of drug-target interaction prediction: a review. Curr Protein Pept Sci. 2019;20(6):492–4.
[27] Yamanishi Y, Araki M, Gutteridge A, Honda W, Kanehisa M. Prediction of drug-target interaction networks from the integration of chemical and genomic spaces. Bioinformatics. 2008;24(13):i232–40.
[28] Hattori M, Okuno Y, Goto S, Kanehisa M. Development of a chemical structure comparison method for integrated analysis of chemical and genomic information in the metabolic pathways. J Am Chem Soc. 2003;125(39):11853–65.
[29] Smith TF, Waterman MS. Identification of common molecular subsequences. J Mol Biol. 1981;147(1):195–7.
[30] He Z, Zhang J, Shi XH, Hu LL, Kong X, Cai YD, Chou KC. Predicting drug-target interaction networks based on functional groups and biological features. PLoS One. 2010;5(3):e9603.
[31] Arif M, Hayat M, Jan Z.
iMem-2LSAAC: a two-level model for discrimination of membrane proteins and their types by extending the notion of SAAC into chou's pseudo amino acid composition. J Theor Biol. 2018;442:11–21. CAS PubMed Article Google Scholar Mei J, Zhao J. Analysis and prediction of presynaptic and postsynaptic neurotoxins by Chou's general pseudo amino acid composition and motif features. J Theor Biol. 2018;447:147–53. Xiao X, Min JL, Wang P, Chou KC. iGPCR-drug: a web server for predicting interaction between GPCRs and drugs in cellular networking. PLoS One. 2013;8(8):e72234. O'Boyle NM, Banck M, James CA, Morley C, Vandermeersch T, Hutchison GR. Open babel: an open chemical toolbox. J Cheminform. 2011;3:33. Hu J, Li Y, Yang J-Y, Shen H-B, Yu D-J. GPCR–drug interactions prediction using random forest with drug-association-matrix-based post-processing procedure. Comput Biol Chem. 2016;60:59–71. Dudani SA. The distance-weighted k-nearest-neighbor rule. IEEE Trans Syst Man Cybernetics. 1976;SMC-6(4):325–7. Kawashima S, Kanehisa M. AAindex: amino acid index database. Nucleic Acids Res. 2000;28(1):374. Chou K-C. Some remarks on protein attribute prediction and pseudo amino acid composition. J Theor Biol. 2011;273(1):236–47. Powell RT, Olar A, Narang S, Rao G, Sulman E, Fuller GN, Rao A. Identification of histological correlates of overall survival in lower grade Gliomas using a bag-of-words paradigm: a preliminary analysis based on Hematoxylin & Eosin Stained Slides from the lower grade Glioma cohort of the Cancer genome atlas. J Pathol Inform. 2017;8:9. PubMed PubMed Central Article Google Scholar Fanxiang Z, Yuefeng J, Levine MD. Contextual bag-of-words for robust visual tracking. IEEE Trans Image Process. 2018;27(3):1433–47. Kawashima S, Pokarowski P, Pokarowska M, Kolinski A, Katayama T, Kanehisa M. AAindex: amino acid index database, progress report 2008. Nucleic Acids Res. 2008;36(Database issue):D202–5. 
Fuente-Tomas L, Arranz B, Safont G, Sierra P, Sanchez-Autet M, Garcia-Blanco A, Garcia-Portilla MP. Classification of patients with bipolar disorder using k-means clustering. PLoS One. 2019;14(1):e0210314. Kyte J, Doolittle RF. A simple method for displaying the hydropathic character of a protein. J Mol Biol. 1982;157(1):105–32. Hajiramezanali E, Imani M, Braga-Neto U, Qian X, Dougherty ER. Scalable optimal Bayesian classification of single-cell trajectories under regulatory model uncertainty. BMC Genomics. 2019;20(Suppl 6):435. Imani M, Braga-Neto UM. Control of gene regulatory networks using Bayesian inverse reinforcement learning. IEEE/ACM Trans Comput Biol Bioinform. 2019;16(4):1250–61. This work has been supported by the Natural Science Foundation of China (No. 61841104, 31560316, 31860312, 31760315), the Natural Science Foundation of Hubei Province in China (No. 2019CFC870). The funding bodies have no involvement in the design of the study, data collection and analysis, or writing the manuscript. Computer School, Hubei University of Arts and Science, Xiangyang, 441053, China Pu Wang & Xiaotong Huang Computer Department, Jingdezhen Ceramic Institute, Jingdezhen, 333403, China Wangren Qiu & Xuan Xiao Pu Wang Xiaotong Huang Wangren Qiu Xuan Xiao PW and XX designed the method, drafted the manuscript, analyzed the data and carried out the experiments. XH and WQ participated in the design and discussion of the research, and modified the manuscript. All authors have read and approved the final manuscript. Correspondence to Xuan Xiao. Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. 
The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated in a credit line to the data. Wang, P., Huang, X., Qiu, W. et al. Identifying GPCR-drug interaction based on wordbook learning from sequences. BMC Bioinformatics 21, 150 (2020). https://doi.org/10.1186/s12859-020-3488-8 GPCR-drug interaction Machine Learning and Artificial Intelligence in Bioinformatics Machine learning for computational and systems biology
CommonCrawl
Feed-forward regulation adaptively evolves via dynamics rather than topology when there is intrinsic noise

Kun Xiong, Alex K. Lancaster, Mark L. Siegal & Joanna Masel

Nature Communications, volume 10, Article number: 2418 (2019)

In transcriptional regulatory networks (TRNs), a canonical 3-node feed-forward loop (FFL) is hypothesized to evolve to filter out short spurious signals. We test this adaptive hypothesis against a novel null evolutionary model. Our mutational model captures the intrinsically high prevalence of weak affinity transcription factor binding sites. We also capture stochasticity and delays in gene expression that distort external signals and intrinsically generate noise. Functional FFLs evolve readily under selection for the hypothesized function but not in negative controls. Interestingly, a 4-node "diamond" motif also emerges as a short spurious signal filter. The diamond uses expression dynamics rather than path length to provide fast and slow pathways. When there is no idealized external spurious signal to filter out, but only internally generated noise, only the diamond and not the FFL evolves. While our results support the adaptive hypothesis, we also show that non-adaptive factors, including the intrinsic expression dynamics, matter.

Transcriptional regulatory networks (TRNs) are integral to development and physiology, and underlie all complex traits. An intriguing finding about TRNs is that certain topological "motifs" of interconnected transcription factors (TFs) are overrepresented relative to random re-wirings that preserve the frequency distribution of connections1,2. The significance of this finding remains open to debate.
The canonical example is the feed-forward loop (FFL), in which TF A regulates a target C both directly, and indirectly via TF B, and no regulatory connections exist in the opposite direction1,2,3. Each of the three regulatory interactions in an FFL can be either activating or repressing, so there are eight distinct kinds of FFLs (Supplementary Fig. 1)4. Given the eight frequencies expected from the ratio of activators to repressors, two of these kinds of FFLs are significantly overrepresented4. In this paper, we focus on one of these two overrepresented types, namely the type 1 coherent FFL (C1-FFL), in which all three links are activating rather than repressing (Supplementary Fig. 1, left). C1-FFL motifs are an active part of systems biology research today, e.g. they are used to infer the function of specific regulatory pathways5,6. The overrepresentation of FFLs in observed TRNs is normally explained in terms of selection favoring a function of FFLs. Specifically, the most common adaptive hypothesis is that cells often benefit from ignoring short-lived signals and responding only to durable signals3,4,7. Evidence that C1-FFLs can perform this function comes from the behavior both of theoretical models4 and of in vivo gene circuits7. A C1-FFL can achieve this function when its regulatory logic is that of an AND gate, i.e. both the direct path from A to C and the indirect path from A to B to C must be activated before the response is triggered. In this case, the response will only be triggered if, by the time the signal trickles through the longer path, it is still active on the shorter path as well. This yields a response to long-lived signals but not short-lived signals. However, just because a behavior is observed, we cannot conclude that the behavior is a historical consequence of past selection favoring that behavior8,9. 
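The sign-sensitive delay described above can be illustrated with a toy deterministic sketch (not the paper's stochastic model; all rate constants and the threshold below are invented for illustration). Signal S drives effector C directly and via intermediate TF B; the AND gate means C is produced only while S is ON and B has accumulated past a threshold, so pulses shorter than the slow arm's delay never trigger a response:

```python
# Toy AND-gated C1-FFL, Euler-integrated. TF B accumulates slowly in
# response to signal S; effector C is produced only while S is ON AND
# B exceeds a threshold, so short pulses are filtered out.
def peak_effector(pulse_len, total=60.0, dt=0.01,
                  k_b=0.5, d_b=0.5, theta=0.5, k_c=1.0, d_c=0.2):
    b = c = peak = 0.0
    for i in range(int(total / dt)):
        s = 1.0 if i * dt < pulse_len else 0.0      # square signal pulse
        b += (k_b * s - d_b * b) * dt               # slow arm: S -> B
        gate = s > 0.0 and b > theta                # AND gate at the effector
        c += ((k_c if gate else 0.0) - d_c * c) * dt
        peak = max(peak, c)
    return peak
```

With these rates the slow arm needs about ln(2)/d_b ≈ 1.4 time units to cross the threshold, so a 1-unit pulse produces no effector at all, while a long pulse produces a strong response.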
The explanatory power of this adaptive hypothesis of filtering out short-lived and spurious signals needs to be compared to that of alternative, nonadaptive hypotheses10. The overrepresentation of C1-FFLs might be a byproduct of some other behavior that was the true target of selection11. Alternatively, it might be an intrinsic property of TRNs generated by mutational processes—gene duplication patterns have been found to enrich for FFLs in general12, although not yet C1-FFLs in particular. Adaptationist claims about TRN organization have been accused of being just-so stories, with adaptive hypotheses still in need of testing against an appropriate null model of network evolution13,14,15,16,17,18,19,20,21,22,23. Here we develop such a computational null model of TRN evolution, and apply it to the case of C1-FFL overrepresentation. We include sufficient realism in our model of cis-regulatory evolution to capture the nonadaptive effects of mutation in shaping TRNs. In particular, we consider weak TF binding sites (TFBSs) that can easily appear de novo by chance alone, and from there be selected to bind a TF more strongly, as well as simulating mutations that duplicate and delete genes. Our TRN model also captures the stochasticity of gene expression, which causes the number of mRNAs and hence proteins to fluctuate24,25. This is important, because demand for spurious signal filtering and hence C1-FFL function may arise not just from external signals, but also from internal fluctuations. Stochasticity in gene expression also shapes how external spurious signals are propagated. Stochasticity is a constraint on what TRNs can achieve, but it can also be adaptively co-opted in evolution26; either way, it might underlie the evolution of certain motifs. Most other computational models of TRN evolution that consider gene expression as the major phenotype do not simulate stochasticity in gene expression (but see three notable exceptions27,28,29). 
Given the potential importance of the details of stochastic and nonstochastic dynamics, we constrain the parameter ranges explored by mutation to values taken from data mostly on Saccharomyces cerevisiae. Different parameter values can cause the same network topology to display different dynamic behaviors; different topologies can also display similar dynamic behaviors21,30,31,32. Here we ask whether AND-gated C1-FFLs evolve as a response to selection for filtering out short and spurious external signals. Our new model allows us to compare the frequencies of network motifs arising in the presence of this hypothesized evolutionary cause to motif frequencies arising under nonadaptive control simulations, i.e. evolution under conditions that lack short spurious external signals while controlling both for mutational biases and for less specific forms of selection. We also ask whether other network motifs evolve to filter out short spurious signals, and if so, whether different conditions favor the appearance of different motifs during evolution.

We simulate the dynamics of TRNs as the TFs activate and repress one another's transcription over a timescale we refer to as "gene expression time". This generates the gene expression phenotypes on which selection acts over longer evolutionary timescales. For each moment in gene expression time, we simulate the numbers of nuclear and cytoplasmic mRNAs in a cell, the protein concentrations, and the chromatin state of each gene in a haploid genome. Transitions between three possible chromatin states—Repressed, Intermediate, and Active—are a stochastic function of TF binding, and transcription initiation from the Active state is also stochastic. An overview of the model is shown in Fig. 1, with details given in the Methods. TF binding to the cis-regulatory sequence of a gene affects chromatin, which affects transcription rates, eventually feeding back to affect the concentration of TFs and hence their binding.
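The stochastic machinery just described can be caricatured with a Gillespie-style simulation of one gene's chromatin cycle. This is a minimal sketch with invented rate constants; in the full model the transition rates depend on TF occupancy:

```python
import random

# Continuous-time Markov chain over the three chromatin states
# (Repressed, Intermediate, Active), with stochastic transcription
# initiation allowed only from the Active state. All rates are
# illustrative, not the paper's parameters.
RATES = {("R", "I"): 0.5, ("I", "R"): 0.3,   # TF binding would modulate these
         ("I", "A"): 0.4, ("A", "I"): 0.2}
INIT_RATE = 2.0                              # initiations per minute when Active

def transcripts(t_end=100.0, seed=1):
    rng, state, t, mrna = random.Random(seed), "R", 0.0, 0
    while True:
        events = [(dst, k) for (src, dst), k in RATES.items() if src == state]
        if state == "A":
            events.append(("init", INIT_RATE))
        total = sum(k for _, k in events)
        t += rng.expovariate(total)          # waiting time to the next event
        if t >= t_end:
            return mrna
        r = rng.uniform(0.0, total)          # choose which event fired
        for dst, k in events:
            r -= k
            if r <= 0.0:
                if dst == "init":
                    mrna += 1                # one new mRNA initiated
                else:
                    state = dst              # chromatin state change
                break
```

At these rates the gene spends a bit over half its time Active, so on the order of a hundred initiations are expected over 100 min, with run-to-run variation that illustrates intrinsic noise.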
Gene expression is further controlled by five gene-specific parameters: mean duration of transcriptional bursts, mRNA degradation rate, protein production rate, protein degradation rate, and gene length (which affects delays in transcription and translation).

Fig. 1: Overview of the model. a Simulation of gene expression phenotypes. We show a simple TRN with one TF (yellow) and one effector gene (blue), with arrows for major biological processes simulated in the model. b Phenotype–fitness relationship. Fitness is primarily determined by the concentration of an effector protein (here shown as beneficial as in Eq. 4, but potentially deleterious in a different environment as in Eq. 5), with a secondary component coming from the cost of gene expression (proportional to the rate of protein production), combined to give an instantaneous fitness at each moment in gene expression time. c Evolutionary simulation. A single resident genotype is replaced when a mutant's estimated fitness is high enough. Stochastic gene expression adds uncertainty to the estimated fitness, allowing less fit mutants to occasionally replace the resident, capturing the flavor of genetic drift.

We model five types of mutations: (1) to the five gene-specific parameters, (2) to the cis-regulatory sequences, (3) to the consensus binding sequences, (4) to the maximum binding affinity of TFs, and (5) duplication/deletion of genes. An external signal (Fig. 1a, red) is treated like another TF, and the concentration of an effector gene (Fig. 1a, blue) in response is a primary determinant of fitness, combined with a cost associated with gene expression (Fig. 1b). Mutants replace resident genotypes as a function of the difference in estimated fitness (Fig. 1c). Parameter values, taken as far as possible from S. cerevisiae, are summarized in Supplementary Table 1. Mutation rates are summarized in Supplementary Table 2.

C1-FFLs must be AND-gated to achieve their putative function.
To allow the regulatory logic to evolve between AND-gated and other regulatory logic, we make effector gene expression require at least two TFBSs to be occupied by activators. An AND-gate is then present when the only way to have two TFs bound is for them to be different TFs (Fig. 2). All other genes are AND-gate-incapable, meaning that their activation requires only one TFBS to be occupied by an activator.

Fig. 2: The distribution of TFBSs determines the regulatory logic of effector expression. We use the pattern of TFBSs (red and yellow bars along black cis-regulatory sequences) to classify the regulatory logic of the effector gene. C1-FFLs are classified first by whether or not they are capable of simultaneously binding the signal and the TF (left 4 vs. right 3; see Supplementary Fig. 2 and Supplementary Methods for details about overlapping TFBSs). Further classification is based on whether either the signal or the TF has multiple nonoverlapping TFBSs, allowing it to activate the effector without help from the other (solid arrow). The three subtypes to the right (where the signal and TF cannot bind simultaneously) are rarely seen; unless otherwise indicated, they are included in "Any logic" and "non-AND-gated" tallies, but are not analyzed separately. Two of them involve emergent repression, creating incoherent feed-forward loops (see Supplementary Fig. 1 for the full FFL naming scheme). Emergent repression occurs when the binding of one activator to its only TFBS prevents the other activator from binding to either of its two TFBSs, hence preventing simultaneous binding of two activators.

We select on the ability to recognize signals. In environment 1, expressing the effector is beneficial, and in environment 2 it is deleterious (see Methods for details). We select for TRNs that take information from the signal and correctly decide whether to express the effector.
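The logic classification of Fig. 2 can be mimicked with a hypothetical helper (a simplification that ignores overlapping TFBSs and binding affinity): since the effector needs two activator-bound sites, an AND gate exists exactly when no single activator owns two nonoverlapping sites.

```python
from collections import Counter

# Classify effector regulatory logic from its nonoverlapping TFBSs,
# given as a list of activator names, one entry per site. This is a
# simplified illustration, not the model's actual scoring code.
def effector_logic(sites):
    counts = Counter(sites)
    solo = sorted(tf for tf, n in counts.items() if n >= 2)  # can act alone
    if len(counts) >= 2 and not solo:
        return "AND-gated"          # two bound sites must be two different TFs
    if solo:
        return "non-AND-gated: " + ", ".join(solo) + " can activate alone"
    return "cannot be activated"    # one lone site can never reach two
```

For example, one site each for the signal and the TF gives an AND gate, while a second signal site lets the signal bypass the TF.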
Fitness is a weighted average across separate gene expression simulations in the two environments and their corresponding presence or absence of signal. In both cases, we begin each gene expression simulation with no signal. In both environments, the signal is turned on after a burn-in period (see Methods), but in environment 2, the signal lasts for only 10 min, with selection to ignore it (Fig. 3).

Fig. 3: Selection for filtering out short spurious signals. Each selection condition averages fitness across simulations in two environments. The effectors have different fitness effects in the two environments, and the signal also behaves differently in the two environments. Simulations begin with zero mRNA and protein, and all genes at the Repressed state (see Methods). Each simulation is burned in for a randomly sampled length of time in the absence of signal (shown here as 10 min in environment 1, and 15 min in environment 2), and continues for another 90 min after the burn-in. The signal is shown in black. Red illustrates a good solution in which the effector responds appropriately in each of the environments, while blue shows an inferior solution. Ne_sat marks the amount of effector protein at which the benefit from expressing the effector in environment 1 becomes saturated, as does the damage in environment 2 (see Methods). See Supplementary Fig. 3 for examples of high-fitness and low-fitness evolved phenotypes, where, as shown in this schematic, high-fitness solutions have longer delays followed by more rapid responses thereafter.

AND-gated C1-FFLs readily evolve as spurious signal filters

We begin by simulating the easiest case we can devise to allow the evolution of C1-FFLs for their purported function of filtering out short spurious signals. The signal is allowed to act directly on the effector, after which all that needs to evolve is a single activating TF between the two, as well as AND-logic for the effector (Fig. 2, leftmost).
We score network motifs at the end of a set period of evolution (see Supplementary Methods for details), further classifying evolved C1-FFLs into subtypes based on the presence of nonoverlapping TFBSs (Fig. 2). The adaptive hypothesis predicts the evolution of the C1-FFL subtype with AND-regulatory logic, which requires the effector to be stimulated both by the signal and by the slow TF. While all evolutionary replicates show large increases in fitness, the extent of improvement varies dramatically, indicating whether or not the replicate was successful at evolving the phenotype of interest rather than becoming stuck at an alternative locally optimal phenotype (Fig. 4a). AND-gated C1-FFLs frequently evolve in replicates that reach high fitness, but not in replicates that reach lower fitness (Fig. 4b).

Fig. 4: AND-gated C1-FFLs are associated with a successful response to selection. a Distribution of fitness outcomes across replicate simulations, calculated as the average fitness over the last 10,000 steps of the evolutionary simulation. We divide genotypes into a low-fitness group (blue) and a high-fitness group (red) using as a threshold an observed gap in the distribution. b High-fitness replicates are characterized by the presence of an AND-gated C1-FFL. "Any logic" counts the presence of any of the seven subtypes shown in Fig. 2b. Because one TRN can contain multiple C1-FFLs of different subtypes, each of which is scored, the sum of the occurrences of all seven subtypes will generally be more than "Any logic". See Supplementary Methods for details on the calculation of the y axis. c The overrepresentation of AND-gated C1-FFLs becomes even more pronounced relative to alternative logic-gating when weak (two-mismatch) TFBSs are excluded while scoring motifs. Data are shown as mean ± s.e.m. of the occurrence over replicate evolution simulations.

We also see C1-FFLs that, contrary to expectations, are not AND-gated.
Non-AND-gated motifs are found more often in low-fitness than high-fitness replicates (Fig. 4b), indicating that the preference for AND-gates is associated with adaptation rather than mutation bias. However, some non-AND-gated motifs are still found even in the high-fitness replicates. This is because motifs and their logic gates are scored on the basis of all TFBSs, even those with two mismatches and hence low binding affinity. Unless these weak TFBSs are deleterious, they will appear quite often by chance alone. A random 8-bp sequence has probability \(\binom{8}{2} \times 0.25^6 \times 0.75^2 = 0.0038\) of being a two-mismatch binding site for a given TF. In our model, a TF has the potential to recognize 137 different sites in a 150-bp cis-regulatory sequence (taking into account steric hindrance at the edges), each with two orientations. Thus, by chance alone a given TF will have 0.0038 × 137 × 2 ≈ 1 two-mismatch binding sites in a given cis-regulatory sequence (ignoring palindromes for simplicity), compared to only ~0.1 one-mismatch TFBSs. Non-AND-gated C1-FFLs mostly disappear when two-mismatch TFBSs are excluded, but the AND-gated C1-FFLs found in high-fitness replicates do not (Fig. 4c).

To confirm the functionality of these AND-gated C1-FFLs, we mutated the evolved genotype in two different ways (Fig. 5a) to remove the AND regulatory logic. As expected, this lowers fitness in the presence of the short spurious signal but increases fitness in the presence of constant signal, with a net reduction in fitness (Fig. 5b). This is consistent with AND-gated C1-FFLs representing a tradeoff, by which a more rapid response to a true signal is sacrificed in favor of the greater reliability of filtering out short spurious signals.

Fig. 5: Destroying the AND-logic of a C1-FFL removes its ability to filter out short spurious signals. a For each of the n = 25 replicates in the high-fitness group in Fig. 4, we perturbed the AND-logic in two ways, by adding one binding site of either the signal or the slow TF to the cis-regulatory sequence of the effector gene. b For each replicate, the fitness of the original motif (blue) or of the perturbed motif (red or orange) was averaged across the subset of evolutionary steps with an AND-gated C1-FFL and lacking other potentially confounding motifs (see Supplementary Fig. 4 and Supplementary Methods for details). Destroying the AND-logic slightly increases the ability to respond to the signal, but leads to a larger loss of fitness when short spurious signals are responded to. Fitness is shown as mean ± s.e.m. over replicate evolutionary simulations.

Adaptive motifs are constrained not only in their topology and regulatory logic, but also in the parameter space of their component genes. In particular, there is selection for rapid synthesis of both effector and TF proteins, as well as rapid degradation of effector mRNA and protein (Supplementary Table 3). Fast effector degradation reduces the transient expression induced by the short spurious signal (Supplementary Fig. 3). Note that we evolved solutions only at the level of transcriptional regulation—even more rapid switching could be achieved by posttranslational modifications, if we had allowed them in our model. To test the extent to which AND-gated C1-FFLs are a specific response to selection to filter out short spurious signals, we simulated evolution under three negative control conditions: (1) no selection, i.e. all mutations are accepted to become the new resident genotype; (2) no spurious signal, i.e. selection to express the effector under a constant ON signal and not under a constant OFF signal (Fig. 6a); (3) harmless spurious signal, i.e. selection to express the effector under a constant ON signal whereas effector expression in the OFF environment with short spurious signals is neither punished nor rewarded beyond the cost of unnecessary gene expression (Fig. 6a).
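The back-of-envelope counts of weak TFBSs quoted above (≈1 two-mismatch and ≈0.1 one-mismatch sites per TF per cis-regulatory sequence) can be checked numerically, using the numbers given in the text: an 8-bp consensus, per-position match probability 1/4, 137 usable positions in 150 bp, and two orientations.

```python
from math import comb

# Expected number of binding sites with a given number of mismatches
# per cis-regulatory sequence, palindromes ignored as in the text.
def expected_sites(mismatches, length=8, positions=137, orientations=2):
    p = comb(length, mismatches) * 0.25 ** (length - mismatches) * 0.75 ** mismatches
    return p * positions * orientations
```

Here `expected_sites(2)` comes out just above 1 and `expected_sites(1)` close to 0.1, matching the figures in the text.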
AND-gated C1-FFLs evolve much less often under all three negative control conditions (Fig. 6b, c), showing that their prevalence is a consequence of selection for filtering out short spurious signals, rather than a consequence of mutational bias and/or simpler forms of selection. C1-FFLs that do evolve under control conditions tend not to be AND-gated (Fig. 6b), and mostly disappear when weak TFBSs are excluded during motif scoring (Fig. 6c).

Fig. 6: Selection for filtering out short spurious signals is the primary cause of C1-FFLs. TRNs are evolved under different selection conditions, and we score the probability that at least one C1-FFL is present (see Supplementary Methods). Schematics of selection, in which fitness is averaged with weights 2:1 over environment 1:2, are shown in (a). The effector is deleterious in environment 2 except in the "harmless" and "no selection" conditions. Weak (two-mismatch) TFBSs are included in (b) and are excluded in (c) during motif scoring. Data are shown as mean ± s.e.m. over evolutionary replicates. C1-FFL occurrence is similar for high-fitness and low-fitness outcomes in control selective conditions (Supplementary Fig. 5), and so all evolutionary outcomes were combined. "Spurious signal filter required (high fitness)" uses the same data as in Fig. 4.

More complex networks also evolve diamond motifs

In real biological situations, sometimes the source signal will not be able to directly regulate an effector, and must instead operate via a longer regulatory pathway involving intermediate TFs33. In this case, even if the signal itself takes the idealized form shown in Fig. 3, its shape after propagation may become distorted by the intrinsic processes of transcription. Motifs are under selection to handle this distortion. To enforce indirect regulation, we ran simulations in which the signal was only allowed to bind to the cis-regulatory sequences of TFs and not of effector genes.
The fitness distribution of the evolutionary replicates has no obvious gaps (Supplementary Fig. 6), so we compared the highest fitness, lowest fitness, and median fitness replicates. In agreement with results when direct regulation is allowed, genotypes of low and medium fitness contain few AND-gated C1-FFLs, while high-fitness genotypes contain many more (Fig. 7b, left and right).

Fig. 7: AND-gated C1-FFLs and diamonds are associated with high fitness in complex networks. Out of 238 simulations (Supplementary Fig. 6), we took the 30 with the highest fitness (H), the 30 with the lowest fitness (L), and 30 of around median fitness (M). AND-gated motifs are scored while including weak TFBSs in the effectors' cis-regulatory regions; near-AND-gated motifs are those scored only when these weak TFBSs are excluded. a Diagrams of enriched motifs when weak TFBSs are included. It is possible for the same genotype to contain one of each, resulting in overlap between the red AND-gated columns and the dotted near-AND-gated columns. Weak TFBSs upstream in the TRN, i.e. not in the effector, are shown both included (b) and excluded (c). See Supplementary Methods for y-axis calculation details. Error bars show mean ± s.e.m. of the proportion of evolutionary steps containing the motif in question, across replicate evolutionary simulations.

While visually examining the network context of these C1-FFLs, we discovered that many were embedded within AND-gated "diamonds". In a diamond, the signal activates the expression of two genes that encode different TFs, and the two TFs activate the expression of an effector gene (Fig. 7a, middle). When one of the two TF genes activates the other, then a C1-FFL is also present among the same set of genes; we call this topology a "FFL-in-diamond" (Fig. 7a, right), and the prevalence of this configuration drew our attention toward diamonds.
This led us to discover that AND-gated diamonds also occurred frequently without AND-gated C1-FFLs, in the configuration we call "isolated diamonds" (Fig. 7a, middle). Note that it is in theory possible, but in practice uncommon, for diamonds to be part of more complex conjugates. Systematically scoring the AND-gated isolated diamond motif confirmed its high occurrence (Fig. 7b, c, middle). AND-gated isolated C1-FFLs appear mainly in the highest fitness outcomes, while AND-gated isolated diamonds appear in all fitness groups (Fig. 7c), suggesting that diamonds are easier to evolve.

An AND-gated C1-FFL integrates information from a short/fast regulatory pathway with information from a long/slow pathway, in order to filter out short spurious signals. A diamond achieves the same end of integrating fast and slowly transmitted information via differences in the gene expression dynamics of the two regulatory pathways, rather than via topological length (Fig. 8). The fast and slow pathways could be distinguished in a number of ways, e.g. by the slope at which the transcription factor concentration increases or the time at which it exceeds a threshold or plateaus. We found it convenient to identify the "fast TF" as the one with the higher protein degradation rate. Specifically, we use the geometric mean of the protein degradation rate over gene copies of a TF in order to differentiate the two TFs.

Fig. 8: The two TFs in an AND-gated diamond propagate the signal at different speeds. Expression of the two TFs in one representative genotype from the one high-fitness evolutionary replicate in Fig. 7b that evolved an AND-gated isolated diamond is shown. Both the slow TF and the fast TF are encoded by three gene copies, shown separately in color, with the total for each TF in thick black. The expression of one TF plateaus faster than that of the other; this is characteristic of the AND-gated diamond motif, and leads to the same functionality as the AND-gated C1-FFL.

The parameter values of the fast TF are more evolutionarily constrained than those of the slow TF (Supplementary Table 4). In particular, there is selection for rapid degradation of the fast TF protein and mRNA (Supplementary Table 4). Isolated AND-gated C1-FFLs also show pronounced selection for the TF in the fast pathway to have rapid protein degradation (Supplementary Table 5). Fast-degrading mRNA and proteins are rare (Supplementary Table 1), suggesting that mutational biases might make them difficult to evolve. But even when they do evolve, fast degradation keeps the fast TF at low concentrations. To compensate, the fast TF must overcome mutational bias to also evolve high binding affinity and rapid protein synthesis (Supplementary Tables 4 and 5).

Perturbation analysis supports an adaptive function for AND-gated C1-FFLs and diamonds evolved under indirect regulation (Fig. 9a, b). Breaking the AND-gate logic of these motifs by adding a TFBS to the effector cis-regulatory region reduces the fitness under the spurious signal but increases it under the constant ON signal, resulting in a net decrease in the overall fitness. Note that a simple transcriptional cascade, signal → TF → effector, has also been found experimentally to filter out short spurious signals34; in Supplementary Note 1, we argue that diamonds are not by-products of selection for cascades, but are the direct target of selection.

Fig. 9: Isolated C1-FFLs and diamonds rely on AND gates to filter out short spurious signals. We add a TFBS of either the fast TF or the slow TF to break the AND gate. This slightly increases the ability to respond to the signal, but leads to a larger loss of fitness when effector expression is undesirable. We perform the perturbation on a 8 of the 18 high-fitness replicates from Fig. 7b that evolved an AND-gated C1-FFL, b 4 of the 26 high-fitness replicates that evolved an AND-gated diamond in Fig. 7b, and c 15 of the 37 replicates that evolved an AND-gated diamond in response to selection for signal recognition in the absence of an external spurious signal (Fig. 10b). Replicate exclusion was based on the co-occurrence of other motifs with the potential to confound results (see Supplementary Methods for details). Fitness is shown as mean ± s.e.m. over replicate evolutionary simulations, calculated as described for Fig. 5.

Weak TFBSs change how motifs are scored

Results depend on whether we include weak TFBSs when scoring motifs. Weak TFBSs can either be in the effector's cis-regulatory region, affecting how the regulatory logic is scored, or in TFs upstream in the TRN, affecting only the presence or absence of motifs. When we exclude upstream weak TFBSs while scoring motifs, FFL-in-diamonds are no longer found, while the occurrence of isolated C1-FFLs and diamonds increases (Fig. 7c). This makes sense, because adding one weak TFBS, which can easily happen by chance alone, can convert an isolated diamond or C1-FFL into a FFL-in-diamond (added between intermediate TFs, or from signal to slow TF, respectively). When a motif is scored as AND-gated only when two-mismatch TFBSs in the effector are excluded, we call it a "near-AND-gated" motif. TFs may bind so rarely to a weak affinity TFBS that the presence of the weak TFBS changes little, making the regulatory logic still effectively AND-gated. A near-AND-gated motif may therefore evolve for the same adaptive reasons as an AND-gated one. Figure 7b, c shows that both AND-gated and near-AND-gated motifs are enriched in the higher-fitness genotypes. There is more likely to be a weak affinity TFBS for the fast TF than the slow TF, and adding one does less harm (see Supplementary Note 2 and Supplementary Fig. 7).
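The fast-versus-slow TF classification described earlier (geometric mean of the protein degradation rate over a TF's gene copies) can be sketched as follows, with invented rate values; under first-order kinetics the faster-degrading TF also reaches its expression plateau sooner, as in Fig. 8.

```python
import math

# Hypothetical per-copy protein degradation rates (1/min) for the two
# TFs of a diamond; the values are invented for illustration.
slow_tf = [0.10, 0.15, 0.12]
fast_tf = [0.80, 1.00, 0.90]

def geo_mean(rates):
    return math.exp(sum(math.log(r) for r in rates) / len(rates))

def time_to_plateau(d, frac=0.95):
    # time for p(t) = (k/d)(1 - e^(-d t)) to reach frac of its plateau k/d
    return -math.log(1.0 - frac) / d

# The TF with the larger geometric-mean degradation rate is the "fast"
# TF, and it plateaus sooner.
fast_first = time_to_plateau(geo_mean(fast_tf)) < time_to_plateau(geo_mean(slow_tf))
```

With these numbers the fast TF plateaus in a few minutes while the slow TF takes tens of minutes, which is the dynamical asymmetry the diamond exploits.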
Diamonds also evolve without external spurious signals

We simulated evolution under the same three control conditions as before, this time without allowing the signal to directly regulate the effector. When weak (two-mismatch) TFBSs are excluded, AND-gated isolated C1-FFLs are seen only after selection for filtering out a spurious signal, and not under other selection conditions (Fig. 10a). However, AND-gated isolated diamonds also evolve in the absence of spurious signals, indeed at even higher frequency (Fig. 10b). Results including weak TFBSs are similar (Supplementary Fig. 10). AND-gated diamonds, but not AND-gated C1-FFLs, also evolve in negative controls. (a) Selection for filtering out a short spurious signal is the primary way to evolve AND-gated isolated C1-FFLs, but (b) AND-gated isolated diamonds also evolve in the absence of spurious signals. The selection conditions are the same as in Fig. 6, but we do not allow the signal to directly regulate the effector. In the "no spurious signal" and "harmless spurious signal" control conditions, motif frequencies are similar between low- and high-fitness genotypes (Supplementary Figs. 8 and 9), and so our analysis includes all evolutionary replicates. When scoring motifs, we exclude all two-mismatch TFBSs; results with them included, and for FFL-in-diamonds, are shown in Supplementary Fig. 10. Many non-AND-gated diamonds have the "no regulation" logic in Fig. 2, perhaps as an artifact created by the duplication and divergence of intermediate TFs; we excluded them from the "Any logic" and "Non-AND-gated" tallies in (b). See Supplementary Methods for the calculation of the y-axis. Data are shown as mean ± s.e.m. over evolutionary replicates. We reused data from Fig. 7 for "Spurious signal filter required (high fitness)".

Perturbing the AND-gate logic in isolated diamonds evolved in the absence of spurious external signals reduces fitness via effects in the environment where expressing the effector is deleterious (Fig. 9c).
The stochastic expression of intermediate TFs might effectively create short intrinsic spurious signals when the external signal is set to OFF. It seems that AND-gated diamonds evolve to mitigate this risk, but that AND-gated C1-FFLs do not. This may be because C1-FFLs delay the expression of effectors more than diamonds do, since the fast TF must first be translated in order to turn on the slow TF. Delays are costly when expression is beneficial, and unnecessary to filter out a very short signal; because internally generated spurious signals have an exponential distribution, most are short35. Alternatively, the advantage of diamonds might be that spurious effector expression requires both TFs to be accidentally and independently expressed, whereas spurious TF expression in AND-gated C1-FFLs is not independent because the fast TF can induce the slow TF36.

There has never been sufficient evidence to satisfy evolutionary biologists that motifs in TRNs represent adaptations for particular functions. Critiques by evolutionary biologists to this effect13,14,15,16,17,18,19,20,21,22,23 have been neglected, rather than answered, until now. While C1-FFLs can be conserved across different species37,38,39,40, this does not imply that specific "just-so" stories about their function are correct. In this work, we study the evolution of AND-gated C1-FFLs, which are hypothesized to be adaptations for filtering out short spurious signals3. Using a novel and more mechanistic computational model to simulate TRN evolution, we found that AND-gated C1-FFLs evolve readily under selection for filtering out a short spurious signal, and not under control conditions. Our results support the adaptive hypothesis about C1-FFLs. It is difficult to distinguish adaptations from "spandrels"8. Standard procedure is to look for motifs that are more frequent than expected from some randomized version of a TRN2,41.
For this method to work, this randomization must control for all confounding factors that are nonadaptive with respect to the function in question, from patterns of mutation to a general tendency to hierarchy—a near-impossible task. Our approach to a null model is not to randomize, but to evolve with and without selection for the specific function of interest. This meets the standards of evolutionary biology for inferring the adaptive nature of a motif13,14,15,16,17,18,19,20,21,22,23. AND-gated C1-FFLs express an effector after a noise-filtering delay when the signal is turned on, but shut down expression immediately when the signal is turned off, giving rise to a "sign-sensitive delay"3,7. Rapidly switching off has been hypothesized to be part of their selective advantage, above and beyond the function of filtering out short spurious signals35. We intended to select only for filtering out a short spurious signal, and not for fast turn-off; specifically, we expected effector expression to evolve a delay equal to the duration of the spurious signal. However, evolved solutions still expressed the effector in the presence of short spurious signals (Supplementary Fig. 3), and thus benefitted from rapidly turning off this spurious expression. In other words, we effectively selected for both delayed turn-on and rapid turn-off, despite our intent to only select for the former. Previous studies have also attempted to evolve adaptive motifs in a computational TRN, successfully under selection for circadian rhythm and for multiple steady states42, and unsuccessfully under selection to produce a sine wave in response to a periodic pulse23. Other studies have evolved adaptive motifs in a mixed network of transcriptional regulation and protein–protein interaction43,44,45. Our successful simulation might offer some methodological lessons, especially a focus on high-fitness evolutionary replicates, which was done by us and by Burda et al.42 but not by Knabe et al.23. 
Knabe et al.23 suggested that including a cost for gene expression may suppress unnecessary links and thus make it easier to score motifs. However, when we removed the cost-of-gene-expression term (C(t) = 0, see Methods), AND-gated C1-FFLs still evolved in the high-fitness genotypes under selection for filtering out a spurious signal (Supplementary Fig. 11). In our model, then, removing the cost of gene expression did not conceal motifs by permitting unnecessary links.

While simplified relative to reality, our model is undeniably complicated. An important question is which complications are important for what. One complication is our nucleotide-sequence-level model of cis-regulatory sequences. This has the advantage of capturing weak TFBSs, realistic turnover, and other mutational biases. The disadvantage is that calculating the probabilities of TF binding is computationally expensive and scales badly with network size. Future work might design a more schematic model of cis-regulatory sequences to improve computation while still capturing realistic mutation biases. A second complication of our approach is the stochastic simulation of gene expression. This is essential for our question, because intrinsic noise in gene expression can mimic the effects of a spurious signal, but may be less important in other scenarios in which the focus is on steady-state behavior. Our model, while complex as models go, and hence capable of capturing intrinsic noise, is inevitably less complex than the biological reality. However, we hope to have captured key phenomena, albeit in simplified form. One key phenomenon is that TFBSs are not simply present vs. absent but can be strong or weak, i.e. the TRN is not just a directed graph, but its connections vary in strength. Our model, like that of Burda et al.42 in the context of circadian rhythms, captures this fact by basing TF binding affinity on the number of mismatch deviations from a consensus TFBS sequence.
In reality, the strength of TF binding is determined by additional factors, such as broader nucleotide context and cooperative behavior between TFs (reviewed in Inukai et al.46); however, these complications are unlikely to change the basic dynamics: the frequent appearance of weak TFBSs, and the greater mutational accessibility of strong TFBSs from weak TFBSs than de novo. Similarly, AND-gating can be quantitative rather than qualitative47, a phenomenon that weak TFBSs in our model provide a simplified version of. Core links in adaptive motifs almost always involve strong, not weak, TFBSs. However, weak (two-mismatch) TFBSs can create additional links that prevent an adaptive motif from being scored as such. Some potential additional links are neutral while others are deleterious; the observed links are thus shaped by this selective filter, without being adaptive. Note that there have been experimental reports that even weak TFBSs can be functionally important48,49; these might, however, better correspond to one-mismatch TFBSs in our model than to two-mismatch TFBSs. Ramos and Barolo49 and Crocker et al.48 identified their "weak" TFBSs in comparison to the strongest possible TFBS, not in comparison to the weakest still showing affinity above baseline.

A striking and unexpected finding of our study was that AND-gated diamonds evolved as an alternative motif for filtering out short spurious external signals, and that these, unlike FFLs, were also effective at filtering out intrinsic noise. Multiple motifs have previously been found capable of generating the same steady-state expression pattern21; here we find multiple motifs for a much more complex function. Diamonds are not overrepresented in the TRNs of bacteria2 or yeast50, but are overrepresented in signaling networks (in which posttranslational modification plays a larger role)51, and in neuronal networks1.
In our model, we treated the external signal as though it were a transcription factor, simply as a matter of modeling convenience. In reality, signals external to a TRN are by definition not TFs (although they might be modifiers of TFs). This means that our indirect regulation case, in which the signal is not allowed to directly turn on the effector, is the most appropriate one to analyze if our interest is in TRN motifs that mediate contact between the two. Note that if under indirect regulation we were to score the signal as not itself a TF, we would observe adaptive C1-FFLs but not diamonds, in agreement with the TRN data. However, these TRN data might miss functional diamond motifs that spanned levels of regulatory organization, i.e. that included both transcriptional and other forms of regulation. The greatest chance of finding diamonds within TRNs alone comes from complex and multilayered developmental cascades, rather than from bacteria or yeast52. Multiple interwoven diamonds are hypothesized to be embedded within multilayer perceptrons that are adaptations for complex computation in signaling networks53.

Previous work has also identified alternatives to AND-gated C1-FFLs. Specifically, in mixed networks of transcriptional regulation and protein−protein interactions, FFLs did not evolve under selection for delayed turn-on (as well as rapid turn-off)45. Indeed, even when an FFL topology was enforced, with only the parameters allowed to evolve, two alternative motifs remained superior45. However, one alternative motif, which the authors called "positive feedback", is essentially still an AND-gated C1-FFL, specifically one in which the intermediate TF expression is also AND-gated, requiring both itself and the signal for upregulation. The other is a cascade in which the signal inhibits the expression of an intermediate TF protein that represses the expression of the effector.
The cost of constitutive expression of the intermediate TF in the absence of the signal was not modeled45, giving this cascade an unrealistic advantage. Most previous research on C1-FFLs has used an idealized implementation (e.g. a square wave) of what a short spurious signal entails4,35,54. In real networks, noise arises intrinsically in a greater diversity of forms, which our model does more to capture. Even when a "clean" form of noise enters a TRN, it subsequently gets distorted with the addition of intrinsic noise55. Intrinsic noise is ubiquitous and dealing with it is an omnipresent challenge for selection. Indeed, we see adaptive diamonds evolve to suppress intrinsic noise, even when we select in the absence of extrinsic spurious signals. The function of a motif relies ultimately on its dynamic behavior, with topology merely a means to that end. To create two pathways that regulate the effector at different speeds, the C1-FFL motif uses a pair of short and long pathways, but these also correspond to fast-degrading and slow-degrading TFs. This same function was achieved entirely nontopologically in our adaptively evolved diamond motifs. This agrees with other studies showing that topology alone is not enough to infer activities such as spurious signal filtering from network motifs30,31,32.

Transcription of each gene is controlled by TFBSs present within a 150-bp cis-regulatory region. When bound, a TF occupies a stretch of DNA 14 bp long (Supplementary Fig. 2). In the center of this stretch, each TF recognizes an 8-bp consensus sequence (Supplementary Fig. 2), and binds to it with a TF-specific (and mutable) dissociation constant Kd(0). TFs also bind somewhat specifically when there are one or two mismatches, with Kd(1) and Kd(2) values calculated from Kd(0) according to a model of approximately additive binding energy per base pair. With three mismatches, binding occurs at the same background affinity as to any 14-bp stretch of DNA.
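The mismatch-affinity scheme just described can be sketched in a few lines. The per-mismatch penalty factor and the background affinity below are hypothetical placeholders (the paper derives Kd(1) and Kd(2) from an approximately additive binding-energy model detailed in the Supplementary Methods); the ≥3-mismatch cutoff and the tenfold correction for competition with nonspecific binding (\(\hat K_{\mathrm{d}} = 10K_{\mathrm{d}}\), described below) are taken from the text:

```python
# Sketch of the mismatch-dependent binding model. PENALTY and KD_BACKGROUND
# are assumed placeholder values, not the paper's parameterization.
PENALTY = 10.0          # assumed fold-increase in Kd per mismatch
KD_BACKGROUND = 1e-3    # assumed background Kd for >= 3 mismatches

def kd(kd0, mismatches):
    """Dissociation constant of a site with the given number of mismatches.

    Additive binding energy per base pair means each mismatch multiplies
    Kd by a constant factor; with >= 3 mismatches binding falls to the
    background affinity of arbitrary DNA.
    """
    if mismatches >= 3:
        return KD_BACKGROUND
    return kd0 * PENALTY ** mismatches

def effective_kd(kd0, mismatches):
    """Competition with nonspecific genomic DNA weakens specific binding,
    approximated in the text as hat-Kd = 10 * Kd."""
    return 10.0 * kd(kd0, mismatches)
```

With these placeholder numbers, a one-mismatch site is tenfold weaker than the consensus site, and the effective constant used for occupancy calculations is always an order of magnitude above the bare one.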
We model competition between a smaller number of specific higher-affinity binding sites and the much larger number of nonspecific binding sites, the latter corresponding to the total amount of nucleosome-free sequence in S. cerevisiae. Competition with nonspecific binding can be approximated by using an effective dissociation constant \(\hat K_{\mathrm{d}} = 10K_{\mathrm{d}}\). See Supplementary Methods for justification and details of these model choices. Each TF is either an activator or a repressor. The algorithm for obtaining the probability distribution for A activators and R repressors being bound to a given cis-regulatory region at a given moment in gene expression time is described in the Supplementary Methods. The signal is treated as though it were an activating TF whose concentration is controlled externally, with an OFF concentration of zero and an ON concentration of 1000 molecules per cell, which is the typical per-cell number of a yeast TF56. PA denotes the probability of having at least one activator bound for an AND-gate-incapable gene, or two for an AND-gate-capable gene. PR denotes the probability of having at least one repressor bound. Noise in yeast gene expression is well described by a two-step process of transcriptional activation57,58, e.g. nucleosome disassembly followed by transcription machinery assembly. We denote the three corresponding possible states of the transcription start site as Repressed, Intermediate, and Active (Fig. 1a). Transitions between the states depend on the numbers of activator and repressor TFs bound (e.g. via recruitment of histone-modifying enzymes59,60). 
We make conversion from Repressed to Intermediate a linear function of PA, ranging from the background rate 0.15 min−1 of histone acetylation61 (presumed to be followed by nucleosome disassembly), to the rate of nucleosome disassembly 0.92 min−1 for the constitutively active PHO5 promoter57: $$r_{\mathrm{Rep\_to\_Int}} = 0.92\,P_{\mathrm{A}} + 0.15\left(1 - P_{\mathrm{A}}\right).$$ We make conversion from Intermediate to Repressed a linear function of PR, ranging from a background histone de-acetylation rate of 0.67 min−1 (ref. 61), up to a maximum of 4.11 min−1 (the latter chosen so as to keep a similar maximum:basal rate ratio as that of rRep_to_Int): $$r_{\mathrm{Int\_to\_Rep}} = 4.11\,P_{\mathrm{R}} + 0.67\left(1 - P_{\mathrm{R}}\right).$$ We assume that repressors disrupt the assembly of transcription machinery62 to such a degree that conversion from Intermediate to Active does not occur if even a single repressor is bound. In the absence of repressors, activators facilitate the assembly of transcription machinery63. Brown et al.57 reported that the rate of transcription machinery assembly is 3.3 min−1 for a constitutively active PHO5 promoter, and 0.025 min−1 when the PHO4 activator of the PHO5 promoter is knocked out. We use this range to set $$r_{\mathrm{Int\_to\_Act}} = 3.3\,P_{\mathrm{A\_no\_R}} + 0.025\,P_{\mathrm{notA\_no\_R}},$$ where PA_no_R is the probability of having no repressors and either one (for an AND-gate-incapable gene) or two (for an AND-gate-capable gene) activators bound, and PnotA_no_R is the probability of having no TFs bound (for AND-gate-incapable genes) or having no repressors and not more than one activator bound (for AND-gate-capable genes). The promoter sequence not only determines which specific TFBSs are present, but also influences nonspecific components of the transcriptional machinery64,65.
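The three chromatin transition rates above can be transcribed directly as functions of the binding probabilities; a minimal sketch using the constants quoted in the text (all rates in min−1):

```python
def r_rep_to_int(p_a):
    # Repressed -> Intermediate: linear in P_A, from the background histone
    # acetylation rate 0.15/min up to 0.92/min (constitutively active PHO5).
    return 0.92 * p_a + 0.15 * (1.0 - p_a)

def r_int_to_rep(p_r):
    # Intermediate -> Repressed: linear in P_R, from the background histone
    # de-acetylation rate 0.67/min up to a maximum of 4.11/min.
    return 4.11 * p_r + 0.67 * (1.0 - p_r)

def r_int_to_act(p_a_no_r, p_nota_no_r):
    # Intermediate -> Active: 3.3/min weighted by the probability of enough
    # activators and no repressor bound, 0.025/min weighted by the
    # probability of no activating configuration (and no repressor).
    return 3.3 * p_a_no_r + 0.025 * p_nota_no_r
```

Each rate interpolates linearly between an empirically measured basal value and an empirically measured maximal value, so the binding probabilities PA, PR, PA_no_R, and PnotA_no_R are the only inputs the regulatory state supplies.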
We capture this via gene-specific but TF-binding-independent rates rAct_to_Int with which the machinery disassembles and a burst of transcription ends. In other words, we let TF binding regulate the frequency of bursts of transcription, while other properties of the cis-regulatory region regulate their duration. For example, the yeast transcription factor PHO4 regulates the frequency but not the duration of bursts of PHO5 expression, by regulating the rates of nucleosome removal and of transition to, but not from, a transcriptionally active state57. Parameterization of rAct_to_Int is described in the Supplementary Methods.

mRNA and protein dynamics

All genes in the Active state initiate new transcripts stochastically at rate rmax_transc_init = 6.75 mRNA per min57, while the time for completing transcription depends on gene length (see Supplementary Methods for parameterization of gene length and associated delay times). We model a second delay before a newly completed transcript produces the first protein, which we assume is dominated by translation initiation (length-independent) plus elongation (length-dependent) and not splicing or mRNA export (see Supplementary Methods). After the second delay, we model protein production as continuous at a gene-specific rate rprotein_syn (see Supplementary Methods). Protein transport into the nucleus is rapid66 and is approximated as instantaneous and complete, so that the newly produced protein molecules immediately increase the probability of TF binding. Each gene has its own mRNA and protein decay rates, initialized from distributions taken from data (see Supplementary Methods). All the rates regarding transcription and translation are listed in Supplementary Table 1, including distributions estimated from data, and hard bounds imposed to prevent unrealistic values arising during evolutionary simulations.

Gene expression simulation

Our algorithm is part stochastic, part deterministic.
We use a Gillespie algorithm to simulate stochastic transitions between Repressed, Intermediate, and Active chromatin states, and to simulate transcription initiation and mRNA decay events. Fixed (i.e. deterministic) delay times are simulated between transcription initiation and completion, and between transcript completion and the production of the first protein. Protein production and degradation are described deterministically with ODEs. We update protein production rates frequently in order to recalculate TF concentrations and hence chromatin transition rates, limiting the magnitude of errors in the simulation (Supplementary Fig. 12). Details of our simulation algorithm are given in the Supplementary Methods. We initialize gene expression simulations with no mRNA or protein, and all genes in the Repressed state.

We make fitness quantitative in terms of a "benefit" B(t) as a function of the amount of effector protein Ne(t) at gene expression time t. Our motivation is a scenario in which the effector protein is responsible for directing resources from a metabolic program favored in environment 2 to a metabolic program favored in environment 1. In environment 1, where the effector produces benefits, $$B(t) = \begin{cases} b_{\mathrm{max}}\dfrac{N_{\mathrm{e}}(t)}{N_{\mathrm{e\_sat}}}, & N_{\mathrm{e}}(t) < N_{\mathrm{e\_sat}} \\ b_{\mathrm{max}}, & N_{\mathrm{e}}(t) \ge N_{\mathrm{e\_sat}}, \end{cases}$$ where bmax is the maximum benefit if all resources were redirected, and Ne_sat is the minimum amount of effector protein needed to achieve this.
Similarly, in environment 2 $$B(t) = \begin{cases} b_{\mathrm{max}} - b_{\mathrm{max}}\dfrac{N_{\mathrm{e}}(t)}{N_{\mathrm{e\_sat}}}, & N_{\mathrm{e}}(t) < N_{\mathrm{e\_sat}} \\ 0, & N_{\mathrm{e}}(t) \ge N_{\mathrm{e\_sat}}. \end{cases}$$ We set Ne_sat to 10,000 molecules, which is about the average number of molecules of a metabolism-associated protein per cell in yeast56. Without loss of generality, given that fitness is relative, we set bmax to 1. A second contribution to fitness comes from the cost of gene expression C(t) (Fig. 1b, middle). We make this cost proportional to the total protein production rate. We estimate a fitness cost of gene expression of 2 × 10−6 per protein molecule translated per minute, based on the cost of expressing a nontoxic protein in yeast67 (see Supplementary Methods for details). To ensure that gene expression changes in response to the signal, and not via an internal timer, we simulate a burn-in phase with duration drawn from an exponential distribution truncated at 30 min, with an untruncated mean of 10 min. By having no fitness effects of gene expression during the burn-in, we eliminate a significant source of noise in fitness estimation due to variable burn-in duration. In our control condition, at the end of the burn-in, the signal suddenly switches to a constant ON level in environment 1, and remains OFF in environment 2. We simulate gene expression for 90 min plus the duration of the burn-in (Fig. 3). A "cellular fitness" in a given environment is calculated as the average instantaneous fitness B(t) − C(t) over the 90 min. We consider environment 2 to be twice as common as environment 1 (a signal should be for an uncommon event rather than the default), and take the corresponding weighted average.
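The benefit function in the two environments, and the 2:1 weighting across environments, can be sketched as follows, using Ne_sat = 10,000 and bmax = 1 as stated above; the cost term C(t) and the time-averaging over the 90-min window are omitted here:

```python
B_MAX = 1.0          # b_max, set to 1 without loss of generality
NE_SAT = 10_000      # N_e_sat, molecules of effector protein

def benefit(ne, environment):
    """Instantaneous benefit B(t) given the effector copy number ne."""
    frac = min(ne / NE_SAT, 1.0)   # saturates at N_e_sat
    if environment == 1:
        return B_MAX * frac          # effector redirects resources usefully
    return B_MAX * (1.0 - frac)      # environment 2: effector is deleterious

def overall_fitness(f_env1, f_env2):
    """Weighted average: environment 2 is twice as common as environment 1."""
    return (f_env1 + 2.0 * f_env2) / 3.0
```

The two branches of each piecewise definition collapse into a single saturating expression once the effector fraction is capped at 1.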
Evolutionary simulation

We simulate a novel version of origin-fixation (weak-mutation-strong-selection) evolutionary dynamics, i.e. the population contains only one resident genotype at any time, and mutant genotypes are either rejected or chosen to be the next resident (Fig. 1c). Even though our mutant acceptance rule (see below) was chosen to maximize computational efficiency, our model usually takes 1–3 days on 10 CPUs to complete an evolutionary simulation. We note that genetic homogeneity entails ignoring some important population genetic phenomena. First, if there were recombination, heterogeneity would favor mutations that combine well with a range of other genotypes. Second, clonal interference would shift evolution toward beneficial mutations of larger effect68 (an effect we can mimic by modifying the value 10−8 in Eq. 6). Third, polymorphic populations would evolve mutational robustness69. None of these three effects seems a priori likely to change our conclusions, although the possibility cannot be ruled out. Estimators \(\hat F\) of genotype fitness are averages of the cellular fitness values of 200 replicate simulations of gene expression per environment in the case of the mutant, plus an additional 800 should it be chosen to be the next resident. The mutant replaces the resident if $$\frac{\hat F_{\mathrm{mutant}} - \hat F_{\mathrm{resident}}}{|\hat F_{\mathrm{resident}}|} \ge 10^{-8}.$$ This differs from Kimura's70 equation for fixation probability, but captures the flavor of genetic drift. Genetic drift allows slightly deleterious mutations to occasionally fix, and beneficial mutations to sometimes fail to do so, even as the probability of fixation is monotonic with fitness. This is also achieved by our procedure, because of stochastic deviations of \(\hat F\) from true genotype fitness. The number of gene expression simulation replicates captures the flavor of effective population size.
Note that it is possible, especially at the beginning of an evolutionary simulation, for relative fitness to be paradoxically negative. This occurs when a randomly initialized genotype does not express the effector (garnering no fitness benefit), but does express other genes (accruing a cost of expression); this combination makes fitness negative. In this rare case, for simplicity, we use the absolute value of \(\hat F\) in the denominator. Evolutionary simulations would require much more computation if we used a classic Wright–Fisher or Moran individual-based model, e.g. of population size 1000. Our scheme ensures that all mutations are evaluated by at least 200 gene expression simulations, making the probability of fixation of a beneficial mutation much higher than the O(s) of an individual-based model. An individual-based model would also require more than the 1000 gene expression simulations required by our scheme per successful selective sweep. If 2000 successive mutants are all rejected, the simulation is terminated; upon inspection, we found that these resident genotypes had evolved to not express the effector in either environment. We refer to each change in resident genotype as an evolutionary step. We stop the simulation after 50,000 evolutionary steps; at this time, most replicate simulations seem to have reached a fitness plateau (Supplementary Fig. 13); we analyze all replicates except those terminated early. To reduce the frequency of early termination in the case where the signal was not allowed to directly regulate the effector, we used a burn-in phase selecting on a more accessible intermediate phenotype (see Supplementary Methods). In this case, burn-in occurred for 1000 evolutionary steps, followed by the usual 50,000 evolutionary steps with selection for the phenotype of interest (Supplementary Fig. 13, right panels).
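The acceptance rule, including the absolute value that handles the rare negative-fitness case, can be stated compactly; a sketch:

```python
THRESHOLD = 1e-8   # the 10^-8 cutoff from the acceptance rule above

def mutant_accepted(f_mutant, f_resident):
    """Origin-fixation acceptance rule for the evolutionary simulation.

    The absolute value in the denominator handles the rare case of a
    negative resident fitness (effector unexpressed, but expression
    costs still accrued); f_resident is assumed nonzero.
    """
    return (f_mutant - f_resident) / abs(f_resident) >= THRESHOLD
```

Because \(\hat F\) is itself a noisy estimate from a finite number of gene expression replicates, this deterministic-looking threshold still lets slightly deleterious mutants fix occasionally, mimicking drift.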
Most replicates found a stable fitness plateau within 10,000 evolutionary steps, although some replicates were temporarily trapped at a low-fitness plateau (Supplementary Fig. 13).

Genotype initialization

We initialize genotypes with three activator genes, three repressor genes, and one effector gene. Cis-regulatory sequences and consensus binding sequences contain As, Cs, Gs, and Ts sampled with equal probability. Rate constants associated with the expression of each gene are sampled from the distributions summarized in Supplementary Table 1.

A genotype is subjected to five broad classes of mutation, at rates summarized in Supplementary Table 2 and justified in the Supplementary Methods. First are single-nucleotide substitutions in the cis-regulatory sequence; the resident nucleotide mutates into one of the other three types of nucleotides with equal probability. Second are single-nucleotide changes to the consensus binding sequence of a TF (including the signal), with the resident nucleotide mutated into recognizing one of the other three types with equal probability. Both of these types of mutation can affect the number and strength of TFBSs. Third are gene duplications or deletions. Because computational cost scales steeply (and nonlinearly) with network size, we do not allow effector genes to duplicate once there are five copies, nor TF genes to duplicate once the total number of TF gene copies is 19. We also do not allow the signal, the last effector gene, or the last TF gene to be deleted. Fourth are mutations to gene-specific expression parameters. Most of these (L, rAct_to_Int, rprotein_syn, rmRNA_deg, and rprotein_deg) apply to both TFs and effector genes, while mutations to the gene-specific values of Kd(0) apply only to TFs and the signal. Each mutation to L increases or decreases it by 1 codon, with equal probability unless L is at the upper or lower bound.
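As an illustration of the fourth mutation class, the ±1-codon mutation to L might look as follows. Taking the only available direction at a bound is one plausible reading of "unless L is at the upper or lower bound", and the bounds L_min and L_max are placeholders for the values given in the Supplementary Methods:

```python
import random

def mutate_length(L, L_min, L_max, rng=random):
    """Mutate gene length L by one codon, respecting hard bounds.

    In the interior of the range, +1 and -1 are equally likely; at a
    bound, the step is forced inward (an assumed interpretation).
    """
    if L <= L_min:
        return L + 1
    if L >= L_max:
        return L - 1
    return L + rng.choice((-1, 1))
```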
Effect sizes of mutations to the other five parameters are modeled in such a way that mutation would maintain specified log-normal stationary distributions for these values, in the absence of selection or arbitrary bounds (see Supplementary Methods for details). Upper and lower bounds (Supplementary Methods) are used to ensure that selection never drives these parameters to unrealistic values. Fifth is conversion of a TF from being an activator to being a repressor, and vice versa. The signal is always an activator, and never converts. Importantly, this scheme allows for divergence following gene duplication. When duplicates differ due only to mutations of class 4, i.e. protein function is unchanged, we refer to them as "copies" of the same gene, encoding "protein variants". Mutations in classes 2 and 5 can create a new protein. When scoring network motifs, we require two nodes to be different genes, rather than copies of the same gene (see Supplementary Methods for details). Supplementary Table 6 summarizes the tendencies of different mutation types to be accepted, and to contribute to evolution. Acceptance rates are high, indicative of substantial nearly neutral evolution, in which slightly deleterious mutations are fixed and subsequently compensated for.

Data that can be used to recreate the presented figures are available at https://github.com/MaselLab/network-evolution-simulator.

Code availability

Source code in C is freely available at https://github.com/MaselLab/network-evolution-simulator.

References

Milo, R. et al. Network motifs: simple building blocks of complex networks. Science 298, 824–827 (2002). Shen-Orr, S. S., Milo, R., Mangan, S. & Alon, U. Network motifs in the transcriptional regulation network of Escherichia coli. Nat. Genet. 31, 64–68 (2002). Alon, U. Network motifs: theory and experimental approaches. Nat. Rev. Genet. 8, 450–461 (2007). Mangan, S. & Alon, U. Structure and function of the feed-forward loop network motif.
Proc. Natl. Acad. Sci. USA 100, 11980–11985 (2003). Jaeger, K. E., Pullen, N., Lamzin, S., Morris, R. J. & Wigge, P. A. Interlocking feedback loops govern the dynamic behavior of the floral transition in Arabidopsis. Plant Cell 25, 820–833 (2013). Peter, I. S. & Davidson, E. H. Assessing regulatory information in developmental gene regulatory networks. Proc. Natl. Acad. Sci. USA 114, 5862–5869 (2017). Mangan, S., Zaslaver, A. & Alon, U. The coherent feedforward loop serves as a sign-sensitive delay element in transcription networks. J. Mol. Biol. 334, 197–204 (2003). Gould, S. J. & Lewontin, R. C. The spandrels of San Marco and the Panglossian Paradigm: a critique of the adaptationist programme. Proc. R. Soc. Lond., B, Biol. Sci. 205, 581–598 (1979). Graur, D. et al. On the immortality of television sets: "function" in the human genome according to the evolution-free gospel of ENCODE. Genome Biol. Evol. 5, 578–590 (2013). Masel, J. & Promislow, D. E. L. Answering evolutionary questions: a guide for mechanistic biologists. BioEssays 38, 704–711 (2016). Widder, S., Solé, R. & Macía, J. Evolvability of feed-forward loop architecture biases its abundance in transcription networks. BMC Syst. Biol. 6, 7 (2012). Cordero, O. X. & Hogeweg, P. Feed-forward loop circuits as a side effect of genome evolution. Mol. Biol. Evol. 23, 1931–1936 (2006). Artzy-Randrup, Y., Fleishman, S. J., Ben-Tal, N. & Stone, L. Comment on "network motifs: simple building blocks of complex networks" and "superfamilies of evolved and designed networks". Science 305, 1107 (2004). Jenkins, D. & Stekel, D. De novo evolution of complex, global and hierarchical gene regulatory mechanisms. J. Mol. Evol. 71, 128–140 (2010). Lynch, M. The evolution of genetic networks by non-adaptive processes. Nat. Rev. Genet. 8, 803–813 (2007). Mazurie, A., Bottani, S. & Vergassola, M. An evolutionary and functional assessment of regulatory network motifs. Genome Biol. 6, R35 (2005). Solé, R. V. & Valverde, S. 
Are network motifs the spandrels of cellular complexity? Trends Ecol. Evol. 21, 419–422 (2006). Tsuda, M. E. & Kawata, M. Evolution of gene regulatory networks by fluctuating selection and intrinsic constraints. PLoS Comput. Biol. 6, e1000873 (2010). ADS MathSciNet Article Google Scholar Wagner, A. Does selection mold molecular networks? Sci. STKE 2003, pe41 (2003). Kuo, P. D., Banzhaf, W. & Leier, A. Network topology and the evolution of dynamics in an artificial genetic regulatory network model created by whole genome duplication and divergence. BioSystems 85, 177–200 (2006). Payne, J. L. & Wagner, A. Function does not follow form in gene regulatory circuits. Sci. Rep. 5, 13015 (2015). Ruths, T. & Nakhleh, L. Neutral forces acting on intragenomic variability shape the Escherichia coli regulatory network topology. Proc. Natl. Acad. Sci. USA 110, 7754–7759 (2013). Knabe, J. F., Nehaniv, C. L. & Schilstra, M. J. Do motifs reflect evolved function?—no convergent evolution of genetic regulatory network subgraph topologies. Biosystems 94, 68–74 (2008). Raser, J. M. & O'Shea, E. K. Noise in gene expression: origins, consequences, and control. Science 309, 2010–2013 (2005). Kærn, M., Elston, T. C., Blake, W. J. & Collins, J. J. Stochasticity in gene expression: from theories to phenotypes. Nat. Rev. Genet. 6, 451–464 (2005). Eldar, A. & Elowitz, M. B. Functional roles for noise in genetic circuits. Nature 467, 167–173 (2010). Draghi, J. & Whitlock, M. Robustness to noise in gene expression evolves despite epistatic constraints in a model of gene networks. Evolution 69, 2345–2358 (2015). Jenkins, D. J. & Stekel, D. J. A new model for investigating the evolution of transcription control networks. Artif. Life 15, 259–291 (2009). Henry, A., Hemery, M. & François, P. φ-evo: a program to evolve phenotypic models of biological networks. PLoS Comput. Biol. 14, e1006244 (2018). ADS Article Google Scholar Ingram, P. J., Stumpf, M. P. & Stark, J. 
Network motifs: structure does not determine function. BMC Genom. 7, 108–108 (2006). Wall, M. E., Dunlop, M. J. & Hlavacek, W. S. Multiple functions of a feed-forward-loop gene circuit. J. Mol. Biol. 349, 501–514 (2005). Wall, M. E. Structure–function relations are subtle in genetic regulatory networks. Math. Biosci. 231, 61–68 (2011). MathSciNet CAS Article Google Scholar Balázsi, G., Barabási, A. L. & Oltvai, Z. N. Topological units of environmental signal processing in the transcriptional regulatory network of Escherichia coli. Proc. Natl. Acad. Sci. USA 102, 7841–7846 (2005). Hooshangi, S., Thiberge, S. & Weiss, R. Ultrasensitivity and noise propagation in a synthetic transcriptional cascade. Proc. Natl. Acad. Sci. USA 102, 3581–3586 (2005). Dekel, E., Mangan, S. & Alon, U. Environmental selection of the feed-forward loop circuit in gene-regulation networks. Phys. Biol. 2, 81 (2005). Kittisopikul, M. & Süel, G. M. Biological role of noise encoded in a genetic network motif. Proc. Natl. Acad. Sci. USA 107, 13300–13305 (2010). Boyle, A. P. et al. Comparative analysis of regulatory information and circuits across distant species. Nature 512, 453–456 (2014). Kemmeren, P. et al. Large-scale genetic perturbations reveal regulatory networks and an abundance of gene-specific repressors. Cell 157, 740–752 (2014). Stergachis, A. B. et al. Conservation of trans-acting circuitry during mammalian regulatory evolution. Nature 515, 365–370 (2014). Madan Babu, M., Teichmann, S. A. & Aravind, L. Evolutionary dynamics of prokaryotic transcriptional regulatory networks. J. Mol. Biol. 358, 614–633 (2006). Kashtan, N., Itzkovitz, S., Milo, R. & Alon, U. Efficient sampling algorithm for estimating subgraph concentrations and detecting network motifs. Bioinformatics 20, 1746–1758 (2004). Burda, Z., Krzywicki, A., Martin, O. C. & Zagorski, M. Motifs emerge from function in model gene regulatory networks. Proc. Natl. Acad. Sci. USA 108, 17263–17268 (2011). François, P. & Hakim, V. 
Design of genetic networks with specified functions by evolution in silico. Proc. Natl. Acad. Sci. USA 101, 580–585 (2004). François, P. & Siggia, E. D. Predicting embryonic patterning using mutual entropy fitness and in silico evolution. Development 137, 2385–2395 (2010). Warmflash, A., François, P. & Siggia, E. D. Pareto evolution of gene networks: an algorithm to optimize multiple fitness objectives. Phys. Biol. 9, 56001–56007 (2012). Inukai, S., Kock, K. H. & Bulyk, M. L. Transcription factor–DNA binding: beyond binding site motifs. Curr. Opin. Genet. Dev. 43, 110–119 (2017). Wang, D. et al. Loregic: a method to characterize the cooperative logic of regulatory factors. PLoS Comput. Biol. 11, e1004132 (2015). Crocker, J. et al. Low affinity binding site clusters confer Hox specificity and regulatory robustness. Cell 160, 191–203 (2015). Ramos, A. I. & Barolo, S. Low-affinity transcription factor binding sites shape morphogen responses and enhancer evolution. Philos Trans. R. Soc. Lond., B, Biol. Sci. 368, 20130018 (2013). Lee, T. I. et al. Transcriptional regulatory networks in Saccharomyces cerevisiae. Science 298, 799–804 (2002). Ma'ayan, A. et al. Formation of regulatory patterns during signal propagation in a mammalian cellular network. Science 309, 1078–1083 (2005). Rosenfeld, N. & Alon, U. Response delays and the structure of transcription networks. J. Mol. Biol. 329, 645–654 (2003). Alon, U. An Introduction to Systems Biology: Design Principles of Biological Circuits (Chapman and Hall/CRC, Boca Raton, FL, 2007). Hayot, F. & Jayaprakash, C. A feedforward loop motif in transcriptional regulation: induction and repression. J. Theor. Biol. 234, 133–143 (2005). Pedraza, J. M. & van Oudenaarden, A. Noise propagation in gene networks. Science 307, 1965–1969 (2005). Ghaemmaghami, S. et al. Global analysis of protein expression in yeast. Nature 425, 737–741 (2003). Brown, C. R., Mao, C., Falkovskaia, E., Jurica, M. S. & Boeger, H. 
Linking stochastic fluctuations in chromatin structure and gene expression. PLoS Biol. 11, e1001621 (2013). Mao, C. et al. Quantitative analysis of the transcription control mechanism. Mol. Syst. Biol. 6, 431 (2010). Shahbazian, M. D. & Grunstein, M. Functions of site-specific histone acetylation and deacetylation. Annu. Rev. Biochem. 76, 75–100 (2007). Voss, T. C. & Hager, G. L. Dynamic regulation of transcriptional states by chromatin and transcription factors. Nat. Rev. Genet. 15, 69–81 (2013). Katan-Khaykovich, Y. & Struhl, K. Dynamics of global histone acetylation and deacetylation in vivo: rapid restoration of normal histone acetylation status upon removal of activators and repressors. Genes Dev. 16, 743–752 (2002). Courey, A. J. & Jia, S. Transcriptional repression: the long and the short of it. Genes Dev. 15, 2786–2796 (2001). Poss, Z. C., Ebmeier, C. C. & Taatjes, D. J. The Mediator complex and transcription regulation. Crit. Rev. Biochem. Mol. Biol. 48, 575–608 (2013). Decker, K. B. & Hinton, D. M. Transcription regulation at the core: similarities among bacterial, archaeal, and eukaryotic RNA polymerases. Annu. Rev. Microbiol. 67, 113–139 (2013). Roy, A. L. & Singer, D. S. Core promoters in transcription: old problem, new insights. Trends Biochem. Sci. 40, 165–171 (2015). van Drogen, F., Stucke, V. M., Jorritsma, G. & Peter, M. MAP kinase dynamics in response to pheromones in budding yeast. Nat. Cell Biol. 3, 1051 (2001). Kafri, M., Metzl-Raz, E., Jona, G. & Barkai, N. The cost of protein production. Cell Rep. 14, 22–31 (2016). Gerrish, P. J. & Lenski, R. E. The fate of competing beneficial mutations in an asexual population. Genetica 102, 127 (1998). van Nimwegen, E., Crutchfield, J. P. & Huynen, M. Neutral evolution of mutational robustness. Proc. Natl. Acad. Sci. USA 96, 9716–9720 (1999). Kimura, M. On the probability of fixation of mutant genes in a population. Genetics 47, 713–719 (1962). 
Work was supported by the University of Arizona, a Pew Scholarship to J.M., John Templeton Foundation grant 39667 to J.M., and National Institutes of Health grants R35GM118170 to M.L.S. and R01GM076041 to J.M. We thank Hinrich Boeger for helpful discussions and careful reading of the manuscript, Jasmin Uribe for early work on this project, and the high-performance computing center at the University of Arizona for generous allocations. K.X. and J.M. designed the simulations, analyzed the results, and wrote the manuscript. K.X. performed the simulations and statistical analyses. K.X., A.K.L., and J.M. wrote the simulation code. M.L.S. and J.M. conceptualized the initial design of the simulations. The authors declare no competing interests.

Xiong, K., Lancaster, A.K., Siegal, M.L. et al. Feed-forward regulation adaptively evolves via dynamics rather than topology when there is intrinsic noise. Nat Commun 10, 2418 (2019). https://doi.org/10.1038/s41467-019-10388-6
Nexus Network Journal
Remarks on the Surface Area and Equality Conditions in Regular Forms. Part I: Triangular Prisms
Ahmed A. Elkhateeb
First Online: 06 March 2014

Abstract: This work presents four mathematical remarks concluded from the mathematical analysis of the interrelationships between the dependent and independent variables that control the measures perimeter, floor area, walls surface area and total surface area in regular forms of a given volume. Such forms include prismatic and pyramidal forms. The work consists of four parts, of which this first part presents the remarks for the isosceles triangular right prism. The first remark examines the effect of θ, the angle of the triangular base, on the total surface area. The second remark calculates the minimum total surface area in two cases, depending on whether angle θ is constant or variable. The third remark calculates the walls ratio and the critical walls ratio. The last remark studies the required conditions for numerical equality in two cases: where the perimeter is equal to the area, and where the total surface area is equal to the volume.

Keywords: Trigonometry · Algebra · Differential equations · Volume · Area · Total surface area · Perimeter · Regular polygons · Right triangular prisms · Minimum total surface area · Walls ratio · Numerical equality

Form is the visual appearance of a three-dimensional object, and is often the main target in architectural design. Form and space create the ambient world in which we live and experience our environment. According to Francis Ching, form is established "… by the shapes and interrelationships of the planes that describe the boundaries of the volume" (Ching 2007, p. 28). Mathematically, the perimeter (Per), floor area (Ar), total surface area (S) and volume (V) are four basic measures that can describe any form numerically. For our purposes here, a fifth value is also significant: θ, the angle of the triangular base.
Interest in area and volume calculations started very early; it may go back to the ancient Greeks (ca. eighth–sixth century BC), who derived formulas to measure the areas of simple geometrical shapes. The ancient Greeks also measured volumes according to their dry or liquid condition, suited respectively to measuring grain and wine. In his Elements, Euclid presented many axioms and postulates related to the areas of simple geometrical shapes. Nowadays, the methods and formulas to calculate Per, Ar, S and V for the common forms are available in almost every mathematical textbook [see, for example, (Ferguson and Piggott 1923; Bird 2003; Gieck and Gieck 2006)]. In modern advanced building analysis and design (such as room acoustics, artificial or day lighting and environmental control), areas and volumes are crucial measures. For example, in room acoustics, the ratio between Per and Ar determines the shape factor Shf. Also, the ratio between V and S of a room determines its mean free path l and reverberation time T (Sabine 1993), both of which are basic measures in room acoustics (Elkhateeb 2012). In artificial lighting, room area is a main measure in the calculation of room cavity ratios, a basic factor in lighting design (Grondzik et al. 2006). In environmental control, the areas of walls and/or roofs that directly face the sun control the amount of heat transferred to the interior of a room, according to the thermal conductivity and resistivity of their materials (Konya 2011). Thus, it is important for architects and practitioners in this field to be aware of the mathematical characteristics of these two measures (area and volume) and how they affect each other. The regular forms addressed in this work are divided into two main groups. The first includes all right prisms that have regular bases, according to the definition of the term "regular" as used throughout this work (see Sect.
"Definition of "Regular Forms" in this Work"). The plan is to cover the entire range of the regular basic shapes, from the triangle to the circle. This group will be addressed in the first three parts of the work: Part I: triangular prisms (the present paper); Part II: quadratic prisms (including both rectangles and trapezoids); Part III: multi-sided prisms (from the pentagon to the circle). The second group includes all right regular pyramids, either complete or incomplete (frustum of a right pyramid); the base(s) must also be a regular shape. These will be discussed in Part IV.

Definition of "Regular Forms" in this Work

In the case of right prisms, the term "regular", as used in this part and in the subsequent parts of the work, means that the bases of the prism have at least one axis of symmetry. Thus, among right triangular prisms, for example, only isosceles or equilateral triangles will be considered; among quadratic shapes, only rectangular or symmetrical (isosceles) trapezoidal shapes will be considered. In the third dimension, the room is a right prism (i.e., all the sides of the prism are rectangles). In the case of right pyramids, the term "regular" means that the base is a regular polygon (i.e., a multi-sided shape, from the equilateral triangle to the circle). In the third dimension, the apex of such a pyramid is aligned directly above the center of its regular base. The regular forms (or rooms; both terms are used throughout the work) as described above have been chosen as the subject of this work because they are the most common in architectural applications. In addition, they can be grouped, organized and mathematically analyzed using a consistent methodology. Although irregular rooms can also be studied using the same methodology, they would have to be investigated separately according to the assumptions of each case, rather than as a group like the regular ones.
Basic mathematical formulas to calculate the different measures (mainly Per, Ar, S and V) of any regular form were established many years ago and are well known. Nevertheless, to my knowledge, there is no advanced study that analyzes the interrelationships between these variables (dependent and independent); consequently, a discussion of the way they affect each other is still lacking. The main aim of this work is to highlight specific cases that have a specific behavior in advanced building analysis. For example, in a right prismatic room with isosceles triangular bases and a given volume (see "The Mathematical Relationships of Regular Triangular Prisms"), the work seeks a better understanding of the way in which the variables (θ, Per, Ar, S and V) affect each other. In particular, the work tries to draw clear conclusions about: how the angle θ (or θ and β) affects S; when S becomes minimum (SMin); the ratio between walls surface area SW and S (SW/S = RW); when Ar numerically equals Per; and when S is numerically equal to V. For the purpose of this work, the rules of analytical geometry and trigonometry were first used to derive a set of mathematical functions that relate the dependent variables Per, HR (the height of the room; see Fig. 2) and S to the independent variables θ, Ar and V. In a room that has a given volume V, there are different tactics to achieve this V; only two will be considered here (Fig. 1 illustrates the two tactics with an example from the triangular prisms): the case of constant θ with variable Ar and HR, and the case of variable θ with constant Ar and HR. For both cases, the derived functions were used to examine the effect of θ (or θ and β) on S and to calculate SMin for the room under discussion. Finally, the conditions for the equalities were calculated using the rules of algebra and trigonometry, based on the derived functions.
The Mathematical Relationships of Regular Triangular Prisms

In an isosceles triangular right prism, and for the purpose of this work, it is assumed that the angle θ, Ar and V are the independent variables, whereas Per and S are the dependent ones. This section derives the main mathematical functions relating these two groups of variables. During the analysis, the effect of only one independent variable on the other variables is considered at a time. Fig. 2 shows the different terms used in this analysis, namely: θ (the base angle of the triangle), a (the side of the triangle), b (the base of the triangle), h (the height or altitude of the triangle) and HR (the height of the room) (Fig. 2: isosceles triangular rooms and the different variables; a, left: room plan; b, right: room 3-D). An isosceles triangle can be completely identified knowing both Ar and θ. From first principles, it can be concluded that:

$$ h = \sqrt{\frac{Ar\sin\theta}{\cos\theta}} \quad (1) $$

$$ b = \frac{2h}{\tan\theta} \quad (2) $$

$$ Per = 2h\left(\frac{1+\cos\theta}{\sin\theta}\right) \quad (3) $$

$$ Ar = \frac{h^2}{\tan\theta}. \quad (4) $$

In the third dimension, an isosceles triangular shape can be extruded to form a right prism. In this case, its volume V can be calculated from:

$$ V = \frac{h^2 H_R}{\tan\theta}. \quad (5) $$

Consequently,

$$ H_R = \frac{V\tan\theta}{h^2} \quad (6) $$

and the total surface area S of a right prism with isosceles triangular bases can be calculated as:

$$ S = 2Ar + (Per \times H_R). \quad (7) $$

Given the values of Per (Eq. 3), Ar (Eq. 4) and HR (Eq. 6) as functions of θ, Eq. 7 can be rewritten as either:

$$ S = \frac{2h^2}{\tan\theta} + 2hH_R\left(\frac{1+\cos\theta}{\sin\theta}\right) \quad (8) $$

or

$$ S = \frac{2h^2}{\tan\theta} + \frac{2V\tan\theta}{h}\left(\frac{1+\cos\theta}{\sin\theta}\right). \quad (9) $$

Remark 1: Effect of θ on S

Figure 3 is a graphical representation of Eq. 8.
As can be concluded from this figure, the behavior of the function changes dramatically from one zone to another based on the value of θ. The function is semi-symmetrical and reaches its minimum at θ = 60°. This angle, θ = 60°, splits the function into two main zones (Fig. 3: the relationship between θ and S, a graphical representation of Eq. 8):

Zone 1 (0° < θ ≤ 60°): here S is a decreasing function of θ. This zone can be divided into two sub-zones: a zone of rapid decay (a) (0° < θ ≤ 15°), where S loses about 60 % of its maximum value; and a zone of slow decay (b) (15° ≤ θ ≤ 60°), where θ increases rapidly in comparison with the reduction in S (in this zone, S loses about 22 % of its value at θ = 15°).

Zone 2 (60° ≤ θ < 90°): here S is an increasing function of θ. This zone can also be divided into two sub-zones, (c) (up to θ ≤ 85°) and (d), which are almost identical to sub-zones (b) and (a), respectively.

It can also be concluded from Fig. 3 that the variation in S corresponding to θ in the range between 45° and 60° is limited and can be ignored (Elkhateeb 2012). Nevertheless, beyond this range (whether θ ≥ 60° or θ ≤ 45°) this variation is obvious and must be considered when deciding the dimensions and setup of a room. One might argue that values of θ outside the range 45°–60° are not common in architectural applications; however, we cannot rely on that, as everything is possible in architecture. More discussion of the relationship between θ and S is presented in "Case II, Variable θ, Constant Ar and HR".

Remark 2: The Minimum Total Surface Area, SMin

In a prismatic room that has isosceles triangular bases and a given V, SMin depends on the angle θ, which can be either constant, in which case both Ar and HR are variable, or variable, in which case both Ar and HR are constant. Remark 2 is therefore divided into two sub-remarks discussing these two cases.
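As a numerical companion to Remark 1, the relations of Eqs. 1–7 can be implemented in a short Python sketch (an illustration added here, not part of the original paper's apparatus). It checks the formulas against the equilateral case and confirms the minimum of S at θ = 60° when Ar and HR are held fixed; the constants Ar = 100 and HR = 6 are arbitrary illustrative choices.

```python
import math

def triangle(theta, Ar):
    """h, b and Per of an isosceles triangle with base angle theta (radians)
    and area Ar (Eqs. 1-3)."""
    h = math.sqrt(Ar * math.sin(theta) / math.cos(theta))      # Eq. 1
    b = 2 * h / math.tan(theta)                                # Eq. 2
    Per = 2 * h * (1 + math.cos(theta)) / math.sin(theta)      # Eq. 3
    return h, b, Per

def surface_area(theta, Ar, V):
    """Total surface area S of the right prism (Eq. 7), with HR = V/Ar
    following from Eqs. 4-6."""
    _, _, Per = triangle(theta, Ar)
    return 2 * Ar + Per * (V / Ar)

# Sanity check against the equilateral case (side 1: Ar = sqrt(3)/4,
# so h = sqrt(3)/2, b = 1, Per = 3):
h, b, Per = triangle(math.radians(60), math.sqrt(3) / 4)
print(round(h, 4), round(b, 4), round(Per, 4))                 # 0.866 1.0 3.0

# Remark 1: at fixed Ar and HR (hence fixed V), S is minimized at 60 degrees.
Ar, HR = 100.0, 6.0                                            # arbitrary constants
degs = range(5, 90)
S = [surface_area(math.radians(t), Ar, Ar * HR) for t in degs]
print(list(degs)[S.index(min(S))])                             # 60
```

The scan over whole degrees recovers θ = 60° as the minimizer, in line with Fig. 3.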
Case I, Constant θ, Variable Ar and HR

In this case, among the different isosceles triangular rooms that have the same θ and V, SMin occurs when the first derivative of Eq. 9 equals zero, i.e.:

$$ \frac{dS}{dh} = \frac{4h}{\tan\theta} - \frac{2V\tan\theta}{h^2}\left(\frac{1+\cos\theta}{\sin\theta}\right) = 0. \quad (10) $$

This leads to:

$$ h^3 = \frac{V}{2}\tan^2\theta\left(\frac{1+\cos\theta}{\sin\theta}\right). \quad (11) $$

From Eq. 5, Eq. 11 can be rewritten as:

$$ h = \frac{H_R\tan\theta}{2}\left(\frac{1+\cos\theta}{\sin\theta}\right). \quad (12) $$

Thus,

$$ \frac{H_R}{h} = \frac{2\sin\theta}{\tan\theta + \sin\theta}. \quad (13) $$

Equations 12 and 13 give the conditions under which S assumes its minimum value. The ratio HR/h will be called ω; when ω yields SMin, it will be called the critical ratio ωo. Thus Eq. 13 can be rewritten as:

$$ \omega_o = \frac{2\sin\theta}{\tan\theta + \sin\theta}. \quad (14) $$

Applying the rules of trigonometry, this last formula simplifies to:

$$ \omega_o = \frac{2}{\sec\theta + 1}. \quad (15) $$

As Eq. 15 shows, ωo depends entirely on θ: for every θ there is a specific ωo that produces SMin. A room with such dimensions possesses the minimum total surface area among rooms with the same V and θ. The values of ωo were calculated for the range 20° ≤ θ ≤ 80°, and the results are presented in Fig. 4, from which it is clear that ωo is a decreasing function of θ; the shaded zone in that figure indicates the zone common in architectural applications (Fig. 4: values of ωo in the range 20° ≤ θ ≤ 80° according to Eq. 15).

To determine room dimensions that fulfill SMin, the following methodology can be applied: (1) determine both θ and V of the room; (2) calculate ωo by applying Eq. 15; (3) from Eq. 15, express h as a function of HR, then apply Eq. 6 to get h; (4) apply Eq. 15 again to get HR; (5) use Eq. 4 to get Ar.
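The critical ratio of Eq. 15 can be checked numerically. The sketch below (an illustration added here, not part of the original paper) uses θ = 45° and V = 4,000 m³, the same values as the worked example discussed with Table 1, and verifies both that ωo ≈ 0.83 and that the altitude of Eq. 11 gives a smaller S than nearby altitudes:

```python
import math

def S_of_h(h, theta, V):
    # Eq. 9: total surface area at fixed theta and V, as a function of h
    return (2 * h**2 / math.tan(theta)
            + 2 * V * math.tan(theta) / h * (1 + math.cos(theta)) / math.sin(theta))

theta, V = math.radians(45), 4000.0
h_opt = ((V / 2) * math.tan(theta)**2
         * (1 + math.cos(theta)) / math.sin(theta)) ** (1 / 3)   # Eq. 11
HR = V * math.tan(theta) / h_opt**2                              # Eq. 6
omega = HR / h_opt
print(round(omega, 2))                                           # 0.83, i.e. 2/(sec 45 + 1)

# The altitude of Eq. 11 beats nearby altitudes, confirming a minimum of S:
assert S_of_h(h_opt, theta, V) < min(S_of_h(0.9 * h_opt, theta, V),
                                     S_of_h(1.1 * h_opt, theta, V))
```

The printed ratio matches the value ωo = 0.83 quoted for θ = 45° in the text.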
Since θ and V are constants, Ar (and accordingly Per) and HR determine the value of S according to Eq. 7; HR is a decreasing function of Ar. The relationship between HR or Ar on the one hand and S on the other depends completely on ωo. As can be seen in Figs. 5 and 6 (the relationships of HR to S and of Ar to S, respectively, for the case θ = 45°), ωo divides the functions (HR − S or Ar − S) into two separate zones:

Zone (a), where ω < ωo: here S is a decreasing function of HR (Fig. 5) and an increasing function of Ar (Fig. 6; note that the location of the zones is reversed in this last figure). This means that any increase in room height decreases its total surface area.

Zone (b), where ω > ωo: here S is an increasing function of HR and a decreasing function of Ar (Figs. 5 and 6). This means that an increase in HR increases S, the opposite of what happens in zone (a).

Table 1 shows a solved example that sheds more light on the effect of constant θ and variable Ar on S (Remark 2, case I). A room has an isosceles triangular shape (θ = 45°) and a volume of 4,000 m³. By applying Eqs. 1 (for h), 3 (for Per), 4 (for Ar), 7 (for S) and 15 (for ωo), Table 1 can be calculated. For θ = 45°, ωo = 0.83 (see Fig. 4). HR was assumed for each case except for ωo. As can be concluded from the table, although increasing HR in the zone ω > ωo increases S, the increase is limited (around 1–3.5 % of the S corresponding to ωo). On the contrary, increasing Ar in the zone where ω < ωo has a significant effect on S (around 9–52 % of the S corresponding to ωo).

[Table 1 (columns: θ°, V (m³), ω, h (m), HR (m), Ar (m²), Per (m), S (m²), ΔS (%), covering the cases ω > ωo, ω = ωo and ω < ωo) is not reproduced here; bold values in the original mark the special case where S is minimum.]

Case II, Variable θ, Constant Ar and HR

This case is perhaps easier than Case I.
As V, Ar and consequently HR are constants for all rooms, according to Eq. 7 the perimeter Per is the main governor of S. In this case, among the different isosceles triangular rooms with 0° < θ < 90°, SMin occurs when the perimeter of the room is minimum. This can be calculated using Eq. 7 together with Eqs. 1 and 3, by setting the first derivative of Eq. 3 equal to zero, i.e.:

$$ \frac{dS}{d\theta} = \frac{dPer}{d\theta} = 2\sqrt{\frac{Ar\sin\theta}{\cos\theta}}\left(\frac{-\sin^2\theta - \cos\theta(1+\cos\theta)}{\sin^2\theta}\right) + \left(\frac{1+\cos\theta}{\sin\theta}\right)\sqrt{\frac{\cos\theta}{Ar\sin\theta}}\left(\frac{Ar\cos^2\theta + Ar\sin^2\theta}{\cos^2\theta}\right) = 0. \quad (16) $$

Applying the rules of trigonometry and algebra, Eq. 16 gives:

$$ \cos\theta = 0.5, \quad \text{i.e.,}\ \theta = 60^{\circ}. \quad (17) $$

This result is in complete agreement with the findings of Remark 1 (see Fig. 3). It also agrees with the mathematical fact that the isosceles triangle has the minimum perimeter among triangles of a given base and area (Alsina and Nelsen 2009). Further, the equilateral triangle (θ = 60°) possesses the absolute minimum perimeter, and consequently the minimum total surface area, among rooms that have the same Ar and V but different θ.

Remark 3: Walls Ratio RW

The walls ratio RW is the ratio between the walls surface area SW and the room's total surface area S (i.e., RW = SW/S). In our case, this ratio can be calculated from:

$$ R_W = \frac{Per \times H_R}{2Ar + Per \times H_R}. \quad (18) $$

Substituting for Per and Ar from Eqs. 3 and 4, Eq. 18 can be rewritten as:

$$ R_W = \frac{H_R(1+\cos\theta)}{h\cos\theta + H_R(1+\cos\theta)}. \quad (19) $$

The relationship between RW and θ resembles the relationship between S and θ (see Fig. 3); thus RW reaches its minimum value when θ = 60°.
In the zone where θ < 60°, RW is a decreasing function of θ; in the zone where θ > 60°, it is an increasing function of θ. Under the conditions assumed for this work, for a given θ and according to Eqs. 18 and 19, RW is a decreasing function of Ar (or h) and an increasing function of HR, a logical conclusion as long as V is constant. Worth mentioning in this context is the special case of the isosceles triangular right prism whose dimensions fulfill ωo; this special RW will be called the critical walls ratio RWo. To calculate RWo, the conditions for ωo must be applied; thus Eq. 19 can be rewritten as:

$$ R_{Wo} = \frac{\dfrac{2h(1+\cos\theta)}{\sec\theta+1}}{h\cos\theta + \dfrac{2h(1+\cos\theta)}{\sec\theta+1}}, \quad (20) $$

which simplifies to

$$ R_{Wo} = \frac{2}{3}. \quad (21) $$

This means that RWo is constant and equal to 2/3 for any θ (0° < θ < 90°).

Remark 4: Case of Numerical Equality

For the room under discussion, two cases of numerical equality will be examined: first, the numerical equality between the perimeter Per and the floor area Ar; second, the numerical equality between the total surface area S and the volume V.

Case I: Equality of Per and Ar

In this case, according to Eqs. 3 and 4, the numerical equality between Per and Ar occurs when:

$$ 2h\left(\frac{1+\cos\theta}{\sin\theta}\right) = \frac{h^2}{\tan\theta}. \quad (22) $$

The value of h that fulfills this equality will be called the critical altitude ho. By applying the rules of algebra and trigonometry, Eq. 22 can be rewritten as:

$$ h_o = 2\left(\sec\theta + 1\right). \quad (23) $$

Equation 23 reveals the condition under which Per and Ar are equal. Like ωo, this case of numerical equality depends entirely on θ: for every θ there is a specific ho for which Per equals Ar. The values of ho in the range 20° ≤ θ ≤ 80° are presented in Fig. 7.
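Two results above invite a quick numerical check: the critical walls ratio RWo = 2/3 (Eq. 21) and the critical altitude ho (Eq. 23) at which Per and Ar coincide numerically. The sketch below (an added illustration; the base angles and the altitude h = 5 are arbitrary choices) verifies both:

```python
import math

for deg in (30, 45, 60, 75):
    theta = math.radians(deg)
    c = math.cos(theta)

    # Critical walls ratio (Eqs. 15, 19 and 21): with HR/h = omega_o, RW = 2/3
    h = 5.0                                        # arbitrary altitude
    HR = 2 * h / (1 / c + 1)                       # omega_o * h (Eq. 15)
    RW = HR * (1 + c) / (h * c + HR * (1 + c))     # Eq. 19
    assert math.isclose(RW, 2 / 3)

    # Critical altitude (Eq. 23): Per = Ar numerically at h_o
    h_o = 2 * (1 / c + 1)
    Per = 2 * h_o * (1 + c) / math.sin(theta)      # Eq. 3
    Ar = h_o**2 / math.tan(theta)                  # Eq. 4
    assert math.isclose(Per, Ar)

print("RWo = 2/3 and Per(h_o) = Ar(h_o) for all tested angles")
```

Both identities hold for every tested θ, as the derivations predict.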
As Fig. 7 shows, ho is an increasing function of θ (Fig. 7: values of ho in the range 20° ≤ θ ≤ 80° according to Eq. 23). The corresponding values of b, Per and Ar can be calculated from Eqs. 2, 3 and 4.

Case II: Equality of S and V

In this case, the numerical equality between S and V occurs when:

$$ 2Ar + (Per \times H_{Ro}) = Ar \times H_{Ro}, \quad (24) $$

where HRo is the critical room height that fulfills this equality. Based on Eqs. 3 and 4, and applying the rules of algebra and trigonometry, Eq. 24 can be rewritten as:

$$ H_{Ro} = \frac{2h}{h - 2(\sec\theta + 1)}. \quad (25) $$

Thus, according to Eq. 25, for every θ and Ar there is a specific HRo such that S and V are numerically equal. This can be calculated in the following sequence: (1) determine both θ and Ar for the room; (2) apply Eq. 1 to get h; (3) substitute in Eq. 25 to get the critical room height HRo, the height of the prism that fulfills the numerical equality between S and V. The minus sign in the denominator of Eq. 25 also reveals that the equation has acceptable and unacceptable ranges. In other words, for every θ there is a minimum h (and accordingly a minimum Ar) below which the numerical equality between S and V can never exist. This occurs when HRo tends to ∞, i.e., when h = 2(sec θ + 1), or equivalently when Ar equals Per according to Eq. 23. Figure 8 represents the relationship between Ar and HRo calculated from Eq. 25 for θ = 45°. As can be seen from the figure, in the acceptable range HRo is a decreasing function of Ar, and the function can be divided into two main zones: a zone of rapid decay (when Ar tends toward equality with Per) and a zone of slow decay (when Ar is far from this equality) (Fig. 8: the relationship of Ar and HRo, case of θ = 45°).

Conclusions

This work has examined the interrelationships between the dependent and independent variables that control the values of the measures Per, Ar, S and V, and how these variables affect each other, in the case of regular triangular right prisms.
Under the conditions assumed for this work, four remarks were concluded. In the first, the effect of θ on S was investigated. In the second remark, the minimum total surface area S_Min for the room under discussion was calculated in two cases, the case of constant θ and the case of variable θ. In the first case, a new variable (ω = HR/h) was introduced. For every θ there is a specific ω (called ω_o) that results in S_Min. Results showed that ω_o depends entirely on θ. The values of ω_o in the range 20° ≤ θ ≤ 80° were calculated and presented. In the second case, where θ is variable, results showed that S_Min corresponds to θ = 60°. The third remark calculated the walls ratio RW; results showed that RW reaches its minimum value when θ = 60°. In the case of the isosceles triangular right prism whose dimensions fulfill ω_o, RW was called R_Wo. Results showed that R_Wo is constant (= 2/3) regardless of the value of θ. The last remark investigated the conditions for numerical equality, either between Per and Ar or between S and V. In the first case, another variable (h_o) was introduced. Results showed that h_o also depends on θ. For every θ there is a specific h_o that fulfills the numerical equality between Per and Ar. The values of h_o in the range 20° ≤ θ ≤ 80° were calculated and presented. In the second case, the condition for the numerical equality between S and V was calculated. Results showed that for every θ and Ar, there is a specific HR (called H_Ro) that fulfills this equality. Results also showed that for every θ there is a minimum h below which this equality will never exist. This corresponds to h_o (i.e., Ar = Per). The author would like to express his gratitude and sincere appreciation to all colleagues who made this work possible. In particular, thanks to Prof. Dr. Morad Abdel Kader, Dr. Esraa Elkhateeb and Dr. Ahmed Zakareia for their valuable discussion and help during the analysis.
© Kim Williams Books, Turin 2014. Department of Architecture, Faculty of Environmental Designs, King Abdulaziz University, Jeddah, Saudi Arabia. Elkhateeb, A.A. Nexus Netw J (2014) 16: 219. https://doi.org/10.1007/s00004-014-0178-8. First Online: 06 March 2014. Publisher: Springer Basel.
International Journal of Concrete Structures and Materials, volume 16, Article number: 58 (2022)
Investigation of the Deformation and Failure Characteristics of High-Strength Concrete in Dynamic Splitting Tests
Xudong Chen, Jin Wu, Kai Shang, Yingjie Ning & Lihui Bai
The dynamic response properties of concrete have been of interest during the service life of buildings due to seismic, impact, and explosion events. The split Hopkinson pressure bar is a classical device for testing the dynamic mechanical properties of materials. In this paper, dynamic splitting tests on concrete were conducted with it, a time-series predictive computational model for the incident, reflected and transmitted pulses of high-strength concrete specimens at high strain rates was developed, and the extension mechanism of splitting tensile cracks in high-strength concrete was detected and analyzed based on the DIC technique. The results show that the peak strengths of the C60 and C80 specimens increased by about 60% and 90%, respectively, as the impact pressure rose from 0.05 MPa to 0.09 MPa; under high impact pressure, the triangular damaged area at the ends of the contact surface between the specimen and the bar increased significantly, the dynamic energy dissipation increased, and the damage degree of the specimens increased; and under high strain rates, the higher-strength concrete specimens showed increased brittleness, a faster damage rate, and greater crack extension. The results of this paper can provide important references for the design of buildings under impact loading.
Concrete is widely used in civil engineering and military facilities, and the analysis of the mechanical properties of concrete under static or quasi-static loading has been the subject of extensive research worldwide in recent decades. A relatively mature theoretical system has been formed for the static mechanics of concrete structures, and the force and deformation characteristics of concrete under static loading are generally well understood. During their service life, however, concrete structures may be subjected to impact loads, such as explosions and shocks (Li et al., 2004), so it is very important to study the dynamic properties of concrete materials. Owing to the complexity and variability of the factors affecting the dynamic mechanical properties of concrete (Su et al., 2016; Wu et al., 2015), the late start of research in this area, and the limitations of the available research methods, the study of the dynamic mechanical properties of concrete, especially high-strength concrete, remains a great challenge. Under blast and impact loads, the tensile effect caused by the stress pulse reflected from the edge of the specimen has a significant influence on the failure of the specimen (Li et al., 2020; Chen et al., 2021; Jiang et al., 2020). Therefore, the dynamic splitting tensile properties of concrete play an important role in the safety of concrete structures (Pająk et al., 2021). A split Hopkinson pressure bar (SHPB) test can measure the dynamic mechanical properties of concrete under high-speed impact and can eliminate the effect of axial inertia in the test (Bertholf et al., 1975; Gong et al., 2019). Yang et al. (2015) conducted a Brazilian disc test on mortar using a split Hopkinson pressure bar (SHPB) device and found that mortar is a strain-rate-sensitive material. Khan et al.
(2019) simulated the actual stress state of concrete structures under dynamic loads, such as earthquakes, and conducted dynamic splitting tests under variable lateral pressure to investigate the connection between the measured pressure and strain rate and the splitting tensile strength of concrete. It was found that an increase in strain rate can increase the splitting tensile strength and splitting modulus of concrete (Huang et al., 2022), which is consistent with the results of Chen's study (Chen et al., 2017). It is known that materials behave differently under static and dynamic loads (Miao, 2018; Renliang et al., 2019; Zwiessler et al., 2017). In dynamic loading, the frictional and inertial forces at the concrete ends produce additional constraints in the concrete, resulting in lateral forces and multi-axial stress states (Pająk et al., 2019). In SHPB tests, the frictional forces at the ends can be minimized by applying petroleum jelly and polyester foil to the specimen surface (Durand et al., 2016). For concrete, the displacement and strain can be analyzed using DIC after the mechanical properties test (Shah et al., 2011; Skarżyński et al., 2018). Concrete is a material commonly used in the construction industry, and knowing its displacement and strain helps to determine the pattern of cracks on the concrete surface and many other properties, so it is important to perform such tests and analyses on concrete. Hamrat et al. (2016) used the digital image correlation (DIC) technique in an experimental study of the flexural performance of ordinary and high-strength concrete, measuring crack widths and strains; it was found that the DIC technique can measure and track crack width variation with high accuracy. Liu et al. (2019) used the DIC technique to capture the detailed formation and expansion of cracks during loading.
The strain data obtained from the DIC method were compared and validated against the strain data obtained from the conventional method (strain gauges). DIC has proven to be a reliable and accurate non-contact testing method that can be successfully used to determine the mechanical properties of various concrete materials (Huang et al., 2019; Mróz et al., 2020). However, at the present stage, DIC technology is mainly applied to the measurement of quasi-static deformation of materials, and research related to dynamic tests is scarce. Applying DIC technology to dynamic tests, combined with high-speed cameras to achieve high-frame-rate filming conditions, can effectively solve the difficulty of fine measurement of deformation under impact loading. In this study, the dynamic splitting tensile properties of high-strength concrete (C60 and C80) at different impact pressures were investigated using SHPB tests, and the changes in damage morphology, dynamic splitting tensile strength, and dynamic dissipation energy with impact pressure and strain rate were analyzed. By predicting the distribution characteristics of the strain waveform of high-strength concrete under a high strain rate with time, a corresponding computational model was established. In addition, the crack extension mechanism of high-strength concrete was detected and analyzed by the digital image correlation (DIC) technique, and the crack extension and strain variation of high-strength concrete in splitting tensile tests were determined with high accuracy. The purpose of this study is to investigate the splitting tensile processes and damage mechanisms of high-strength concrete under different impact pressures and to propose strain-related scaling laws and damage modes for brittle materials under dynamic splitting tensile conditions.
2.1 Materials and Specimen Preparation The raw materials used in this test include P-II 52.5 silicate cement, Class F Grade I fly ash, slag powder, silica fume, sand with a fineness modulus of 2.6, 5–20 mm crushed stone, and a high-performance water-reducing agent with a 30% water reduction rate. The physicochemical properties of the cement are shown in Table 1. To maintain the same size of the specimens used in the dynamic test, specimens with an aspect ratio of 0.5 are usually used (Dai et al., 2010). Therefore, the specimens used in this test for dynamic splitting are ϕ 100 mm × 50 mm cylinders. The mix proportions of the C60 and C80 high-strength concrete prepared for this test are shown in Table 2. Table 1 Physicochemical properties of cement. Table 2 Mix proportions of high-strength concrete (kg/m3). 2.2 Test Methods 2.2.1 Static Mechanical Properties According to the requirements of GB T50081-2002 (2002), 100 mm × 100 mm × 300 mm prismatic specimens and ϕ 100 mm × 200 mm cylindrical specimens were used for the elastic modulus test and the static mechanical property test, respectively. Five specimens each of the C60 and C80 strengths were tested and the results averaged to obtain the elastic modulus of the high-strength concrete. The elastic modulus can be calculated by the following formula: $$E = \frac{{\sigma_{1/3} - \sigma_{0.5} }}{{\varepsilon_{1/3} - \varepsilon_{0.5} }},$$ where E is the elastic modulus; σ1/3 is the stress equal to 1/3 of the axial compressive strength of the prismatic specimen; σ0.5 = 0.5 MPa; and ε1/3 and ε0.5 are the strains corresponding to σ1/3 and σ0.5, respectively, during loading. According to the relevant test requirements of GB T50081-2002 (2002), elastic modulus tests and static mechanical property tests were conducted on the two kinds of high-strength concrete specimens, and the basic mechanical property parameters of the high-strength concrete were measured, as shown in Table 3.
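As an illustration of the modulus formula above, the short sketch below evaluates E from a pair of stress–strain readings. The numerical values are invented for demonstration and are not measured data from this study.

```python
def elastic_modulus(sigma_third, eps_third, eps_half, sigma_half=0.5):
    """E = (sigma_1/3 - sigma_0.5) / (eps_1/3 - eps_0.5).
    Stresses in MPa, strains dimensionless, so E comes out in MPa."""
    return (sigma_third - sigma_half) / (eps_third - eps_half)

# illustrative numbers only: 1/3 of axial strength taken as 20 MPa
E = elastic_modulus(sigma_third=20.0, eps_third=5.2e-4, eps_half=2.0e-5)
assert 38_000 < E < 40_000   # about 39 GPa, a plausible order of magnitude
```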
Table 3 Basic mechanical property parameters of high-strength concrete. 2.2.2 Dynamic Splitting Test Usually, a key factor affecting the deformation behavior under dynamic loading is the impact pressure (Chen et al., 2020; Huang et al., 2020). For many materials, the mechanical behavior under different impact pressures may vary significantly (Frew et al., 2005). Mechanical properties based on static tests can be misleading and may significantly underestimate or overestimate the effective properties under dynamic impact conditions (Huang et al., 2020). In dynamic impact testing, the split Hopkinson pressure bar (SHPB) is an effective method for testing the dynamic properties of materials (Li et al., 2000). The SHPB test is based on two basic assumptions. The first is the assumption of a one-dimensional stress wave in the elastic bar, i.e., the pressure bar always remains in the elastic strain range during the experiment (this assumption can be satisfied if the stiffness of the bar is much greater than that of the specimen); the second is that the stress and strain in the specimen are uniformly distributed along the axial direction of the specimen, i.e., the uniformity assumption (this assumption can be satisfied if the length of the specimen is much smaller than the wavelength of the stress wave). These two basic assumptions must be satisfied to obtain valid experimental results (Gong et al., 2019). In this test, the dynamic splitting properties of high-strength concrete (C60 and C80) specimens were tested using a 100-mm-diameter split Hopkinson pressure bar (SHPB) as the dynamic loading device. As shown in Fig. 1, the SHPB mainly consists of a driving system, an incident bar, a transmission bar, an absorber bar, and a data acquisition system. The lengths of the incident bar and transmission bar are 4 m and 3 m, respectively.
The bullet, incident bar, transmission bar, and absorber bar in the pressure bar system are made of high-strength 40Cr alloy steel with a density of 7800 kg/m3, an elastic modulus of 250 GPa, and a yield strength of 800 MPa. (Fig. 1: Schematic diagram of the dynamic splitting test system.) During the test, the specimen is clamped between the incident and transmission bars so that the two opposite surfaces of the specimen are parallel to the axis of the pressure bar and the midpoint of the contact line between the specimen and the bar coincides with the center point of the end of the bar. A thin layer of lubricant is applied to the contact interface between the specimen and the bar to minimize frictional effects. The impact pressures set for this test are 0.05 MPa, 0.06 MPa, 0.07 MPa, 0.08 MPa, and 0.09 MPa (Table 4). Under the air-pump pressure, the bullet strikes the incident bar and an incident wave is formed. This incident wave propagates along the incident bar to the specimen, forming a transmitted wave and a reflected wave, which enter the transmission bar and travel back along the incident bar, respectively. Table 4 SHPB dynamic splitting tests. The incident and reflected waves are collected by a strain gauge attached to the incident bar, and the transmitted waves are collected by a strain gauge attached to the transmission bar. Under each impact pressure, five specimens of each of the two concretes were tested, and the average of the dynamic splitting strengths was taken as the dynamic splitting strength. The response of the concrete specimens measured by the SHPB system under impact loading is shown in Fig. 2. The figures "Raw signals of the Brazilian test" show typical pulse signal curves recorded by the strain gauges on the incident and transmission bars as incident, reflected, and transmitted waves, respectively.
It should be noted that the transmitted signals are very weak compared to the incident or reflected waves. This is mainly because the splitting tensile strength of the concrete is lower than its compressive strength. Therefore, after the damage of the specimen, most of the elastic stress waves are reflected as tensile waves back along the incident bar. The figures "The incident (In), reflected (Re), transmitted (Tr) and superposed (In + Re) waves" show the stress balance check with the time difference between the three waves eliminated, which is the result of the dynamic loading process. Whether the specimen achieves stress equilibrium during dynamic loading is an important indicator of the validity of the data. According to the three-wave equilibrium theory, the specimen is considered to be in stress equilibrium when the sum of the incident and reflected stresses is equal to the transmitted stress. In addition, the pulse-shaping technique was used during the test to extend the stress-wave rise time and further achieve the stress equilibrium state through multiple reflections of the stress wave in the specimen. (Fig. 2: Pulse signals acquired in dynamic splitting tests.) As shown in Fig. 2, typical pulses of some high-strength concrete specimens under dynamic impact are presented. It can be seen that the transmitted wave (Tr) is much smaller than the incident wave (In) and the reflected wave (Re), and that the transmitted wave (Tr) remains close to the sum of the incident and reflected waves (In + Re) during the whole loading process. In addition, the incident wave (In) and reflected wave (Re) are much larger for the C80 concrete specimens, because the dynamic splitting strength of C80 concrete is higher than that of C60 concrete.
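The three-wave equilibrium check and the Brazilian-disc strength relation used in this section can be sketched as follows. This is my own illustration: the half-sine pulses are synthetic stand-ins for the gauge records, the 5% tolerance is an assumption, and only the bar properties (250 GPa modulus, 100 mm diameter) and specimen dimensions (ϕ100 × 50 mm) are taken from the text.

```python
import numpy as np

# bar properties quoted in Sect. 2.2.2 for the 100-mm-diameter 40Cr bars
E_b = 250e9                       # elastic modulus of the bar, Pa
A_b = np.pi * (0.100 / 2) ** 2    # cross-sectional area, m^2

def stress_equilibrium_ok(eps_in, eps_re, eps_tr, tol=0.05):
    """Three-wave check: equilibrium is accepted when the superposed wave
    In + Re tracks the transmitted wave Tr within `tol` of its peak."""
    residual = np.abs(eps_in + eps_re - eps_tr).max()
    return residual <= tol * np.abs(eps_tr).max()

def splitting_strength(eps_tr, l=0.050, d=0.100):
    """Peak dynamic splitting strength, sigma = 2 E_b A_b eps_t / (pi l d)."""
    return 2 * E_b * A_b * np.abs(eps_tr).max() / (np.pi * l * d)

# synthetic half-sine pulses standing in for the recorded gauge signals
t = np.linspace(0.0, 200e-6, 400)
eps_in = 400e-6 * np.sin(np.pi * t / t[-1])
eps_tr = 40e-6 * np.sin(np.pi * t / t[-1])
eps_re = eps_tr - eps_in          # in equilibrium by construction
assert stress_equilibrium_ok(eps_in, eps_re, eps_tr)
assert abs(splitting_strength(eps_tr) - 10e6) < 1e3   # 10 MPa for these pulses
```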
The rate of increase of the incident pulse slows down with the plastic deformation of the pulse shaper after the bullet is launched, which causes the incident, reflected, and transmitted pulses to reach their peaks almost simultaneously (Chen et al., 2015). It is important to note that the transmitted wave is very weak compared to the incident or reflected wave. This is mainly because the splitting tensile strength of concrete is lower than its compressive strength. Therefore, after the specimen is damaged, most of the stress waves are reflected as tensile waves along the incident bar. According to stress wave theory (Khosravani et al., 2018; Chen et al., 2018), the dynamic splitting load, dynamic splitting tensile strength, and strain rate of high-strength concrete can be calculated using the following equations: $$\left\{ \begin{aligned} &P_{dst} (t) = E_{b} A_{b} \varepsilon_{t} (t), \\ &\sigma_{dst} (t) = \frac{2P_{dst} (t)}{\pi ld} = \frac{2E_{b} A_{b} \varepsilon_{t} (t)}{\pi ld}, \\ &\dot{\varepsilon } = \frac{\sigma_{dst} }{\Delta t\,E}, \end{aligned} \right.$$ where Pdst(t) is the dynamic splitting load of the high-strength concrete specimen over time; Eb and Ab are the elastic modulus and cross-sectional area of the split Hopkinson pressure bar, respectively; εt(t) is the transmitted strain signal; σdst(t) is the dynamic splitting stress of the specimen over time; l and d are the length and diameter of the specimen, respectively; \(\dot{\varepsilon }\) is the dynamic splitting strain rate of the specimen; Δt is the time required to reach the peak dynamic load; and E is the static modulus of elasticity of the high-strength concrete specimen. 2.2.3 Digital Image Correlation Technique The digital image correlation (DIC) technique is a photomechanical technique that calculates selected target surface displacements in a series of digital images with high accuracy (Pan et al., 2018).
The images are recorded during the test and then post-processed. In recent years, the rapid development of high-speed digital cameras has made full-field deformation measurements possible in SHPB tests; combined with surface structure tracking methods such as DIC, they can provide at least tens of high-resolution images at high frame rates (Sharafisafa et al., 2020). The first image is called the "reference image" and the second image is called the "deformed image". Usually, the DIC first defines a grid of analysis points on the reference image. In order for the method to identify the deformation points, a speckle pattern is usually created by hand. Black paint is sprayed on the white surface of the specimen at a distance of about 50 cm from the sprayer, which is long enough to produce small black spots on the specimen while avoiding so many spots that they would cover the entire surface. To obtain effective correlation, the speckle pattern should be non-repetitive, isotropic, and of high contrast, i.e., a random pattern that does not show a tendency toward a certain direction and shows dark blacks and bright whites of sufficient size for high strain resolution. By creating this type of pattern, an overly sensitive, ambiguous correlation is avoided. Then, a set of pixels, often called a "subset", is defined at each node of the grid. Image correlation is performed for each node by identifying the most similar subset in the deformed image based on statistical calculations of image correlation within each subset (Sharafisafa et al., 2020; Braunagel et al., 2020; Gao et al., 2020). In this test, to ensure accurate DIC measurement, artificial textures were added to the lightly painted surface of the specimen to improve the collected data (Fig. 3). In addition, the two-dimensional DIC method used in this test allows monitoring and evaluating the displacement changes along the shear surface of the specimen together with the strain changes caused by curvature.
Based on this feature, images of the splitting damage process of the high-strength concrete specimens during the SHPB test were collected in this study, and the specimen crack widths and strain data were obtained through DIC processing and analysis for monitoring the deformation of the structure. (Fig. 3: Specimens for the dynamic splitting test.) 3 Test Results and Analysis 3.1 Analysis of Damage Patterns Figs. 4 and 5 show the damage morphology of the C60 and C80 high-strength concrete specimens under different impact pressures (0.05 MPa, 0.06 MPa, 0.07 MPa, 0.08 MPa, 0.09 MPa), respectively. Comparing the damage morphology of the high-strength concrete under different impact conditions, it can be seen that under lower impact pressures the specimens were broken into large fragments, and with increasing impact pressure the degree of edge damage of the specimens increased and the crack width increased. In addition, with increasing impact pressure, the fragments crushed and spalled from the specimens became smaller, and the damage mode gradually changed from longitudinal splitting damage to crushing damage. (Fig. 4: Damage patterns of C60 concrete specimens at different impact pressures.) The crack extensions of C60 and C80 concrete at different impact pressures, observed using a high-speed camera, are shown in Fig. 6. According to the principle of the dynamic Brazilian disc splitting test, the splitting tensile stress at the center of the specimen is maximal when the specimen is subjected to the impact load (Khan et al., 2019). Therefore, as shown in Fig. 6, under the impact pressure the splitting crack first developed at the center of the specimen and gradually extended straight along the radial direction where the load was applied, eventually splitting the specimen into two halves. Comparing Fig.
6a, b and c, d, it is easy to see that when the impact pressure was small, the cracks in the specimen were relatively flat and extended along a single path with a relatively small crack width. When the impact pressure increased, the specimens showed secondary cracks with curved crack propagation paths and increased dynamic energy dissipation. Compared with low impact pressure, under high impact pressure the triangular damaged area at the ends of the contact surface between the specimen and the bar increased significantly, some secondary fragments were formed, and the damage to the specimen increased. It is noteworthy that the primary crack starts from the center and expands towards both ends. As time increases, the high-strain region becomes larger and expands, and a secondary strain concentration region appears at the ends. In this process, the cracking and the strain concentration regions initiate and expand simultaneously. At higher impact loads, it can be seen that the stress concentration pattern deviates from the horizontal direction as the loading process proceeds. (Fig. 6: Crack propagation morphologies of C60 and C80 under different impact pressures.) 3.2 Dynamic Splitting Stress–Strain Response Fig. 7a, b shows the dynamic splitting stress–strain responses of C60 concrete and C80 concrete, respectively. After the air pump in the split Hopkinson pressure bar device is pressurized to the specified impact pressure, the bullet is launched and then splits the specimen. As shown in Fig. 7, the dynamic splitting strength of the high-strength concrete gradually increased with increasing impact pressure. The peak strengths of the C60 and C80 specimens increased by about 60% and 90%, respectively, as the impact pressure rose from 0.05 MPa to 0.09 MPa. Moreover, the strain corresponding to final damage at the dynamic splitting strength showed an overall trend of gradual increase.
The stress–strain curve of high-strength concrete is similar to that of ordinary concrete, with an initial approximately linear elastic phase followed by a nonlinear phase of increasing stress. After the stress reaches its peak, it softens. In the softening phase, the stress decrease is smaller at high impact pressure. (Fig. 7: Dynamic splitting stress–strain responses of high-strength concrete under different impact pressures.) Ai et al. (2019) found that maintaining a given impact pressure for the ejected bullet keeps the strain rate during the test within a fairly stable value, i.e., a given value of impact pressure implies a definite corresponding value of strain rate. In the DIF–strain rate relationship plot for the high-strength concrete (C60 and C80) specimens in Fig. 8, the DIF vs. strain rate relationship for C60 concrete is more nearly linear. However, for C80 concrete, a sharp shift in DIF values can be observed as the strain rate increases. The tensile strength of C80 concrete increased slowly at low strain rates, and the strain rate sensitivity of the concrete increased significantly when the strain rate exceeded the transition strain rate of about 27 s−1. In addition, the slope of the fitted line for C60 is 3.59%, while the fitted line for C80 starts with a slope of 1.42% that shifts to 14.84%. This indicates that the sensitivity of C80 to the strain rate effect (Ross et al., 1995) is at first lower than that of C60 and increases with strain rate until it exceeds that of C60. Combined with the analysis in Fig. 7b, it was found that the peak strength of the C80 concrete specimens at an impact pressure of 0.09 MPa (a high strain rate) showed a significant increase: compared with that at an impact pressure of 0.05 MPa, the peak strength increased by about 90%.
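A shift in strain-rate sensitivity like the one reported for C80 can be characterized by fitting the DIF–strain rate data in two segments on either side of a candidate transition rate. The sketch below is purely illustrative: the data are synthetic, built from the slopes (1.42% and 14.84%) and the ~27 s−1 transition quoted above, not from the measured records.

```python
import numpy as np

def two_segment_slopes(rate, dif, transition):
    """Fit DIF vs strain rate separately below and above a transition rate
    and return the two straight-line slopes."""
    below, above = rate < transition, rate >= transition
    s_lo = np.polyfit(rate[below], dif[below], 1)[0]
    s_hi = np.polyfit(rate[above], dif[above], 1)[0]
    return s_lo, s_hi

# synthetic C80-like data from the quoted slopes and ~27 1/s transition
rate = np.linspace(10.0, 35.0, 26)
dif = np.where(rate < 27.0,
               1.0 + 0.0142 * (rate - 10.0),
               1.0 + 0.0142 * 17.0 + 0.1484 * (rate - 27.0))
s_lo, s_hi = two_segment_slopes(rate, dif, transition=27.0)
assert abs(s_lo - 0.0142) < 1e-6 and abs(s_hi - 0.1484) < 1e-6
```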
Such a trend corresponds to the DIF–strain rate relationship for the C80 specimens in Fig. 8. (Fig. 8: DIF–strain rate relationships for different high-strength concretes.) 3.3 Dynamic Dissipation Energy Based on the law of conservation of energy, the energy consumed by high-strength concrete specimens subjected to an impact load can be calculated from the energy carried by the incident, reflected, and transmitted pulses as follows (Feng et al., 2018): $$W_{d} = W_{i} - W_{r} - W_{t},$$ (3a) $$\left\{ \begin{aligned} &W_{i} = E_{b} A_{b} C_{b} \int_{0}^{t} {\varepsilon_{i}^{2}\, dt}, \\ &W_{r} = E_{b} A_{b} C_{b} \int_{0}^{t} {\varepsilon_{r}^{2}\, dt}, \\ &W_{t} = E_{b} A_{b} C_{b} \int_{0}^{t} {\varepsilon_{t}^{2}\, dt}, \end{aligned} \right.$$ (3b) where Wd is the dissipation energy; Wi, Wr, and Wt are the energies carried by the incident, reflected, and transmitted pulses, respectively; Eb, Ab, and Cb are the elastic modulus, cross-sectional area, and P-wave velocity of the pressure bar, respectively; and εi, εr, and εt are the incident, reflected, and transmitted strain signals, respectively. The dynamic dissipation energy of high-strength concrete determined using Eqs. 3(a) and (b) is shown in Fig. 9. From Fig. 9, it is easy to find that the dissipation energy of high-strength concrete increases with increasing impact pressure. Under high strain rates, the dissipation energy of the C60 specimens is slightly higher than that of the C80 concrete specimens. In particular, at an impact pressure of 0.09 MPa, when the strain rate is about 32 s−1, the dissipation energy is at least 300% higher than that at an impact pressure of 0.05 MPa. This is due to the greater brittleness of the concrete material and the greater degree of damage to the specimens at high strain rates.
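Equations 3(a) and 3(b) amount to integrating the squared strain signals over the pulse duration. A minimal numerical sketch (my own illustration with synthetic pulses; the bar modulus, density, and diameter are those quoted in Sect. 2.2.2, and taking Cb = sqrt(Eb/ρ) for the bar wave speed is my assumption):

```python
import numpy as np

# bar constants from Sect. 2.2.2; C_b = sqrt(E_b/rho) is assumed here
E_b, rho = 250e9, 7800.0
A_b = np.pi * (0.100 / 2) ** 2
C_b = (E_b / rho) ** 0.5

def pulse_energy(t, eps):
    """W = E_b * A_b * C_b * integral of eps(t)^2 dt (Eq. 3b), trapezoidal rule."""
    sq = eps ** 2
    return E_b * A_b * C_b * float(np.sum(0.5 * (sq[1:] + sq[:-1]) * np.diff(t)))

def dissipated_energy(t, eps_i, eps_r, eps_t):
    """W_d = W_i - W_r - W_t (Eq. 3a)."""
    return pulse_energy(t, eps_i) - pulse_energy(t, eps_r) - pulse_energy(t, eps_t)

# synthetic pulses standing in for the strain-gauge records
t = np.linspace(0.0, 200e-6, 400)
eps_i = 400e-6 * np.sin(np.pi * t / t[-1])
eps_r, eps_t = -0.8 * eps_i, 0.2 * eps_i
W_d = dissipated_energy(t, eps_i, eps_r, eps_t)
assert W_d > 0                                        # energy absorbed by the specimen
assert abs(W_d - 0.32 * pulse_energy(t, eps_i)) < 1e-6
```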
As the impact pressure increases, the impact energy also increases, and the dissipation energy of the high-strength concrete specimens increases with the impact energy. In addition, as the strain increases, the microcracks generated inside the high-strength concrete specimens increase and the degree of damage to the concrete gradually increases. Fu et al. (2021) found that the energy consumed in forming new cracks in concrete is significantly greater than that consumed in extending existing cracks. Therefore, the conclusion obtained from this test corroborates the observation that the energy consumed by high-strength concrete specimens increases with strain. (Fig. 9: Dissipated energy–strain characteristic curves of high-strength concrete.) 3.4 Regression Analysis of the Pulse Response of Each Bar In this test, time-series equations were developed from the incident and reflected pulses acquired in the SHPB test by one-dimensional nonlinear regression analysis, with the pulse acquisition time t as the independent variable. The incident and reflected strain signals of high-strength concrete at a high strain rate (impact pressure of 0.09 MPa) were predicted, and the coefficient of determination R2 was used to evaluate the goodness of fit of the model to the sample data. After Matlab programming and several preliminary runs, the relationship between the strain signal value and time was determined to be a univariate seventh-order polynomial with R2 ≥ 0.95, of the following form: $$\varepsilon = at^{7} + bt^{6} + ct^{5} + k,$$ where a, b, c, k are model coefficients, and the unit of time t is μs. After one-dimensional nonlinear analysis with Matlab, the model coefficients of the nonlinear regression equations of the strain signals vs.
time for the C60 and C80 concrete specimens are shown in Table 5. Table 5 Calculated coefficients of the C60 and C80 one-dimensional nonlinear regressions. Comparing the coefficients of the nonlinear regression equations for the C60 and C80 concrete specimens in Table 5, it can be found that the absolute values of the coefficients corresponding to the reflected pulse in the regression equation for C60 are generally larger than those for C80. This indicates that, among the strain values predicted by the nonlinear regression equations for C60 and C80, the reflected strain signal of C60 has a higher correlation with time, i.e., the reflected strain signal of C60 changes faster with time. Substituting the coefficients obtained from the programming calculations into the univariate nonlinear regression equation gives the regression equations for each concrete strength: $$\begin{aligned} & {\rm C}60:\\ &\left\{ \begin{aligned} \varepsilon_{i} &= 1.94 \times 10^{-15} t^{7} - 4.52 \times 10^{-12} t^{6} + 3.93 \times 10^{-9} t^{5} - 1.96, \\ \varepsilon_{r} &= -1.59 \times 10^{-15} t^{7} + 3.74 \times 10^{-12} t^{6} - 3.28 \times 10^{-9} t^{5} + 15.32, \\ \varepsilon_{t} &= \varepsilon_{i} + \varepsilon_{r}, \end{aligned} \right. \end{aligned}$$ $$\begin{aligned} & {\rm C}80:\\ &\left\{ \begin{aligned} \varepsilon_{i} &= 1.86 \times 10^{-15} t^{7} - 4.52 \times 10^{-12} t^{6} + 4.13 \times 10^{-9} t^{5} - 6.32, \\ \varepsilon_{r} &= -1.17 \times 10^{-15} t^{7} - 1.59 \times 10^{-12} t^{6} - 2.85 \times 10^{-9} t^{5} + 2.94, \\ \varepsilon_{t} &= \varepsilon_{i} + \varepsilon_{r}. \end{aligned} \right. \end{aligned}$$ Fig.
10 shows the strain–time responses of the incident and reflected pulses at high strain rates together with the model curves predicted by the nonlinear regression equations. Fig. 10 also includes the transmitted strain signals actually measured in the SHPB test. As Section 2.2.2 demonstrates, the sum of the incident and reflected strain signals is approximately equal to the transmitted signal in the test. Therefore, the measured transmitted strain signal was compared with the predicted incident and reflected signal curves to check how closely the predicted values match the measured ones. Comparing the strain signals predicted by the nonlinear fitting equations with the corresponding test values shows that the predictions reproduce the strain signals collected in the SHPB test well.

Fig. 10 Measured and predicted strain–time response of the SHPB incident, reflected and transmitted bars under high strain rates.

3.5 Failure Mode

The DIC technique is an image measurement technique based on numerical analysis: it tracks visible changes in the image and obtains the full-field displacement of the specimen surface by comparing the values of the speckle images before and after deformation. In this test, the displacement and strain fields of high-strength concrete under the dynamic high-speed impact splitting test are analyzed from the speckle images of the specimens during damage acquired by high-speed cameras, and the crack development of the concrete specimens is detected and presented by the DIC technique (Bhosale et al., 2020). To further analyze the splitting process of the high-strength concrete disc specimens at different impact strengths, crack opening displacements (CODs) were used to track crack development. Fig. 11a shows the positions of the extensometer calibrators on the disc specimen.
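Given the DIC-tracked Y-direction displacements of the two points of each virtual extensometer, the COD is simply their difference at every frame. A minimal sketch of that bookkeeping (with synthetic displacement histories, not the paper's DIC output) is:

```python
import numpy as np

def crack_opening_displacement(uy_above, uy_below):
    """COD time history for one virtual extensometer.

    uy_above / uy_below: Y-direction displacements (mm) of the two tracked
    points straddling the expected crack path; the COD at each frame is the
    difference of the two displacements.
    """
    return np.asarray(uy_above) - np.asarray(uy_below)

# Illustrative (synthetic) displacement histories over five frames:
frames = np.arange(5)
uy_above = 0.8 * frames    # point above the crack moves up
uy_below = -0.8 * frames   # point below the crack moves down
cod = crack_opening_displacement(uy_above, uy_below)
```

Applying this to each calibrator position in Fig. 11a and plotting against the specimen strain gives COD–strain curves of the kind shown in Fig. 11b, c.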
For each extensometer calibrator, the crack opening displacement (COD) is evaluated as the difference between the Y-direction displacements of the two points. Therefore, COD–strain curves at different impact pressures can be obtained from the dynamic Y-direction displacement field calculated by DIC. The COD–strain curves of the high-strength concrete discs under different impact pressures are shown in Fig. 11b, c. From Fig. 11b it can be seen that, in the C60 disc specimens, the crack opening displacements generally increase with increasing impact pressure, the COD values even reaching 7 mm at impact pressures of 0.08 MPa and 0.09 MPa. Comparing Fig. 11b, c, the COD values of the C80 disc specimens are mostly smaller than those of the C60 specimens, because C80 concrete is stronger and at the same time more brittle than C60.

Fig. 11 DIC calculation results. a Location of the extensometer calibrators along the crack route. b The C60 COD–strain curves calculated by DIC. c The C80 COD–strain curves calculated by DIC.

During impact loading of the concrete specimens, micro-strains accumulate inside the specimens and gradually develop into micro-cracks and finally into damage. Fig. 12 shows the DIC strain clouds of the high-strength concrete (C60 and C80) under 0.05 MPa impact pressure. It can be seen from the images that as the impact load increases, the micro-cracks propagate until they reach the cementing surface and the stress reaches its peak. At this point, cracks appear on the surface of the specimens, and obvious red traces of crack development appear in the images. With continued impact loading, the cracks continue to develop laterally until the specimens are completely damaged.

Fig. 12 Strain clouds of high-strength concrete under 0.05 MPa impact pressure.

It is worth mentioning that by comparing Fig.
12a, b, it is found that the cracking time of the C60 concrete specimens is earlier than that of the C80 specimens. At the same time, the crack expansion time until damage is about 25 μs for the C60 specimens, while it is about 35 μs for the C80 specimens: the time from macroscopic crack formation to specimen damage is shorter for C60 than for C80. This phenomenon corresponds to the results obtained in Sect. 3.2 of this study. The critical damage strain for crack formation and development is one of the key issues in this study for detecting damage patterns in concrete, since concrete is in general a quasi-brittle material with inherently weak tensile strength, and the strains at which it cracks are relatively low. In this experiment, the crack paths in the potential damage zone of the concrete were recorded and detected by the DIC technique, and the damage zone of the specimens was then mapped using the multiscale critical damage strain (Mamand et al., 2017; Bu et al., 2020), which covers both macroscopic and microscopic crack extension. Fig. 13 shows the strain clouds of the high-strength concrete specimens at high impact strength (i.e., at a high strain rate). As can be seen in Fig. 13a, the C60 specimens show a gradual concentration of stress between 325 μs and 330 μs. At 340 μs, the stress concentration area expands from the middle of the specimens to both sides, and some micro-cracks gradually develop into macro-cracks. In Fig. 13b, the time from the appearance of cracks to the complete damage of the C80 specimens is 35 μs, which is shorter than the time for the C60 specimens (50 μs). Therefore, under a high strain rate, the concrete specimens with higher strength have a faster damage rate and, at the same time, a higher degree of crack extension.

Fig. 13 Strain clouds of high-strength concrete under 0.09 MPa impact pressure.
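The strain clouds in Figs. 12 and 13 are derived from the DIC displacement fields by numerical differentiation. A rough sketch of that post-processing step (a simple finite-difference gradient on a synthetic, uniformly stretched field; this is not the authors' own DIC pipeline) is:

```python
import numpy as np

def strain_fields(ux, uy, dx=1.0, dy=1.0):
    """Small-strain components from DIC displacement fields.

    ux, uy: 2-D arrays of X- and Y-displacements on a regular pixel grid
    (axis 0 = y, axis 1 = x). Returns (eps_xx, eps_yy, eps_xy) computed
    with central finite differences.
    """
    dux_dy, dux_dx = np.gradient(ux, dy, dx)
    duy_dy, duy_dx = np.gradient(uy, dy, dx)
    eps_xx = dux_dx
    eps_yy = duy_dy
    eps_xy = 0.5 * (dux_dy + duy_dx)
    return eps_xx, eps_yy, eps_xy

# Synthetic uniform-extension field: ux = 0.002*x, uy = -0.001*y,
# i.e. 0.2% tension in x and 0.1% contraction in y.
y, x = np.mgrid[0:20, 0:20].astype(float)
eps_xx, eps_yy, eps_xy = strain_fields(0.002 * x, -0.001 * y)
```

In a real splitting test the strain field is far from uniform, and it is precisely the localized bands of high strain in such maps that mark the crack paths.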
Comparing the strain clouds of the high-strength concrete specimens under different impact strengths in Figs. 12 and 13, it can be found that under dynamic impact load, as the impact strength increases, the strain rate also increases and the instability of the crack damage evolution pattern on the surface of the high-strength concrete specimens increases. In addition, with increasing impact load, the time from the appearance of cracks to the complete damage of the high-strength concrete specimens grows, and the degree of cracking also increases. This confirms that the brittleness of concrete increases with its strength, and that the brittle damage of concrete becomes more pronounced as the strain rate increases.

4 Conclusions

In this experiment, the splitting behaviour of high-strength concrete was investigated based on the DIC technique. The SHPB test was used to investigate the splitting damage form, stress–strain response, and dynamic dissipation energy of high-strength concrete at different impact strengths, and a time-series prediction model of the incident, reflected and transmitted pulses of high-strength concrete specimens at high strain rates was established. The crack expansion during the splitting tests of high-strength concrete at different impact strengths was also detected and analyzed with the DIC technique, and the following conclusions were obtained. The stress–strain curve of high-strength concrete is similar to that of ordinary concrete, with an initial approximately linear elastic phase followed by a nonlinear increase of stress. After the stress reaches its peak, it enters the softening phase, in which the stress decreases less at high impact pressure.
As the impact strength increased from 0.05 MPa to 0.09 MPa, the peak strengths of the C60 and C80 specimens increased by about 60% and 90%, respectively; the dynamic splitting tensile strength of the specimens showed an overall trend of gradual increase, as did the strain corresponding to final damage. The measured strain signals were fitted with the nonlinear fitting equations, and comparing the predicted strain signals with the corresponding test values showed that the one-dimensional nonlinear regression equations predict the strain signals collected in the SHPB test well. Compared with low impact pressure, the triangular damaged area at the end of the contact surface between specimen and bar increased significantly under high impact pressure, the dynamic energy dissipation increased, and the damage degree of the specimens increased. The higher-strength concrete specimens subjected to high strain rates had a faster damage rate, while crack extension was also greater and brittle damage was more pronounced. The results of this study confirm that impact strength has a significant effect on the dynamic splitting behaviour of high-strength concrete, determine the crack initiation pattern during the dynamic splitting of high-strength concrete under different strain rates, and establish a strain–time fitting model for high-strength concrete at high strain rates. This provides a good experimental basis for further research on the dynamic splitting behaviour of high-strength concrete and a theoretical basis for the design of high-strength concrete structures in practical engineering.

The data and materials are available.

Abbreviations: SHPB: Split Hopkinson pressure bar. DIC: Digital image correlation. COD: Crack opening displacement.

References

Ai, D., Zhao, Y., Wang, Q., & Li, C. (2019). Experimental and numerical investigation of crack propagation and dynamic properties of rock in SHPB indirect tension test.
International Journal of Impact Engineering, 126, 135–146. Bertholf, L. D., & Karnes, C. H. (1975). Two-dimensional analysis of the split Hopkinson pressure bar system. Journal of the Mechanics and Physics of Solids, 23(1), 1–19. Bhosale, A. B., & Prakash, S. S. (2020). Crack propagation analysis of synthetic vs. steel vs. hybrid fibre-reinforced concrete beams using digital image correlation technique. International Journal of Concrete Structures and Materials, 14(1), 1–19. Braunagel, M. J., & Griffith, W. A. (2020). A split Hopkinson pressure bar method for controlled rapid stress cycling using an oscillating double striker bar. Rock Mechanics and Rock Engineering, 53(8), 3845–3851. Bu, J., Chen, X., Hu, L., Yang, H., & Liu, S. (2020). Experimental study on crack propagation of concrete under various loading rates with digital image correlation method. International Journal of Concrete Structures and Materials, 14(1), 1–25. Chen, D., Liu, F., Yang, F., Jing, L., Feng, W., Lv, J., & Luo, Q. (2018). Dynamic compressive and splitting tensile response of unsaturated polyester polymer concrete material at different curing ages. Construction and Building Materials, 177, 477–498. Chen, H., Zhou, X., Li, Q., He, R., & Huang, X. (2021). Dynamic compressive strength tests of corroded SFRC exposed to drying-wetting cycles with a 37 mm diameter SHPB. Materials, 14(9), 2267. Chen, X., Ge, L., Zhou, J., & Wu, S. (2017). Dynamic Brazilian test of concrete using split Hopkinson pressure bar. Materials and Structures, 50(1), 1–15. Chen, X., Shi, D., & Guo, S. (2020). Experimental study on damage evaluation, pore structure and impact tensile behavior of 10-year-old concrete cores after exposure to high temperatures. International Journal of Concrete Structures and Materials, 14(1), 1–17. Chen, X., Wu, S., & Zhou, J. (2015). Compressive strength of concrete cores under high strain rates. Journal of Performance of Constructed Facilities, 29(1), 06014005. 
Dai, F., Huang, S., Xia, K., & Tan, Z. (2010). Some fundamental issues in dynamic compression and tension tests of rocks using split Hopkinson pressure bar. Rock Mechanics and Rock Engineering, 43(6), 657–666. Durand, B., Delvare, F., Bailly, P., & Picart, D. (2016). A split Hopkinson pressure bar device to carry out confined friction tests under high pressures. International Journal of Impact Engineering, 88, 54–60. Feng, W., Liu, F., Yang, F., Li, L., & Jing, L. (2018). Experimental study on dynamic split tensile properties of rubber concrete. Construction and Building Materials, 165, 675–687. Frew, D. J., Forrestal, M. J., & Chen, W. (2005). Pulse shaping techniques for testing elastic-plastic materials with a split Hopkinson pressure bar. Experimental Mechanics, 45(2), 186–195. Fu, Q., Zhao, X., Zhang, Z., Peng, G., Zeng, X., & Niu, D. (2021). Dynamic splitting tensile behaviour and statistical scaling law of hybrid basalt-polypropylene fibre-reinforced concrete. Archives of Civil and Mechanical Engineering, 21(4), 1–22. Gao, M. Z., Zhang, J. G., Li, S. W., Wang, M., Wang, Y. W., & Cui, P. F. (2020). Calculating changes in fractal dimension of surface cracks to quantify how the dynamic loading rate affects rock failure in deep mining. Journal of Central South University, 27(10), 3013–3024. GB, T50081–2002. (2002). Testing methods of mechanical properties of normal concrete. China: China Building Materials Academy. Gong, F. Q., Si, X. F., Li, X. B., & Wang, S. Y. (2019). Dynamic triaxial compression tests on sandstone at high strain rates and low confining pressures with split Hopkinson pressure bar. International Journal of Rock Mechanics and Mining Sciences, 113, 211–219. Hamrat, M., Boulekbache, B., Chemrouk, M., & Amziane, S. (2016). Flexural cracking behavior of normal strength, high strength and high strength fiber concrete beams, using Digital Image Correlation technique. Construction and Building Materials, 106, 678–692. Huang, B., & Xiao, Y. (2020). 
Compressive impact tests of lightweight concrete with 155-mm-diameter spilt hopkinson pressure bar. Cement and Concrete Composites, 114, 103816. Huang, R., Li, S., Meng, L., Jiang, D., & Li, P. (2020). Coupled effect of temperature and strain rate on mechanical properties of steel fiber-reinforced concrete. International Journal of Concrete Structures and Materials, 14(1), 1–15. Huang, Y., He, X., Wang, Q., & Xiao, J. (2019). Deformation field and crack analyses of concrete using digital image correlation method. Frontiers of Structural and Civil Engineering, 13(5), 1183–1199. Huang, Z., Chen, W., Hao, H., Aurelio, R., Li, Z., & Pham, T. M. (2022). Test of dynamic mechanical properties of ambient-cured geopolymer concrete using split Hopkinson pressure bar. Journal of Materials in Civil Engineering, 34(2), 04021440. Jiang, Z., Hu, C., Liu, M., Easa, S. M., & Zheng, X. (2020). Characteristics of morphology and tensile strength of asphalt mixtures under impact loading using split Hopkinson pressure bar. Construction and Building Materials, 260, 120443. Khan, M. Z. N., Hao, Y., & Hao, H. (2019). Mechanical properties and behaviour of high-strength plain and hybrid-fiber reinforced geopolymer composites under dynamic splitting tension. Cement and Concrete Composites, 104, 103343. Khosravani, M. R., & Weinberg, K. (2018). A review on split Hopkinson bar experiments on the dynamic characterisation of concrete. Construction and Building Materials, 190, 1264–1283. Li, C., Xu, Y., Chen, P., Li, H., & Lou, P. (2020). Dynamic mechanical properties and fragment fractal characteristics of fractured coal–rock-like combined bodies in split hopkinson pressure bar tests. Natural Resources Research, 29(5), 3179–3195. Li, M., Qian, C., & Sun, W. (2004). Mechanical properties of high-strength concrete after fire. Cement and Concrete Research, 34(6), 1001–1005. Li, X. B., Lok, T. S., Zhao, J., & Zhao, P. J. (2000). 
Oscillation elimination in the Hopkinson bar apparatus and resultant complete dynamic stress–strain curves for rocks. International Journal of Rock Mechanics and Mining Sciences, 37(7), 1055–1060. Liu, F., Ding, W., & Qiao, Y. (2019). Experimental investigation on the flexural behavior of hybrid steel-PVA fiber reinforced concrete containing fly ash and slag powder. Construction and Building Materials, 228, 116706. Mamand, H., & Chen, J. (2017). Extended digital image correlation method for mapping multiscale damage in concrete. Journal of Materials in Civil Engineering, 29(10), 04017179. Miao, Y. G. (2018). On loading ceramic-like materials using split Hopkinson pressure bar. Acta Mechanica, 229(8), 3437–3452. Mróz, K., Tekieli, M., & Hager, I. (2020). Feasibility study of digital image correlation in determining strains in concrete exposed to fire. Materials, 13(11), 2516. Pająk, M., Baranowski, P., Janiszewski, J., Kucewicz, M., Mazurkiewicz, Ł, & Łaźniewska-Piekarczyk, B. (2021). Experimental testing and 3D meso-scale numerical simulations of SCC subjected to high compression strain rates. Construction and Building Materials, 302, 124379. Pająk, M., Janiszewski, J., & Kruszka, L. (2019). Laboratory investigation on the influence of high compressive strain rates on the hybrid fibre reinforced self-compacting concrete. Construction and Building Materials, 227, 116687. Pan, B. (2018). Digital image correlation for surface deformation measurement: Historical developments, recent advances and future goals. Measurement Science and Technology, 29(8), 082001. Renliang, S., Yongwei, S., Liwei, S., & Yao, B. (2019). Dynamic property tests of frozen red sandstone using a split hopkinson pressure bar. Earthquake Engineering and Engineering Vibration, 18(3), 511–519. Ross, C. A., Tedesco, J. W., & Kuennen, S. T. (1995). Effects of strain rate on concrete strength. Materials Journal, 92(1), 37–47. Shah, S. G., & Chandra Kishen, J. M. (2011). 
Fracture properties of concrete–concrete interfaces using digital image correlation. Experimental Mechanics, 51(3), 303–313. Sharafisafa, M., Aliabadian, Z., & Shen, L. (2020). Crack initiation and failure development in bimrocks using digital image correlation under dynamic load. Theoretical and Applied Fracture Mechanics, 109, 102688. Sharafisafa, M., & Shen, L. (2020). Experimental investigation of dynamic fracture patterns of 3D printed rock-like material under impact with digital image correlation. Rock Mechanics and Rock Engineering, 53(8), 3589–3607. Skarżyński, Ł, & Suchorzewski, J. (2018). Mechanical and fracture properties of concrete reinforced with recycled and industrial steel fibers using Digital Image Correlation technique and X-ray micro computed tomography. Construction and Building Materials, 183, 283–299. Su, Y., Li, J., Wu, C., Wu, P., & Li, Z. X. (2016). Influences of nano-particles on dynamic strength of ultra-high performance concrete. Composites Part B: Engineering, 91, 595–609. Wu, B., Chen, R., & Xia, K. (2015). Dynamic tensile failure of rocks under static pre-tension. International Journal of Rock Mechanics and Mining Sciences, 80, 12–18. Yang, F., Ma, H., Jing, L., Zhao, L., & Wang, Z. (2015). Dynamic compressive and splitting tensile tests on mortar using split Hopkinson pressure bar technique. Latin American Journal of Solids and Structures, 12, 730–746. Zwiessler, R., Kenkmann, T., Poelchau, M. H., Nau, S., & Hess, S. (2017). On the use of a split Hopkinson pressure bar in structural geology: High strain rate deformation of Seeberger sandstone and Carrara marble under uniaxial compression. Journal of Structural Geology, 97, 225–236. The authors would like to thank the financial support by National Key R&D Program of China (Grant No. 2021YFB2600200), National Natural Science Foundation of China (Grant No. 51979090) and State Key Laboratory of High-Performance Civil Engineering Materials (Grant NO. 2019CEM002). 
College of Civil and Transportation Engineering, Hohai University, Nanjing 210098, China: Xudong Chen, Jin Wu & Kai Shang. Zhejiang Communications Construction Group Co., Ltd, Hangzhou 310051, China: Yingjie Ning & Lihui Bai. Author contributions: XC: conceptualization and writing—original draft preparation. JW: conceptualization, formal analysis, writing—original draft preparation, and acquisition of data. KS: acquisition of data. YN: acquisition of data. LB: acquisition of data. All authors read and approved the final manuscript. Xudong Chen: Professor, Ph.D., College of Civil and Transportation Engineering, Hohai University, Nanjing 210098, China. Jin Wu: Master's student, College of Civil and Transportation Engineering, Hohai University, Nanjing 210098, China. Kai Shang: Bachelor, College of Civil and Transportation Engineering, Hohai University, Nanjing 210098, China. Yingjie Ning: Senior Engineer, Chief Engineer, Zhejiang Communications Construction Group Co., Ltd., Hangzhou 310051, China. Lihui Bai: Senior Engineer, Deputy Department Manager, Zhejiang Communications Construction Group Co., Ltd., Hangzhou 310051, China. Correspondence to Xudong Chen. Journal information: ISSN 1976-0485 / eISSN 2234-1315. Open Access: this article is licensed under a Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/). Chen, X., Wu, J., Shang, K., et al. Investigation of the Deformation and Failure Characteristics of High-Strength Concrete in Dynamic Splitting Tests. Int J Concr Struct Mater 16, 58 (2022). https://doi.org/10.1186/s40069-022-00548-2. Accepted: 04 July 2022. Keywords: high-strength concrete; separated Hopkinson compression bars; dynamic splitting test; nonlinear regression analysis.
\begin{document} \newtheorem{theorem}{Theorem}[section] \newtheorem{proposition}[theorem]{Proposition} \newtheorem{lemma}[theorem]{Lemma} \newtheorem{claim}[theorem]{Claim} \newtheorem{corollary}[theorem]{Corollary} \theoremstyle{definition} \newtheorem{definition}[theorem]{Definition} \newtheorem{definition-sort-of}[theorem]{`Definition'} \newtheorem{notation}[theorem]{Notation} \newtheorem{example}[theorem]{Example} \newtheorem{remark}[theorem]{Remark} \newtheorem{fact}[theorem]{Fact} \newenvironment{block}[1]{ \noindent {\Large \bf #1.}}{ } \title{Representations of trigonometric Cherednik algebras of rank 1 in positive characteristic} \author{Fr\'ed\'eric Latour} \maketitle \section{Introduction} Cherednik's double affine Hecke algebras are an important class of algebras attached to root systems. They were introduced in \cite{Ch3} as a tool of proving Macdonald's conjectures, but are also interesting by themselves, since they provide universal deformations of twisted group algebras of double affine Weyl groups. One may distinguish rational, trigonometric, and elliptic Cherednik algebras, which contain 0, 1, and 2 copies of the root lattice, respectively (rational and trigonometric algebras are degenerations of the elliptic ones; see \cite{EG}). Development of representation theory of Cherednik algebras (in particular, description of all irreducible finite dimensional representations) is an important open problem. In the characteristic zero case, it is solved completely only for type A, while in other types only partial results are available (see \cite{EG},\cite{BEG}, and \cite{ChO} for the rank 1 case). In positive characteristic, the rank 1 case (in the more general setting of complex reflection groups) is settled by the author in \cite{L}, after which the higher rank case (of type A) was considered in \cite{FG}. The goal of this paper is to extend the results of \cite{L} to the trigonometric case. 
That is, we study the representation theory of trigonometric Cherednik algebras in positive characteristic $p$ in the simplest case of rank 1. Our main result is a complete description of irreducible representations of such algebras. The paper is organized as follows. In Section 2, we state the main results. In Section 3, we prove the results for the ``classical'' case, i.e. the case when the ``Planck's constant'' $t$ is zero. In this case, generic irreducible representations have dimension $2$; one-dimensional representations exist when the ``coupling constant'' $k$ is zero. In Section 4, we prove the results for the ``quantum'' case, i.e. the case when the ``Planck's constant'' $t$ is nonzero. In this case, generic irreducible representations have dimension $2p$; smaller representations exist when the ``coupling constant'' $k$ is an element of $\mathbf{F}_p \subset \mathbf{k}$; namely, if $k$ is an integer with $0 \leq k \leq p-1$, then there exist irreducible representations of dimensions $p-k$ and $p+k$. {\bf Acknowledgements.} The author thanks his adviser Pavel Etingof for posing the problem and useful discussions, as well as for helping to write an introduction. The work of the author was partially supported by the National Science Foundation (NSF) grant DMS-9988796 and by a Natural Sciences and Engineering Research Council of Canada (NSERC) Julie Payette research scholarship. \section{Statement of Results} Let $\mathbf{k}$ be an algebraically closed field of characteristic $p$, where $p \neq 2$. Let $t, k \in \mathbf{k}$, and let $\mathbf{H}(t,k)$ be the algebra (over $\mathbf{k}$) generated by $\mathsf{X}, \mathsf{X}^{-1}, \mathsf{s}$ and $\mathsf{y}$, subject to the following relations: \begin{eqnarray} \mathsf{s} \mathsf{X} &=& \mathsf{X}^{-1} \mathsf{s} \label{rel1} \\ \mathsf{s}^2 &=& 1 \label{rel2} \\ \mathsf{s} \mathsf{y} + \mathsf{y} \mathsf{s} &=& -k \label{rel3} \\ \mathsf{X} \mathsf{y} \mathsf{X}^{-1} &=& \mathsf{y} - t + k\mathsf{s}. 
\label{rel4} \end{eqnarray} We will classify the irreducible representations of $\mathbf{H}(t,k)$. Now, for $t \neq 0$, $\mathbf{H}(t,k)$ is clearly isomorphic to $\mathbf{H}(1,\frac{k}{t})$ under the map $$\mathsf{X} \mapsto \mathsf{X}, \mathsf{s} \mapsto \mathsf{s}, \mathsf{y} \mapsto \frac{1}{t} \mathsf{y}.$$ Thus it is sufficient to classify irreducible representations of $\mathbf{H}(0,k)$ and $\mathbf{H}(1,k)$. For brevity we will use the notation $\mathbf{H}_0 \defeq \mathbf{H}(0,k)$ and $\mathbf{H}_1 \defeq \mathbf{H}(1,k)$, assuming that $k$ has been fixed once and for all. \subsection{Irreducible representations of $\mathbf{H}_0$} \begin{proposition} Let $k \neq 0$. Then the irreducible representations of $\mathbf{H}_0$ are the following: \begin{itemize} \item For $a, \beta \in \mathbf{k}$, $a, \beta \neq 0$, we have a two-dimensional representation $V_{0,1}^{\beta, a}$ with basis $\{ v_0, v_1 \}$, defined by the following: \begin{eqnarray*} \mathsf{y} v_0 &=& \beta v_0, \\ \mathsf{y} v_1 &=& - \beta v_1, \\ \mathsf{X} v_0 &=& a v_0 - \frac{k^2}{4 \beta^2} v_1, \\ \mathsf{X} v_1 &=& v_0 + \left(\frac{1}{a} - \frac{k^2}{4 a \beta^2} \right) v_1, \\ \mathsf{s} v_0 &=& -\frac{k}{2 \beta} v_0 + \frac{k^3 - 4 k \beta^2}{8a \beta^3} v_1, \\ \mathsf{s} v_1 &=& - \frac{2 a \beta}{k} v_0 + \frac{k}{2 \beta} v_1; \end{eqnarray*} \item For $a = \pm 1, b \in \mathbf{k},$ we have a two-dimensional representation $V_{0,2}^{a,b}$ with basis $\{ v_0, v_1 \}$, defined by the following: \begin{eqnarray*} \mathsf{y} v_0 &=& 0, \\ \mathsf{y} v_1 &=& v_0, \\ \mathsf{s} v_0 &=& v_0 - k v_1, \\ \mathsf{s} v_1 &=& -v_1, \\ \mathsf{X} v_0 &=& a (v_0 - k v_1), \\ \mathsf{X} v_1 &=& b v_0 + (a - k b) v_1. 
\end{eqnarray*} \end{itemize} $V_{0,1}^{\beta, a}$ and $V_{0,1}^{\beta', a'}$ are isomorphic if and only if $\beta' = \beta, a' = a$ or $\beta' = -\beta, a' = \frac{4 \beta^2 - k^2}{4 a \beta^2}.$ $V_{0,2}^{a,b}$ and $V_{0,2}^{a',b'}$ are isomorphic if and only if $a = a'$ and $b = b'.$ Furthermore, representations with different subscripts are never isomorphic. \label{prop1} \end{proposition} \begin{proposition} Let $k = 0$. Then the irreducible representations of $\mathbf{H}_0$ are the following: \begin{itemize} \item For $a, \beta \in \mathbf{k}$, $a, \beta \neq 0$, we have a two-dimensional representation $V_{0,3}^{\beta, a}$ with basis $\{ v_0, v_1 \}$, defined by the following: \begin{eqnarray*} \mathsf{y} v_0 &=& \beta v_0, \\ \mathsf{y} v_1 &=& - \beta v_1, \\ \mathsf{X} v_0 &=& a v_0, \\ \mathsf{X} v_1 &=& \frac{1}{a} v_1, \\ \mathsf{s} v_0 &=& v_1, \\ \mathsf{s} v_1 &=& v_0; \end{eqnarray*} \item For $a \in \mathbf{k}$, $a \notin \{ 0, \pm 1 \},$ we have a two-dimensional representation $V_{0,4}^a$ with basis $\{ v_0, v_1 \}$, defined by the following: \begin{eqnarray*} \mathsf{y} v_0 &=& 0, \\ \mathsf{y} v_1 &=& 0, \\ \mathsf{X} v_0 &=& a v_0, \\ \mathsf{X} v_1 &=& \frac{1}{a} v_1, \\ \mathsf{s} v_0 &=& v_1, \\ \mathsf{s} v_1 &=& v_0; \end{eqnarray*} \item For $a = \pm 1, b = \pm 1,$ we have a one-dimensional representation $V_{0,5}^{a,b}$ on which $\mathsf{y}, \mathsf{X}$ and $\mathsf{s}$ act as $0, a$ and $b$ respectively. \end{itemize} $V_{0,3}^{\beta, a}$ and $V_{0,3}^{\beta', a'}$ are isomorphic if and only if $\beta' = \beta, a' = a$ or $\beta' = -\beta, a' = \frac{1}{a}.$ $V_{0,4}^a$ and $V_{0,4}^{a'}$ are isomorphic if and only if $a' = a$ or $a' = \frac{1}{a}$. $V_{0,5}^{a,b}$ and $V_{0,5}^{a',b'}$ are isomorphic if and only if $a'=a, b'=b.$ Furthermore, representations with different subscripts are never isomorphic. 
\label{prop2} \end{proposition} \subsection{Irreducible representations of $\mathbf{H}_1$} \begin{proposition} Let $k \notin \mathbf{F}_p$. Then the irreducible representations of $\mathbf{H}_1$ are the following: \begin{itemize} \item For $\mu,d \in \mathbf{k}, d \neq 0, b = (\mu^p - \mu)^2$ with $\frac{k}{2}$ not a root of $f(y) = (y^p - y)^2 - b,$ and also for $\mu = \pm \frac{k}{2}, d \neq 0,$ we have a $2p$-dimensional representation $V_{1,1}^{\mu,d}$ with basis $\{ v_{\mu+j}, v_{-\mu+j}, j = 0,1,\ldots, p-1 \}$, defined by the following: \begin{eqnarray} \mathsf{y} v_{\beta} &=& \beta v_{\beta}, \quad \beta = \pm \mu, \pm\mu +1, \ldots, \pm \mu + p -1; \label{v11first} \\ \mathsf{s} v_{-\mu-j} &=& - \frac{1}{\mu+j} v_{\mu+j} + \frac{k}{2(\mu+j)} v_{-\mu-j}, \quad j = 1, 2, \ldots, p-1; \\ \mathsf{s} v_{\mu+j} &=& \left( \frac{k^2}{4(\mu+j)} - (\mu+j) \right) v_{-\mu-j} - \frac{k}{2(\mu+j)} v_{\mu+j}, \quad j = 1, 2, \ldots, p-1; \\ \mathsf{s} v_{-\mu} &=& \frac{k}{2 \mu} v_{-\mu} - \frac{d}{\mu} v_{\mu}; \\ \mathsf{s} v_{\mu} &=& \left( \frac{k^2}{4d \mu} - \frac{\mu}{d} \right) v_{-\mu} - \frac{k}{2 \mu} v_{\mu}; \\ \mathsf{X} v_{\beta} &=& \mathsf{s} v_{-\beta-1}, \quad \beta = \pm \mu, \pm\mu +1, \ldots, \pm \mu + p -1 \label{v11last}; \end{eqnarray} \item For $\theta = \pm 1,$ we have a $2p$-dimensional representation $V_{1,2}^{\theta}$ with basis $\{ v_{j}, w_{j}, j = 0,1,\ldots, p-1 \},$ defined by the following: \begin{eqnarray} \mathsf{y} v_j &=& j v_j, \quad j = 0, 1, \ldots, p-1; \label{v12first} \\ \mathsf{y} w_j &=& j w_j + v_j, \quad j = 0, 1, \ldots, p-1; \\ \mathsf{s} v_0 &=& -k w_0; \\ \mathsf{s} w_0 &=& -\frac{1}{k} v_0; \\ \mathsf{s} v_{-j} &=& \frac{1}{j} v_j + \frac{k}{2j} v_{-j}, \quad j = 1, 2, \ldots, \frac{p-1}{2}; \\ \mathsf{s} v_{j} &=& \left( j - \frac{k^2}{4j} \right) v_{-j} - \frac{k}{2j} v_{j}, \quad j = 1, 2, \ldots, \frac{p-1}{2}; \\ \mathsf{s} w_{-j} &=& \frac{1}{j^2} v_j + \frac{k}{2j^2} v_{-j} - \frac{1}{j} w_j + 
\frac{k}{2j} w_{-j} \quad j = 1, 2, \ldots, \frac{p-1}{2}; \\ \mathsf{s} w_{j} &=& \left( 1 + \frac{k^2}{4j^2} \right) v_{-j} + \frac{k}{2j^2} v_j - \frac{k}{2j} w_j + \left( \frac{k^2}{4j} - j \right) w_{-j}, \quad j = 1, 2, \ldots, \frac{p-1}{2}; \\ \mathsf{X} v_{j} &=& - \mathsf{s} v_{-j-1}, \quad j \neq \frac{p-1}{2}; \\ \mathsf{X} v_{\frac{p-1}{2}} &=& \theta \mathsf{s} v_{\frac{p-1}{2}}; \label{v12-2ndlast}\\ \mathsf{X} w_{j} &=& \mathsf{s} w_{-j-1}, \quad j \neq \frac{p-1}{2}; \\ \mathsf{X} w_{\frac{p-1}{2}} &=& -\theta \mathsf{s} w_{\frac{p-1}{2}}. \label{v12last} \end{eqnarray} \end{itemize} $V_{1,1}^{\mu,d}$ and $V_{1,1}^{\mu',d'}$ are isomorphic if and only if $$(\mu' - \mu \in \mathbf{F}_p \mbox{ {\rm and} } d' = d) \mbox{ {\rm or} } (\mu' + \mu \in \mathbf{F}_p \mbox{ {\rm and} } dd' = \prod_{c \in \mathbf{F}_p} \left(\frac{k^2}{4} - (\mu+c)^2 \right) ).$$ $V_{1,2}^{\theta}$ and $V_{1,2}^{\theta'}$ are isomorphic if and only if $\theta = \theta'$ Furthermore, representations with different subscripts are never isomorphic. \label{prop3} \end{proposition} Now, in the case where $k \in \mathbf{F}_p$, note that there is an isomorphism between $\mathbf{H}(1,k)$ and $\mathbf{H}(1,-k)$, given by $$\mathsf{y} \mapsto \mathsf{y}, \mathsf{s} \mapsto -\mathsf{s}, \mathsf{X} \mapsto \mathsf{X}, k \mapsto -k.$$ So we may assume that $k$ is an {\em even integer} with $0 \leq k \leq p-1.$ \begin{proposition} Let $k$ be even with $2 \leq k \leq p-1$. Then the irreducible representations of $\mathbf{H}_1$ are the following: \begin{itemize} \item For $\mu,d \in \mathbf{k}, d \neq 0,$ we have $V_{1,1}^{\mu,d}$, defined as in Proposition \ref{prop3}. 
\item For $\theta = \pm 1$, we have a $(p-k)$-dimensional representation $V_{1,3}^{\theta}$ with basis $\{ v_{\frac{k}{2}}, v_{\frac{k}{2}+1}, \ldots, v_{-\frac{k}{2}-1} \}$, defined by \begin{eqnarray} \mathsf{y} v_j &=& j v_j, \quad j = \frac{k}{2}, \frac{k}{2} + 1, \ldots, - \frac{k}{2} - 1; \label{v13first} \\ \mathsf{s} v_{-j} &=& \frac{k}{2j} v_{-j} - \frac{1}{j} v_j, \quad j = \frac{k}{2} + 1, \ldots, \frac{p-1}{2}; \\ \mathsf{s} v_j &=& -j v_{-j} + \frac{k^2}{4j} v_{-j} - \frac{k}{2j} v_j, \quad j = \frac{k}{2} + 1, \ldots, \frac{p-1}{2}; \\ \mathsf{s} v_{\frac{k}{2}} &=& -v_{\frac{k}{2}}; \end{eqnarray} \begin{eqnarray} \mathsf{X} v_{j} &=& - \mathsf{s} v_{-j-1}, \quad j \neq \frac{p-1}{2}; \\ \mathsf{X} v_{\frac{p-1}{2}} &=& \theta \mathsf{s} v_{\frac{p-1}{2}} \label{v13last}. \end{eqnarray} \item For $\theta = \pm 1$, we have a $(p+k)$-dimensional representation $V_{1,4}^{\theta}$ with basis $\{ v_j, w_i, j = 0, \ldots, p-1, i = -\frac{k}{2}, \ldots, \frac{k}{2} - 1 \}$, defined by \begin{eqnarray} \mathsf{y} v_j &=& j v_j, \quad j = 0, 1, \ldots, p-1; \label{v14first} \\ \mathsf{y} w_j &=& j w_j + v_j, \quad j = -\frac{k}{2}, -\frac{k}{2} + 1, \ldots, \frac{k}{2} - 1; \\ \mathsf{s} v_0 &=& -k w_0; \\ \mathsf{s} w_0 &=& -\frac{1}{k} v_0; \\ \mathsf{s} v_{-j} &=& \frac{1}{j} v_j + \frac{k}{2j} v_{-j}, \quad j = 1, \ldots, \frac{k}{2} - 1, \frac{k}{2} + 1, \ldots, \frac{p-1}{2}; \\ \mathsf{s} v_{j} &=& \left( j - \frac{k^2}{4j} \right) v_{-j} - \frac{k}{2j} v_{j}, \quad j = 1, \ldots, \frac{k}{2} - 1, \frac{k}{2} + 1, \ldots, \frac{p-1}{2}; \\ \mathsf{s} v_{-\frac{k}{2}} &=& v_{-\frac{k}{2}}; \\ \mathsf{s} v_{\frac{k}{2}} &=& 2 v_{-\frac{k}{2}} - v_{\frac{k}{2}}; \\ \mathsf{s} w_{-j} &=& \frac{1}{j^2} v_j + \frac{k}{2j^2} v_{-j} - \frac{1}{j} w_j + \frac{k}{2j} w_{-j}, \quad j = 1, \ldots, \frac{k}{2} - 1; \\ \mathsf{s} w_{j} &=& \left( 1 + \frac{k^2}{4j^2} \right) v_{-j} + \frac{k}{2j^2} v_j - \frac{k}{2j} w_j + \left( \frac{k^2}{4j} - j \right) w_{-j}, \quad j = 1,
\ldots, \frac{k}{2} - 1; \\ \mathsf{s} w_{-\frac{k}{2}} &=& -\frac{2}{k} v_{\frac{k}{2}} + \frac{2}{k} v_{-\frac{k}{2}} + w_{-\frac{k}{2}}; \\ \mathsf{X} v_{j} &=& - \mathsf{s} v_{-j-1}, \quad j \neq \frac{p-1}{2}; \\ \mathsf{X} v_{\frac{p-1}{2}} &=& \theta \mathsf{s} v_{\frac{p-1}{2}}; \\ \mathsf{X} w_{j} &=& \mathsf{s} w_{-j-1}, \quad j = -\frac{k}{2}, \ldots, \frac{k}{2} - 1. \label{v14last} \end{eqnarray} \item For $c \in \mathbf{k}$, we have a $2p$-dimensional representation $V_{1,5}^{c}$ with basis $\{ v_j, w_i, u_l, j = 0, \ldots, p-1, i = -\frac{k}{2}, \ldots, \frac{k}{2} - 1, l = \frac{k}{2}, \ldots, -\frac{k}{2} -1 \},$ defined by \begin{eqnarray} \mathsf{y} v_j &=& j v_j, \quad j = 0, 1, \ldots, p-1; \label{v15first} \\ \mathsf{y} w_j &=& j w_j + v_j, \quad j = -\frac{k}{2}, -\frac{k}{2} + 1, \ldots, \frac{k}{2} - 1; \\ \mathsf{s} v_0 &=& w_0; \\ \mathsf{s} w_0 &=& v_0; \\ \mathsf{s} v_{-j} &=& \frac{1}{j} v_j + \frac{k}{2j} v_{-j}, \quad j = 1, \ldots, \frac{k}{2} - 1, \frac{k}{2} + 1, \ldots, \frac{p-1}{2}; \\ \mathsf{s} v_{j} &=& \left( j - \frac{k^2}{4j} \right) v_{-j} - \frac{k}{2j} v_{j}, \quad j = 1, \ldots, \frac{k}{2} - 1, \frac{k}{2} + 1, \ldots, \frac{p-1}{2}; \\ \mathsf{s} v_{-\frac{k}{2}} &=& v_{-\frac{k}{2}}; \\ \mathsf{s} v_{\frac{k}{2}} &=& 2 v_{-\frac{k}{2}} - v_{\frac{k}{2}}; \end{eqnarray} \begin{eqnarray} \mathsf{s} w_{-j} &=& \frac{1}{j^2} v_j + \frac{k}{2j^2} v_{-j} - \frac{1}{j} w_j + \frac{k}{2j} w_{-j}, \quad j = 1, \ldots, \frac{k}{2} - 1; \\ \mathsf{s} w_{j} &=& \left( 1 + \frac{k^2}{4j^2} \right) v_{-j} + \frac{k}{2j^2} v_j - \frac{k}{2j} w_j + \left( \frac{k^2}{4j} - j \right) w_{-j}, \quad j = 1, \ldots, \frac{k}{2} - 1; \\ \mathsf{s} w_{-\frac{k}{2}} &=& -\frac{2}{k} v_{\frac{k}{2}} + \frac{2}{k} v_{-\frac{k}{2}} + w_{-\frac{k}{2}}; \\ \mathsf{s} u_j &=& -\frac{1}{j} u_{-j} - \frac{k}{2j} u_j, \quad j = \frac{k}{2} + 1, \ldots, \frac{p-1}{2}; \\ \mathsf{s} u_{-j} &=& \left( \frac{k^2}{4j} -j \right) u_j + \frac{k}{2j} u_{-j}, \quad j =
\frac{k}{2} + 1, \ldots, \frac{p-1}{2}; \\ \mathsf{s} u_{\frac{k}{2}} &=& \frac{2c}{k} v_{-\frac{k}{2}} - u_{\frac{k}{2}}; \\ \mathsf{X} v_{j} &=& - \mathsf{s} v_{-j-1}, \quad j \neq \frac{p-1}{2}; \\ \mathsf{X} v_{\frac{p-1}{2}} &=& \mathsf{s} u_{\frac{p-1}{2}}; \\ \mathsf{X} u_{j} &=& - \mathsf{s} u_{-j-1}, \quad j = \frac{k}{2}, \ldots, \frac{p-3}{2}, \frac{p+1}{2}, \ldots, -\frac{k}{2}-1; \\ \mathsf{X} u_{\frac{p-1}{2}} &=& \mathsf{s} v_{\frac{p-1}{2}}. \label{v15last} \end{eqnarray} \end{itemize} $V_{1,1}^{\mu,d}$ and $V_{1,1}^{\mu',d'}$ are isomorphic if and only if $$(\mu' - \mu \in \mathbf{F}_p \mbox{ {\rm and} } d' = d) \mbox{ {\rm or} } (\mu' + \mu \in \mathbf{F}_p \mbox{ {\rm and} } dd' = \prod_{c \in \mathbf{F}_p} \left(\frac{k^2}{4} - (\mu+c)^2 \right)).$$ $V_{1,3}^{\theta}$ and $V_{1,3}^{\theta'}$ are isomorphic if and only if $\theta = \theta'$. $V_{1,4}^{\theta}$ and $V_{1,4}^{\theta'}$ are isomorphic if and only if $\theta = \theta'$. $V_{1,5}^{c}$ and $V_{1,5}^{c'}$ are isomorphic if and only if $c = c'$. Furthermore, representations with different subscripts are never isomorphic. \label{prop4} \end{proposition} \begin{proposition} Let $k = 0$. Then the irreducible representations of $\mathbf{H}_1$ are the following: \begin{itemize} \item For $\mu,d \in \mathbf{k}, d \neq 0, b = (\mu^p - \mu)^2$, we have $V_{1,1}^{\mu,d}$, defined as in Proposition \ref{prop3}. \item For $c, \theta = \pm 1$, we have a $p$-dimensional representation $V_{1,6}^{c, \theta}$ with basis $\{ v_j, j = 0, 1, \ldots, p-1 \}$, defined by \begin{eqnarray} \mathsf{y} v_j &=& j v_j, \quad j = 0, 1, \ldots, p-1; \label{v16first} \\ \mathsf{s} v_0 &=& c v_0; \\ \mathsf{s} v_j &=& - j v_{-j}, \quad j = 1, \ldots, \frac{p-1}{2}; \\ \mathsf{s} v_{-j} &=& - \frac{1}{j} v_j, \quad j = 1, \ldots, \frac{p-1}{2}; \\ \mathsf{X} v_{j} &=& \mathsf{s} v_{-j-1}, \quad j \neq \frac{p-1}{2}; \\ \mathsf{X} v_{\frac{p-1}{2}} &=& \theta \mathsf{s} v_{\frac{p-1}{2}}.
\label{v16last} \end{eqnarray} \item For $a \in \mathbf{k}$, we have a $2p$-dimensional representation $V_{1,7}^{a}$ with basis $\{ v_j, u_j, j = 0, 1, \ldots, p-1 \}$, defined by \begin{eqnarray} \mathsf{y} v_j &=& j v_j, \quad j = 0, 1, \ldots, p-1; \label{v17first} \\ \mathsf{y} u_j &=& j u_j, \quad j = 0, 1, \ldots, p-1; \\ \mathsf{s} v_0 &=& v_0; \\ \mathsf{s} u_0 &=& a v_0 - u_0; \end{eqnarray} \begin{eqnarray} \mathsf{s} v_{-j} &=& - \frac{1}{j} v_j, \quad j = 1, 2, \ldots, \frac{p-1}{2}; \\ \mathsf{s} v_j &=& - j v_{-j}, \quad j = 1, 2, \ldots, \frac{p-1}{2}; \\ \mathsf{s} u_j &=& \frac{1}{j} u_{-j}, \quad j = 1, 2, \ldots, \frac{p-1}{2}; \\ \mathsf{s} u_{-j} &=& j u_j, \quad j = 1, 2, \ldots, \frac{p-1}{2}; \\ \mathsf{X} v_{j} &=& \mathsf{s} v_{-j-1}, \quad j \neq \frac{p-1}{2}; \\ \mathsf{X} v_{\frac{p-1}{2}} &=& \mathsf{s} u_{\frac{p-1}{2}}; \\ \mathsf{X} u_{j} &=& \mathsf{s} u_{-j-1}, \quad j \neq \frac{p-1}{2}; \\ \mathsf{X} u_{\frac{p-1}{2}} &=& \mathsf{s} v_{\frac{p-1}{2}}. \label{v17last} \end{eqnarray} \end{itemize} $V_{1,1}^{\mu,d}$ and $V_{1,1}^{\mu',d'}$ are isomorphic if and only if $$(\mu' - \mu \in \mathbf{F}_p \mbox{ {\rm and} } d' = d) \mbox{ {\rm or} } (\mu' + \mu \in \mathbf{F}_p \mbox{ {\rm and} } dd' = \prod_{c \in \mathbf{F}_p} \left(\frac{k^2}{4} - (\mu+c)^2 \right)).$$ $V_{1,6}^{c,\theta}$ and $V_{1,6}^{c',\theta'}$ are isomorphic if and only if $c = c'$ and $\theta = \theta'$. $V_{1,7}^a$ and $V_{1,7}^{a'}$ are isomorphic if and only if $a = a'$. Furthermore, representations with different subscripts are never isomorphic. \label{prop5} \end{proposition} \section{Proof of Propositions \ref{prop1} and \ref{prop2}} \begin{lemma}[PBW for $\mathbf{H}_0$, easy direction] The elements $$\mathsf{s}^i \mathsf{X}^j \mathsf{y}^l, \quad \quad j,l \in \mathbf{Z}, l \geq 0, i \in \{0,1\}$$ span $\mathbf{H}_0$ over $\mathbf{k}$.
\label{PBW} \end{lemma} \begin{proof} Given a product of $\mathsf{X}, \mathsf{y}, \mathsf{s}, \mathsf{X}^{-1}$ in any order, one can ensure that the $\mathsf{y}$'s are to the right of all the $\mathsf{X}$'s by using $\mathsf{y}\mathsf{X} = \mathsf{X}\mathsf{y} - k\mathsf{s}\mathsf{X}$ repeatedly, and one can also ensure that the $\mathsf{s}$'s are to the left of all the $\mathsf{X}$'s and $\mathsf{y}$'s by using $\mathsf{X}\mathsf{s} = \mathsf{s} \mathsf{X}^{-1}$ and $\mathsf{y}\mathsf{s} = -k -\mathsf{s}\mathsf{y}$ repeatedly. \end{proof} \begin{lemma} $\mathsf{X} + \mathsf{X}^{-1}, \mathsf{y}^2$ and $\mathsf{X}\mathsf{y} - \mathsf{y}\mathsf{X}^{-1}$ belong to the center $\cent{\mathbf{H}_0}$ of $\mathbf{H}_0$. \label{lemma1} \end{lemma} \begin{proof} First, let us show that $\mathsf{y}^2 \in \cent{\mathbf{H}_0}$. We have \begin{eqnarray*} \mathsf{X} \mathsf{y}^2 &=& (\mathsf{y} \mathsf{X} + k \mathsf{s}\mathsf{X}) \mathsf{y} \\ &=& \mathsf{y}\mathsf{X}\mathsf{y} + k\mathsf{s}\mathsf{X}\mathsf{y} \\ &=& \mathsf{y}(\mathsf{y}\mathsf{X} + k\mathsf{s}\mathsf{X}) + k\mathsf{s}(\mathsf{y}\mathsf{X} + k\mathsf{s}\mathsf{X}) \\ &=& \mathsf{y}^2 \mathsf{X} + k (\mathsf{y}\mathsf{s}+\mathsf{s}\mathsf{y}) \mathsf{X} + k^2\mathsf{s}^2\mathsf{X} \\ &=& \mathsf{y}^2 \mathsf{X} - k^2 \mathsf{X} + k^2 \mathsf{X} \\ &=& \mathsf{y}^2 \mathsf{X}; \end{eqnarray*} thus $[\mathsf{X}, \mathsf{y}^2] = 0$. We also have \begin{eqnarray*} \mathsf{s} \mathsf{y}^2 &=& (-\mathsf{y}\mathsf{s} -k) \mathsf{y} = -\mathsf{y}\mathsf{s}\mathsf{y} - k\mathsf{y} = -\mathsf{y}(-\mathsf{y}\mathsf{s} - k) - k \mathsf{y} = \mathsf{y}^2 \mathsf{s}; \end{eqnarray*} thus $[\mathsf{s}, \mathsf{y}^2] = 0$. It follows that $\mathsf{y}^2 \in \cent{\mathbf{H}_0}$. Next, we show that $\mathsf{X} + \mathsf{X}^{-1} \in \cent{\mathbf{H}_0}$.
We have $$\mathsf{y} (\mathsf{X} + \mathsf{X}^{-1}) = \mathsf{X}\mathsf{y} - k \mathsf{s}\mathsf{X} + \mathsf{X}^{-1} \mathsf{y} + k \mathsf{X}^{-1} \mathsf{s} \\ = (\mathsf{X} + \mathsf{X}^{-1}) \mathsf{y},$$ and $$\mathsf{s} (\mathsf{X} + \mathsf{X}^{-1}) = \mathsf{X}^{-1} \mathsf{s} + \mathsf{X} \mathsf{s} = (\mathsf{X} + \mathsf{X}^{-1}) \mathsf{s}.$$ Thus $[\mathsf{y}, \mathsf{X} + \mathsf{X}^{-1}] = [\mathsf{s}, \mathsf{X} + \mathsf{X}^{-1}] = 0$, and so $\mathsf{X} + \mathsf{X}^{-1} \in \cent{\mathbf{H}_0}.$ Finally, we show that $\mathsf{X}\mathsf{y} - \mathsf{y}\mathsf{X}^{-1} \in \cent{\mathbf{H}_0}$. First we note that $$\mathsf{y} \mathsf{X} - \mathsf{X}^{-1} \mathsf{y} = \mathsf{y} \mathsf{X} + \mathsf{X} \mathsf{y} - (\mathsf{X} + \mathsf{X}^{-1}) \mathsf{y} = \mathsf{y} \mathsf{X} + \mathsf{X} \mathsf{y} - \mathsf{y} (\mathsf{X} + \mathsf{X}^{-1}) = \mathsf{X} \mathsf{y} - \mathsf{y} \mathsf{X}^{-1},$$ and thus $$\mathsf{X} (\mathsf{X} \mathsf{y} - \mathsf{y} \mathsf{X}^{-1}) = \mathsf{X} (\mathsf{y} \mathsf{X} - \mathsf{X}^{-1} \mathsf{y}) = (\mathsf{X} \mathsf{y} - \mathsf{y} \mathsf{X}^{-1}) \mathsf{X},$$ $$\mathsf{y} (\mathsf{X} \mathsf{y} - \mathsf{y} \mathsf{X}^{-1}) = \mathsf{y} (\mathsf{y} \mathsf{X} - \mathsf{X}^{-1} \mathsf{y}) = \mathsf{y}^2 \mathsf{X} - \mathsf{y} \mathsf{X}^{-1} \mathsf{y} = \mathsf{X} \mathsf{y}^2 - \mathsf{y} \mathsf{X}^{-1} \mathsf{y} = (\mathsf{X} \mathsf{y} - \mathsf{y} \mathsf{X}^{-1}) \mathsf{y},$$ and $$\mathsf{s} (\mathsf{X} \mathsf{y} - \mathsf{y} \mathsf{X}^{-1}) = \mathsf{s} (\mathsf{y} \mathsf{X} - \mathsf{X}^{-1} \mathsf{y}) = - (\mathsf{y} \mathsf{s} + k) \mathsf{X} - \mathsf{X} \mathsf{s} \mathsf{y} = - \mathsf{y} \mathsf{X}^{-1} \mathsf{s} - k \mathsf{X} + \mathsf{X} (\mathsf{y}\mathsf{s} + k) = (\mathsf{X} \mathsf{y} - \mathsf{y} \mathsf{X}^{-1}) \mathsf{s}.$$ Thus $\mathsf{X} \mathsf{y} - \mathsf{y} \mathsf{X}^{-1} \in \cent{\mathbf{H}_0}.$ \end{proof} \begin{corollary} $\mathbf{H}_0$ is finitely
generated as a module over its center. \label{cor2} \end{corollary} \begin{proof} From Lemmas \ref{PBW} and \ref{lemma1}, we see that $\mathbf{H}_0$ is generated over its center by $$\mathsf{s}^i \mathsf{X}^j \mathsf{y}^l, \quad \quad i,j,l \in \{0,1\}.$$ \end{proof} \begin{corollary} Every irreducible $\mathbf{H}_0$-module is finite-dimensional over $\mathbf{k}$. \label{cor3} \end{corollary} \begin{proof} Standard. \end{proof} Thus Schur's lemma implies that central elements of $\mathbf{H}_0$ act as scalars in any irreducible $\mathbf{H}_0$-module. From this point, we will use the following notation: the eigenspace of $\mathsf{y}$ with eigenvalue $\beta$ will be denoted $V[\beta]$. \begin{corollary} Let $V$ be an irreducible $\mathbf{H}_0$-module, and let $\beta$ be an eigenvalue of $\mathsf{y}$. Suppose $\beta \neq 0$. Then, $$V = V[\beta] \oplus V[-\beta],$$ and $\operatorname{\mathsf{dim}} V[\beta] = \operatorname{\mathsf{dim}} V[-\beta] = 1.$ \label{corlast} \end{corollary} \begin{proof} Suppose $V[\beta] \neq 0$, and let $v \in V[\beta]$ be nonzero. From the proof of corollary \ref{cor2}, we know that $V$ is spanned by $$\{v, \mathsf{X} v, \mathsf{s} v, \mathsf{s} \mathsf{X} v\}.$$ Now let $w = \mathsf{s} \mathsf{X} v;$ then $$\mathsf{y} w = \mathsf{y} \mathsf{s} \mathsf{X} v = - \mathsf{s} \mathsf{y} \mathsf{X} v - k \mathsf{X} v = - \mathsf{s} \mathsf{X} \mathsf{y} v + k \mathsf{s}^2 \mathsf{X} v - k \mathsf{X} v = - \beta \mathsf{s} \mathsf{X} v = - \beta w.$$ Thus, $w \in V[-\beta].$ Clearly, $w \neq 0$, and thus $V[-\beta] \neq 0$. 
Now let $v' = 2 \beta \mathsf{X} v - k w.$ Then, $$\mathsf{y} v' = 2 \beta \mathsf{y} \mathsf{X} v + \beta k w = 2 \beta \mathsf{X} \mathsf{y} v - 2 k \beta \mathsf{s} \mathsf{X} v + \beta k w = 2 \beta^2 \mathsf{X} v - \beta k w = \beta v'.$$ Hence, $v' \in V[\beta].$ Also, if $w' = k v + 2 \beta \mathsf{s} v,$ then $$\mathsf{y} w' = k \mathsf{y} v + 2 \beta \mathsf{y} \mathsf{s} v = \beta k v - 2 \beta \mathsf{s} \mathsf{y} v - 2 \beta k v = - \beta k v - 2 \beta \mathsf{s} \mathsf{y} v = - \beta w'.$$ Hence, $w' \in V[-\beta].$ From this it follows that $$V = V[\beta] \oplus V[-\beta].$$ Now let $\overline{\Hh}_0$ be the subalgebra of $\mathbf{H}_0$ generated by $\cent{\mathbf{H}_0}$ and $2 \beta \mathsf{X} - k \mathsf{s} \mathsf{X}$. It is clear that $V[\beta] = \overline{\Hh}_0 v$. Since this is true for all nonzero $v \in V[\beta]$, it follows that $V[\beta]$ is an irreducible representation of $\overline{\Hh}_0$. Since $\overline{\Hh}_0$ is commutative, we see that $V[\beta]$ is one dimensional. The same holds for $V[-\beta]$, and the corollary is proved. \end{proof} \begin{corollary} Assume $k \neq 0$. Let $V$ be an irreducible $\mathbf{H}_0$-module, and suppose $0$ is an eigenvalue of $\mathsf{y}$. Then, $V = V_{\text{gen}}[0],$ the generalized eigenspace of $0$. We also have $\operatorname{\mathsf{dim}} V = 2$ and $\operatorname{\mathsf{dim}} V[0] = 1.$ \label{corlast2} \end{corollary} \begin{proof} Let $v \in V[0]$ be nonzero.
From the proof of corollary \ref{cor2}, we know that $V$ is spanned by $$\{v, \mathsf{X} v, \mathsf{s} v, \mathsf{s} \mathsf{X} v \}.$$ Let $w = - \mathsf{s} v;$ then, $$\mathsf{y} w = - \mathsf{y} \mathsf{s} v = \mathsf{s} \mathsf{y} v + k v = k v.$$ Let $v' = \mathsf{s} \mathsf{X} v = \mathsf{X}^{-1} \mathsf{s} v;$ then, $$\mathsf{y} v' = \mathsf{y} \mathsf{X}^{-1} \mathsf{s} v = \mathsf{X}^{-1} \mathsf{y} \mathsf{s} v + k \mathsf{X}^{-1} \mathsf{s}^2 v = - \mathsf{X}^{-1} \mathsf{s} \mathsf{y} v = 0.$$ Let $w' = - \mathsf{X} v = - \mathsf{s} v';$ then, as above, we have $\mathsf{y} w' = k v'.$ So we have $$\mathsf{y} v = 0, \quad \mathsf{y} w = k v, \quad \mathsf{y} v' = 0, \quad \mathsf{y} w' = k v';$$ therefore, $V = V_{\text{gen}}[0]$ and $V[0]$ is spanned by $v$ and $v'$. Now let $\overline{\Hh}_0$ be the subalgebra of $\mathbf{H}_0$ generated by $\cent{\mathbf{H}_0}$ and $\mathsf{s} \mathsf{X}$. It is clear that $V[0] = \overline{\Hh}_0 v$. Since this is true for all nonzero $v \in V[0]$, it follows that $V[0]$ is an irreducible representation of $\overline{\Hh}_0$. Since $\overline{\Hh}_0$ is commutative, we see that $V[0]$ is one dimensional. The corollary follows from this. \end{proof} \begin{corollary} Assume $k = 0$. Let $V$ be an irreducible $\mathbf{H}_0$-module, and suppose $0$ is an eigenvalue of $\mathsf{y}$. Then, $$V = V[0],$$ the eigenspace of $0$. We also have $$\operatorname{\mathsf{dim}} V = \begin{cases} 1 & \mbox{if } 1 \mbox{ or } -1 \mbox{ is an eigenvalue of } \mathsf{X} \\ 2 & \mbox{otherwise}. \end{cases}$$ \label{corlast3} \end{corollary} \begin{proof} From the proof of corollary \ref{corlast2}, we see that $\mathsf{y}$ acts on $V$ as the zero operator. Let $\lambda$ be an eigenvalue of $\mathsf{X}$, let $V_{\X}[\lambda]$ denote the associated eigenspace and let $v \in V_{\X}[\lambda]$ be nonzero.
From the proof of corollary \ref{cor2}, we know that $V$ is spanned by $\{v, \mathsf{s} v \}.$ Now $$\mathsf{X} \mathsf{s} v = \mathsf{s} \mathsf{X}^{-1} v = \lambda^{-1} \mathsf{s} v,$$ so $\mathsf{s} v \in V_{\X}[\lambda^{-1}].$ Clearly, $\mathsf{s} v \neq 0;$ thus, if $\lambda \neq \pm 1,$ then $$V = V_{\X}[\lambda] \oplus V_{\X}[\lambda^{-1}] \quad \mbox{ and } \operatorname{\mathsf{dim}} V = 2.$$ If $\lambda = \pm 1,$ it follows that $\mathsf{X}$ and $\mathsf{s}$ commute as operators on $V$; since $V$ is irreducible, this implies that $\operatorname{\mathsf{dim}} V = 1.$ \end{proof} \begin{proof}[Proof of Proposition \ref{prop1}] Let $\beta \neq 0$, and let $V$ be a two-dimensional representation of $\mathbf{H}_0$ in which $V[\beta]$ and $V[-\beta]$ both have dimension 1. Let $v_0 \in V[\beta]$, $v_1 \in V[-\beta]$ be nonzero. Let the matrices representing $\mathsf{s}$ and $\mathsf{X}$ with respect to the basis $\{ v_0, v_1 \}$ be as follows: $$\mathsf{s} \mapsto \left( \begin{array}{cc} \gamma_0 & \delta_0 \\ \gamma_1 & \delta_1 \end{array} \right), \quad \mathsf{X} \mapsto \left( \begin{array}{cc} \theta_0 & \omega_0 \\ \theta_1 & \omega_1 \end{array} \right). $$ First, we note that $\mathsf{X}$ and $\mathsf{y}$ cannot have a common eigenvector; for if $\mathsf{X} w = \gamma w$ and $\mathsf{y} w = \beta' w$, then $k \mathsf{s} w = \mathsf{X} \mathsf{y} \mathsf{X}^{-1} w - \mathsf{y} w = 0,$ and combining this with $\mathsf{s}^2 = 1$ gives $w = 0.$ Hence, by scaling, we can assume that $\omega_0 = 1.$ Now the central element $\mathsf{X} \mathsf{y} - \mathsf{y} \mathsf{X}^{-1}$ acts on $V$ as a scalar. 
The matrix representation of $\mathsf{X} \mathsf{y} - \mathsf{y} \mathsf{X}^{-1}$ is $$\left( \begin{array}{cc} \frac{\beta}{\det \mathsf{X}} \left(\theta_0 \det \mathsf{X} - \omega_1 \right) & -\frac{\beta}{\det \mathsf{X}} (\det \mathsf{X} - 1) \\ \frac{\beta \theta_1}{\det \mathsf{X}} (\det \mathsf{X} - 1) & -\frac{\beta}{\det \mathsf{X}} \left(\theta_0 \det \mathsf{X} - \theta_0 \right) \end{array} \right).$$ Thus, $0 = -\frac{\beta}{\det \mathsf{X}} (\det \mathsf{X} - 1),$ which means that $\det \mathsf{X} = 1$. Hence, \begin{eqnarray} \theta_1 = \theta_0 \omega_1 - 1. \label{p1e1} \end{eqnarray} Using \eqref{rel4}, we see that $\mathsf{X} \mathsf{y} \mathsf{X}^{-1} - \mathsf{y} - k \mathsf{s} = 0.$ Using \eqref{p1e1}, we see that the matrix representation of $\mathsf{X} \mathsf{y} \mathsf{X}^{-1} - \mathsf{y} - k \mathsf{s}$ is $$\left( \begin{array}{cc} 2 \beta \theta_0 \omega_1 - 2 \beta - k \gamma_0 & - 2 \beta \theta_0 - k \delta_0 \\ 2 \beta \theta_0 \omega_1^2 - 2 \beta \omega_1 - k \gamma_1 & - 2 \beta \theta_0 \omega_1 + 2 \beta - k \delta_1 \end{array} \right).$$ Hence, \begin{eqnarray} \gamma_0 &=& \frac{2 \beta}{k} (\theta_0 \omega_1 - 1), \label{p1e2} \\ \gamma_1 &=& \frac{2 \beta}{k} \left(\theta_0 \omega_1^2 - \omega_1 \right), \label{p1e3} \\ \delta_0 &=& - \frac{2 \beta}{k} \theta_0, \label{p1e4} \\ \delta_1 &=& \frac{2 \beta}{k} (1 - \theta_0 \omega_1). \label{p1e5} \end{eqnarray} Using \eqref{rel2}, we see that $\mathsf{s}^2 = 1$. Using \eqref{p1e2}--\eqref{p1e5}, we see that the matrix representation of $\mathsf{s}^2$ is $$\left( \begin{array}{cc} \frac{4 \beta^2}{k^2} (1 - \theta_0 \omega_1) & 0 \\ 0 & \frac{4 \beta^2}{k^2} (1 - \theta_0 \omega_1) \end{array} \right).$$ Thus, \begin{equation} \omega_1 = \frac{1}{\theta_0} \left(1 - \frac{k^2}{4 \beta^2} \right).
\label{p1e6} \end{equation} Using \eqref{p1e1}--\eqref{p1e6}, we see that $V$ is isomorphic to $V_{0,1}^{\beta,\theta_0}.$ Furthermore, it is easy to see that for all $a, \beta \in \mathbf{k} \setminus \{0\},$ $V_{0,1}^{\beta, a}$ is a representation of $\mathbf{H}_0$; moreover, each eigenvector of $\mathsf{y}$ clearly generates $V_{0,1}^{\beta, a}$ as an $\mathbf{H}_0$-module, and thus $V_{0,1}^{\beta, a}$ is irreducible. Now the eigenvalues of $\mathsf{y}$ in $V_{0,1}^{\beta, a}$ are $\beta$ and $-\beta$, and $2 \beta(\mathsf{X} \mathsf{y} - \mathsf{y} \mathsf{X}^{-1}) - 2 \beta^2(\mathsf{X} + \mathsf{X}^{-1})$ acts on $V_{0,1}^{\beta, a}$ as $\frac{k^2 - 4 \beta^2}{a} \operatorname{\mathsf{Id}},$ while for $\beta = \frac{k}{2},$ $\mathsf{X} + \mathsf{X}^{-1}$ acts on $V_{0,1}^{\beta, a}$ as $a \operatorname{\mathsf{Id}}$. From this it follows that $V_{0,1}^{\beta, a}$ and $V_{0,1}^{\beta', a'}$ are isomorphic if and only if $\beta' = \beta, a' = a$ or $\beta' = -\beta, a' = \frac{4 \beta^2 - k^2}{4 a \beta^2}$. Now let $V$ be a two-dimensional representation of $\mathbf{H}_0$ in which $V[0]$ has dimension 1 and $V_{\text{gen}}[0]$ has dimension 2. Let $v_0, v_1 \in V$ be nonzero elements such that $\mathsf{y} v_0 = 0, \mathsf{y} v_1 = v_0$. Let the matrices representing $\mathsf{s}$ and $\mathsf{X}$ with respect to the basis $\{ v_0, v_1 \}$ be as follows: $$\mathsf{s} \mapsto \left( \begin{array}{cc} \gamma_0 & \delta_0 \\ \gamma_1 & \delta_1 \end{array} \right), \quad \mathsf{X} \mapsto \left( \begin{array}{cc} \theta_0 & \omega_0 \\ \theta_1 & \omega_1 \end{array} \right). $$ Now the matrix representation of $\mathsf{s} \mathsf{y} + \mathsf{y} \mathsf{s} + k$ is $$ \left( \begin{array}{cc} \gamma_1 + k & \gamma_0 + \delta_1 \\ 0 & \gamma_1 + k \end{array} \right).$$ \eqref{rel3} thus implies that $\gamma_1 = -k$ and $\gamma_0 = -\delta_1.$ Replacing $v_1$ by $v_1 + \mu v_0$ for a suitable $\mu$ (this preserves $\mathsf{y} v_1 = v_0$ and changes $\gamma_0$ by $k \mu$), we may assume that $\gamma_0 = 1$.
Next, we note that $\mathsf{s}^2 - 1$ acts on $V$ as $-\delta_0 k \operatorname{\mathsf{Id}};$ \eqref{rel2} thus implies that $\delta_0 = 0.$ We then see that the matrix representation of $\mathsf{X}\mathsf{y} - \mathsf{y}\mathsf{X} - k\mathsf{s}\mathsf{X}$ is $$ \left( \begin{array}{cc} -\theta_1 - k \theta_0 & \theta_0 - \omega_1 - k \omega_0 \\ k ( k \theta_0 + \theta_1 ) & \theta_1 + k^2 \omega_0 + k \omega_1 \end{array} \right).$$ \eqref{rel4} thus implies that $\theta_1 = -k \theta_0, \theta_0 = \omega_1 + k \omega_0.$ Finally, the matrix representation of $\mathsf{X} \mathsf{s} \mathsf{X} - \mathsf{s}$ is $$ \left( \begin{array}{cc} \theta_0^2 - 1 & 0 \\ - k ( \theta_0^2 - 1 ) & - \theta_0^2 + 1 \end{array} \right).$$ \eqref{rel4} thus implies that $\theta_0 = \pm 1$. Thus $V$ is isomorphic to $V_{0,2}^{\theta_0, \omega_0}.$ It is easy to see that $V_{0,2}^{a,b}$ is indeed a representation of $\mathbf{H}_0$; furthermore, each eigenvector of $\mathsf{y}$ clearly generates $V_{0,2}^{a,b}$ as an $\mathbf{H}_0$-module, and thus $V_{0,2}^{a,b}$ is irreducible. Now $\mathsf{X} + \mathsf{X}^{-1}$ acts on $V_{0,2}^{a,b}$ as $(2 a - kb) \operatorname{\mathsf{Id}},$ while $\mathsf{X}\mathsf{y} - \mathsf{y}\mathsf{X}^{-1}$ acts as $- ak \operatorname{\mathsf{Id}}.$ Therefore, $V_{0,2}^{a,b}$ and $V_{0,2}^{a', b'}$ are isomorphic if and only if $a' = a, b' = b.$ \end{proof} \begin{proof}[Proof of Proposition \ref{prop2}] Let $\beta \neq 0$, and let $V$ be a two-dimensional representation of $\mathbf{H}_0$ in which $V[\beta]$ and $V[-\beta]$ both have dimension 1. Let $v_0 \in V[\beta]$, $v_1 \in V[-\beta]$ be nonzero. Let the matrices representing $\mathsf{s}$ and $\mathsf{X}$ with respect to the basis $\{ v_0, v_1 \}$ be as follows: $$\mathsf{s} \mapsto \left( \begin{array}{cc} \gamma_0 & \delta_0 \\ \gamma_1 & \delta_1 \end{array} \right), \quad \mathsf{X} \mapsto \left( \begin{array}{cc} \theta_0 & \omega_0 \\ \theta_1 & \omega_1 \end{array} \right).
$$ First, we note that $\mathsf{X}$ and $\mathsf{y}$ commute, so they must have a common eigenvector; for the moment, let us assume that $\omega_0 = 0$. Now the central element $\mathsf{X} \mathsf{y} - \mathsf{y} \mathsf{X}^{-1}$ acts on $V$ as a scalar. The matrix representation of $\mathsf{X} \mathsf{y} - \mathsf{y} \mathsf{X}^{-1}$ is $$\left( \begin{array}{cc} \frac{\beta}{\det \mathsf{X}} (\theta_0 \det \mathsf{X} - \omega_1) & 0 \\ \frac{\beta \theta_1}{\det \mathsf{X}} (\det \mathsf{X} - 1) & -\frac{\beta}{\det \mathsf{X}} (\omega_1 \det \mathsf{X} - \theta_0) \end{array} \right).$$ Thus, $0 = \frac{\beta \theta_1}{\det \mathsf{X}} (\det \mathsf{X} - 1),$ which means that $\det \mathsf{X} = 1$. Hence, \begin{eqnarray} \omega_1 = \frac{1}{\theta_0}. \label{p2e1} \end{eqnarray} Using \eqref{rel4}, we see that $\mathsf{X} \mathsf{y} \mathsf{X}^{-1} - \mathsf{y} = 0.$ Using \eqref{p2e1}, we see that the matrix representation of $\mathsf{X} \mathsf{y} \mathsf{X}^{-1} - \mathsf{y}$ is $$\left( \begin{array}{cc} 0 & 0 \\ \frac{2 \beta}{\theta_0} \theta_1 & 0 \end{array} \right).$$ Hence, $\theta_1 = 0.$ (If we had assumed earlier that $\theta_1=0$, here we would get $\omega_0=0.$) Using \eqref{rel1}, we see that $\mathsf{y} \mathsf{s} + \mathsf{s} \mathsf{y} = 0.$ Using \eqref{p2e1}, we see that the matrix representation of $\mathsf{y} \mathsf{s} + \mathsf{s} \mathsf{y}$ is $$\left( \begin{array}{cc} 2 \beta \gamma_0 & 0 \\ 0 & - 2 \beta \delta_1 \end{array} \right).$$ Then we must have $\gamma_0 = \delta_1 = 0,$ and thus $\mathsf{s}^2$ acts on $V$ as $\delta_0 \gamma_1 \operatorname{\mathsf{Id}}.$ Using \eqref{rel2}, we see that $\mathsf{s}^2 = 1;$ thus $\delta_0 = \frac{1}{\gamma_1}.$ By scaling, we may assume that $\gamma_1 = 1,$ and we see that $V$ is isomorphic to $V_{0,3}^{\beta,\theta_0}.$ It is clear that $V_{0,3}^{\beta, a}$ is an irreducible representation of $\mathbf{H}_0$ and that $V_{0,3}^{\beta, a}$ and $V_{0,3}^{\beta', a'}$ are isomorphic if and only if $\beta' =
\beta, a' = a$ or $\beta' = -\beta, a' = \frac{1}{a}.$ Now let $V$ be a two-dimensional representation of $\mathbf{H}_0$ in which $\mathsf{y}$ acts as zero and $\mathsf{X}$ has eigenvalues $\lambda$ and $\lambda^{-1}$, with $\lambda \neq \pm 1$. Let $v_0 \in V_{\X}[\lambda]$, $v_1 \in V_{\X}[\lambda^{-1}]$ be nonzero. Let the matrix representing $\mathsf{s}$ with respect to the basis $\{ v_0, v_1 \}$ be as follows: $$\mathsf{s} \mapsto \left( \begin{array}{cc} \gamma_0 & \delta_0 \\ \gamma_1 & \delta_1 \end{array} \right). $$ Using \eqref{rel1}, we see that $\mathsf{X} \mathsf{s} - \mathsf{s} \mathsf{X}^{-1} = 0.$ But the matrix representation of $\mathsf{X} \mathsf{s} - \mathsf{s} \mathsf{X}^{-1}$ is $$ \left( \begin{array}{cc} \gamma_0 \frac{\lambda^2 - 1}{\lambda} & 0 \\ 0 & - \delta_1 \frac{\lambda^2 - 1}{\lambda} \end{array} \right). $$ Since $\lambda \neq \pm 1$, we see that $\gamma_0 = \delta_1 = 0.$ Thus $\mathsf{s}^2$ acts on $V$ as $\gamma_1 \delta_0 \operatorname{\mathsf{Id}}$. Using \eqref{rel2}, we see that $\mathsf{s}^2 = 1.$ Hence, $\delta_0 = \frac{1}{\gamma_1}.$ By scaling, we may assume that $\gamma_1 = \delta_0 = 1.$ Thus $V$ must be isomorphic to $V_{0,4}^{\lambda}.$ It is clear that $V_{0,4}^a$ is an irreducible representation of $\mathbf{H}_0$ and that $V_{0,4}^a$ and $V_{0,4}^{a'}$ are isomorphic if and only if $a' = a$ or $a' = \frac{1}{a}.$ Finally, the classification of one-dimensional representations of $\mathbf{H}_0$ is trivial. \end{proof} \section{Proof of Propositions \ref{prop3}, \ref{prop4} and \ref{prop5}} \begin{lemma}[PBW for $\mathbf{H}_1$, easy direction] The elements $$\mathsf{s}^i \mathsf{X}^j \mathsf{y}^l, \quad \quad j,l \in \mathbf{Z}, l \geq 0, i \in \{0,1\}$$ span $\mathbf{H}_1$ over $\mathbf{k}$. \label{1PBW} \end{lemma} \begin{proof} Similar to the proof of lemma \ref{PBW}. 
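Explicitly, the straightening moves take the same form as before, with the relations of $\mathbf{H}_1$ in place of those of $\mathbf{H}_0$: one now uses $$\mathsf{y} \mathsf{X} = \mathsf{X} \mathsf{y} + \mathsf{X} - k \mathsf{s} \mathsf{X}, \quad \quad \mathsf{X} \mathsf{s} = \mathsf{s} \mathsf{X}^{-1}, \quad \quad \mathsf{y} \mathsf{s} = -k - \mathsf{s} \mathsf{y}$$ repeatedly to move the $\mathsf{y}$'s to the right of the $\mathsf{X}$'s and the $\mathsf{s}$'s to the left; only the first of these relations differs from the case of $\mathbf{H}_0$.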
\end{proof} \begin{lemma} $\mathsf{X}^{p} + \mathsf{X}^{-p}$ and $(\mathsf{y}^p - \mathsf{y})^2$ belong to the center $\cent{\mathbf{H}_1}$ of $\mathbf{H}_1$. \label{1lemma1} \end{lemma} \begin{proof} First, let us show that $\mathsf{X}^{p} + \mathsf{X}^{-p} \in \cent{\mathbf{H}_1}$. We have \begin{eqnarray*} \mathsf{y} (\mathsf{X}^p + \mathsf{X}^{-p}) &=& \mathsf{y} (\mathsf{X} + \mathsf{X}^{-1})^p \\ &=& (\mathsf{X} \mathsf{y} + \mathsf{X} - k \mathsf{s} \mathsf{X} + \mathsf{X}^{-1} \mathsf{y} - \mathsf{X}^{-1} + k \mathsf{X}^{-1} \mathsf{s}) (\mathsf{X} + \mathsf{X}^{-1})^{p-1} \\ &=& (\mathsf{X} + \mathsf{X}^{-1}) \mathsf{y} (\mathsf{X} + \mathsf{X}^{-1})^{p-1} + (\mathsf{X} - \mathsf{X}^{-1}) (\mathsf{X} + \mathsf{X}^{-1})^{p-1} \\ &=& \cdots \\ &=& (\mathsf{X} + \mathsf{X}^{-1})^p \mathsf{y} + p (\mathsf{X} - \mathsf{X}^{-1}) (\mathsf{X} + \mathsf{X}^{-1})^{p-1} \\ &=& (\mathsf{X}^p + \mathsf{X}^{-p}) \mathsf{y}; \end{eqnarray*} thus $[\mathsf{y}, \mathsf{X}^p + \mathsf{X}^{-p}] = 0$. We also have $$\mathsf{s} \mathsf{X}^p = \mathsf{X}^{-p} \mathsf{s}, \quad \mathsf{s} \mathsf{X}^{-p} = \mathsf{X}^p \mathsf{s};$$ thus $[\mathsf{s}, \mathsf{X}^p + \mathsf{X}^{-p}] = 0$. It follows that $\mathsf{X}^p + \mathsf{X}^{-p} \in \cent{\mathbf{H}_1}$. Next, we show that $(\mathsf{y}^p - \mathsf{y})^2 \in \cent{\mathbf{H}_1}$.
We have \begin{eqnarray*} \mathsf{X} (\mathsf{y}+1)^2 &=& \mathsf{X} \mathsf{y}^2 + 2 \mathsf{X} \mathsf{y} + \mathsf{X} \\ &=& (\mathsf{y} \mathsf{X} - \mathsf{X} + k \mathsf{s} \mathsf{X}) \mathsf{y} + 2 \mathsf{X} \mathsf{y} + \mathsf{X} \\ &=& \mathsf{y} (\mathsf{y} \mathsf{X} - \mathsf{X} + k \mathsf{s} \mathsf{X}) - \mathsf{X} \mathsf{y} + k \mathsf{s} (\mathsf{y} \mathsf{X} - \mathsf{X} + k \mathsf{s} \mathsf{X}) + 2 \mathsf{X} \mathsf{y} + \mathsf{X} \\ &=& \mathsf{y}^2 \mathsf{X} + k (\mathsf{y} \mathsf{s} + \mathsf{s} \mathsf{y} + k) \mathsf{X} + (\mathsf{X} \mathsf{y} - \mathsf{y} \mathsf{X} + \mathsf{X} - k \mathsf{s} \mathsf{X}) \\ &=& \mathsf{y}^2 \mathsf{X}. \end{eqnarray*} So $\mathsf{X} (\mathsf{y}+1)^2 = \mathsf{y}^2 \mathsf{X},$ and thus $\mathsf{X} g(\mathsf{y}+1) = g(\mathsf{y}) \mathsf{X}$ for all even polynomials $g.$ In particular, $[\mathsf{X}, (\mathsf{y}^p - \mathsf{y})^2] = 0.$ Furthermore, we have $[\mathsf{s}, \mathsf{y}^2] = 0$ (for the same reason as in the case $t=0$). It follows that $(\mathsf{y}^p - \mathsf{y})^2 \in \cent{\mathbf{H}_1}.$ \end{proof} \begin{corollary} $\mathbf{H}_1$ is finitely generated as a module over its center. \label{1cor2} \end{corollary} \begin{proof} From Lemmas \ref{1PBW} and \ref{1lemma1}, we see that $\mathbf{H}_1$ is generated over its center by $$\mathsf{s}^i \mathsf{X}^j \mathsf{y}^l, \quad \quad i \in \{0,1\}, j \in \{-p+1, -p+2, \ldots, p-1, p \}, l \in \{0, 1, \ldots, 2p-1 \}.$$ \end{proof} \begin{corollary} Every irreducible $\mathbf{H}_1$-module is finite-dimensional over $\mathbf{k}$. \label{1cor3} \end{corollary} \begin{proof} Standard. \end{proof} Consider the following elements of $\mathbf{H}_1:$ $$\mathsf{A} \defeq \mathsf{s}\mathsf{X}, \quad \mathsf{B} \defeq \mathsf{s}\mathsf{y} + \frac{k}{2}.$$ These elements were introduced by Cherednik in \cite{Ch4} and are called {\em intertwiners}.
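The point of the intertwiners, which will be used repeatedly below, is that they move eigenvectors of $\mathsf{y}$ between eigenspaces: as Corollary \ref{1cor4} below makes precise, $\mathsf{A}$ sends an eigenvector with eigenvalue $\beta$ to one with eigenvalue $-\beta - 1$, while $\mathsf{B}$ sends it to one with eigenvalue $-\beta$, so that the composite $\mathsf{B} \mathsf{A}$ raises eigenvalues by $1$: $$V[\beta] \stackrel{\mathsf{A}}{\to} V[-\beta-1] \stackrel{\mathsf{B}}{\to} V[\beta+1].$$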
We note that $\mathsf{B}$ is also equal to $-\mathsf{y}\mathsf{s} - \frac{k}{2}$, since $\mathsf{s}\mathsf{y} = -\mathsf{y}\mathsf{s} - k$. \begin{lemma} $$\mathsf{A}^2 = 1, \quad \mathsf{B}^2 = - \mathsf{y}^2 + \frac{k^2}{4}.$$ \end{lemma} \begin{proof} We have $$\mathsf{A}^2 = \mathsf{s}\mathsf{X}\mathsf{s}\mathsf{X} = \mathsf{s} \mathsf{s} \mathsf{X}^{-1} \mathsf{X} = 1$$ and $$\mathsf{B}^2 = (\mathsf{s}\mathsf{y} + k) \mathsf{s}\mathsf{y} + \frac{k^2}{4} = -\mathsf{y}\mathsf{s} \mathsf{s}\mathsf{y} + \frac{k^2}{4} = -\mathsf{y}^2 + \frac{k^2}{4}.$$ \end{proof} \begin{lemma} $$\mathsf{A} \mathsf{y} = (-\mathsf{y} -1) \mathsf{A}, \quad \mathsf{B} \mathsf{y} = - \mathsf{y} \mathsf{B}.$$ \end{lemma} \begin{proof} We have $$\mathsf{A} \mathsf{y} = \mathsf{s} \mathsf{X} \mathsf{y} = \mathsf{s} (\mathsf{y} \mathsf{X} - \mathsf{X} + k \mathsf{s} \mathsf{X}) = - \mathsf{y} \mathsf{s} \mathsf{X} - k \mathsf{X} - \mathsf{s} \mathsf{X} + k \mathsf{X} = (-\mathsf{y} - 1) \mathsf{A},$$ and $$\mathsf{B} \mathsf{y} = - \mathsf{y} \mathsf{s} \mathsf{y} - \frac{k}{2} \mathsf{y} = \mathsf{y}^2 \mathsf{s} + \frac{k}{2} \mathsf{y} = - \mathsf{y} \mathsf{B}.$$ \end{proof} \begin{corollary} Let $V$ be a representation of $\mathbf{H}_1$. Then $$\mathsf{A} : V[\beta] \to V[-\beta-1]$$ is an isomorphism and $$\mathsf{B} : V[\beta] \to V[-\beta]$$ is a homomorphism. $\mathsf{B}$ is an isomorphism if and only if $\beta \neq \pm \frac{k}{2}.$ The same result holds for generalized eigenspaces. \label{1cor4} \end{corollary} \begin{lemma} Let $V$ be an irreducible representation of $\mathbf{H}_1$ on which the central element $(\mathsf{y}^p - \mathsf{y})^2$ acts as $b \neq 0$. Then $$V \supseteq \bigoplus_{c \in \mathbf{F}_p} \left( V[\mu + c] \oplus V[-\mu+c] \right),$$ where $\mu$ is a root of the equation $(\mu^p - \mu)^2 = b.$ Each eigenspace has dimension $1$.
\label{1lem6} \end{lemma} \begin{proof} Let $\mu$ be an eigenvalue of $\mathsf{y}$, and let $v \in V[\mu].$ Note that $b v = (\mathsf{y}^p - \mathsf{y})^2 v = (\mu^p - \mu)^2 v,$ so we have $(\mu^p - \mu)^2 = b.$ By corollary \ref{1cor4}, we have the following homomorphisms: \begin{equation} V[\mu] \stackrel{\mathsf{A}}{\to} V[-\mu-1] \stackrel{\mathsf{B}}{\to} V[\mu+1] \stackrel{\mathsf{A}}{\to} V[-\mu-2] \stackrel{\mathsf{B}}{\to} V[\mu+2] \stackrel{\mathsf{A}}{\to} \cdots \stackrel{\mathsf{A}}{\to} V[-\mu-p+1] \stackrel{\mathsf{B}}{\to} V[\mu+p-1] \stackrel{\mathsf{A}}{\to} V[-\mu]. \label{eqn91} \end{equation} If none of the eigenvalues in \eqref{eqn91} is equal to $\frac{k}{2},$ then all of the homomorphisms in \eqref{eqn91} are isomorphisms (by corollary \ref{1cor4}). Otherwise, we may assume without loss of generality that $\mu = \frac{k}{2},$ and once again all of the homomorphisms in \eqref{eqn91} are isomorphisms. Thus, $\operatorname{\mathsf{dim}} V \geq 2p \operatorname{\mathsf{dim}} V[\mu]$. Now the dimension of the algebra $\mathbf{H}_1 / (\mathsf{X}^p + \mathsf{X}^{-p} = a, (\mathsf{y}^p - \mathsf{y})^2 = b)$ acting irreducibly on $V$ is at most $8 p^2$ (see the proof of corollary \ref{1cor2}). Hence, $4 p^2 (\operatorname{\mathsf{dim}} V[\mu])^2 \leq (\operatorname{\mathsf{dim}} V)^2 \leq 8 p^2,$ which implies that $\operatorname{\mathsf{dim}} V[\mu] = 1.$ The result follows. \end{proof} \begin{lemma} Suppose $k \notin \mathbf{F}_p.$ Let $V$ be an irreducible representation of $\mathbf{H}_1$ on which the central element $(\mathsf{y}^p - \mathsf{y})^2$ acts as $0$. Then each generalized eigenspace $V_{\text{gen}}[c], c \in \mathbf{F}_p$ has dimension $2$. 
\label{1lem8} \end{lemma} \begin{proof} Let $\mu$ be an eigenvalue of $\mathsf{y}$, and let $v \in V[\mu].$ Note that $0 = (\mathsf{y}^p - \mathsf{y})^2 v = (\mu^p - \mu)^2 v,$ so we have $\mu^p - \mu = 0.$ Hence, $\mu \in \mathbf{F}_p.$ By corollary \ref{1cor4}, we have the following homomorphisms: \begin{equation} V_{\text{gen}}[\mu] \stackrel{\mathsf{B} \mathsf{A}}{\to} V_{\text{gen}}[\mu+1] \stackrel{\mathsf{B} \mathsf{A}}{\to} V_{\text{gen}}[\mu+2] \stackrel{\mathsf{B} \mathsf{A}}{\to} \cdots \stackrel{\mathsf{B} \mathsf{A}}{\to} V_{\text{gen}}[\mu-1] \stackrel{\mathsf{B} \mathsf{A}}{\to} V_{\text{gen}}[\mu]. \label{eqn92} \end{equation} Since $k \notin \mathbf{F}_p,$ none of the eigenvalues in \eqref{eqn92} is equal to $\pm \frac{k}{2}.$ By corollary \ref{1cor4}, all of the homomorphisms in \eqref{eqn92} are isomorphisms, and the eigenvalues of $\mathsf{y}$ are precisely the elements of $\mathbf{F}_p.$ Thus, $\operatorname{\mathsf{dim}} V = p \operatorname{\mathsf{dim}} V_{\text{gen}}[0]$. Now the dimension of the algebra $\mathbf{H}_1 / (\mathsf{X}^p + \mathsf{X}^{-p} = a, (\mathsf{y}^p - \mathsf{y})^2 = 0)$ acting irreducibly on $V$ is at most $8 p^2$ (see the proof of corollary \ref{1cor2}). Hence, $p^2 (\operatorname{\mathsf{dim}} V_{\text{gen}}[0])^2 = (\operatorname{\mathsf{dim}} V)^2 \leq 8 p^2,$ which implies that $\operatorname{\mathsf{dim}} V_{\text{gen}}[0] \leq 2.$ Now let $v \in V[0];$ then $$\mathsf{y} \mathsf{s} v = - \mathsf{s} \mathsf{y} v - k v = - k v.$$ Since $k \neq 0,$ we conclude that $\mathsf{s} v \in V_{\text{gen}}[0] \setminus V[0].$ Therefore, $\operatorname{\mathsf{dim}} V_{\text{gen}}[0] = 2,$ and the result follows. \end{proof} \begin{proof}[Proof of Proposition \ref{prop3}] It is easy to show that if $\mu$ and $d$ satisfy the conditions in the statement of Proposition \ref{prop3}, then $V_{1,1}^{\mu,d}$ is a representation of $\mathbf{H}_1$. 
Furthermore, if $v \in V_{1,1}^{\mu,d}$ is an eigenvector of $\mathsf{y}$, we see that we can generate all of $V_{1,1}^{\mu,d}$ by applying $\mathsf{A}$ and $\mathsf{B}$. This implies that $V_{1,1}^{\mu,d}$ is actually an irreducible representation of $\mathbf{H}_1$. The same can be said of $V_{1,2}^{\theta}.$ Let $V$ be an irreducible representation of $\mathbf{H}_1$, and suppose that $(\mathsf{y}^p - \mathsf{y})^2$ acts on $V$ as $b \neq 0$. For the moment, let us assume that $\pm \frac{k}{2}$ are not eigenvalues of $\mathsf{y}$. Let $v_{\mu}$ be an eigenvector of $\mathsf{y}$ with eigenvalue $\mu$, and let \begin{eqnarray*} v_{\mu+j} &=& (\mathsf{B} \mathsf{A})^j v_{\mu}, \quad j = 1, 2, \ldots, p-1 \\ v_{-\mu+j} &=& \mathsf{A} (\mathsf{B} \mathsf{A})^{j-1} v_{\mu}, \quad j = 1, 2, \ldots, p. \end{eqnarray*} Note that $\mathsf{B} v_{-\mu} \in V[\mu]$; using lemma \ref{1lem6}, we see that $\mathsf{B} v_{-\mu} = d v_{\mu},$ where $d \in \mathbf{k}$ is nonzero. From this we can deduce that \eqref{v11first}--\eqref{v11last} are satisfied. Thus $V_{1,1}^{\mu,d} \subset V.$ By irreducibility of $V$, it follows that $V = V_{1,1}^{\mu,d}.$ Now we note that $(\mathsf{B} \mathsf{A})^p$ acts as $d \operatorname{\mathsf{Id}}$ on $V[\mu], V[\mu+1], \ldots, V[\mu+p-1]$ and as $\frac{1}{d} \prod_{c \in \mathbf{F}_p} \left( \frac{k^2}{4} - (\mu+c)^2 \right)$ on $V[-\mu], V[-\mu+1], \ldots, V[-\mu+p-1].$ From this we can deduce that $V_{1,1}^{\mu,d}$ and $V_{1,1}^{\mu',d'}$ are isomorphic if and only if ($\mu' - \mu \in \mathbf{F}_p$ and $d' = d$) or ($\mu' + \mu \in \mathbf{F}_p$ and $dd' = \prod_{c \in \mathbf{F}_p} \left( \frac{k^2}{4} - (\mu+c)^2 \right)$). 
Now, if $\pm \frac{k}{2}$ are eigenvalues of $\mathsf{y}$, then $\mathsf{B}^2$ acts as zero on $V[\pm \frac{k}{2}]$, so either $\mathsf{B}$ acts as zero on $V[\frac{k}{2}]$, in which case we can use the above argument with $\mu = \frac{k}{2}$, or $\mathsf{B}$ acts as zero on $V[-\frac{k}{2}]$, in which case we can use the above argument with $\mu = -\frac{k}{2}$. Second, suppose that $(\mathsf{y}^p - \mathsf{y})^2$ acts on $V$ as $0$. Let $v_0$ be an eigenvector of $\mathsf{y}$ with eigenvalue $0$, and let \begin{eqnarray*} v_j &=& (\mathsf{B} \mathsf{A})^j v_0, \quad j = 1,2, \ldots, \frac{p-1}{2}; \\ v_{-j} &=& - \mathsf{A} (\mathsf{B} \mathsf{A})^{j-1} v_0, \quad j = 1, 2, \ldots, \frac{p-1}{2}; \\ w_0 &=& - \frac{1}{k} \mathsf{s} v_0; \\ w_j &=& (\mathsf{B} \mathsf{A})^j w_0, \quad j = 1, 2, \ldots, \frac{p-1}{2}; \\ w_{-j} &=& \mathsf{A} (\mathsf{B} \mathsf{A})^{j-1} w_0, \quad j = 1, 2, \ldots, \frac{p-1}{2}. \end{eqnarray*} Since $\mathsf{A}$ maps eigenspaces to eigenspaces and generalized eigenspaces to generalized eigenspaces, and since $\mathsf{A}^2=1$, we can use lemma \ref{1lem8} to conclude that $$\mathsf{A} v_{\frac{p-1}{2}} = \theta v_{\frac{p-1}{2}}, \quad \mathsf{A} w_{\frac{p-1}{2}} = \omega w_{\frac{p-1}{2}}, \quad \theta, \omega = \pm 1.$$ From this we can deduce that \eqref{v12first}--\eqref{v12-2ndlast} are satisfied, as well as \begin{equation} \mathsf{X} w_{\frac{p-1}{2}} = \omega \mathsf{s} w_{\frac{p-1}{2}}. \label{expr88} \end{equation} Now we know from \eqref{rel4} that \begin{equation} (\mathsf{X} \mathsf{y} - \mathsf{y} \mathsf{X} + \mathsf{X} - k \mathsf{s} \mathsf{X}) w_{\frac{p-1}{2}} \label{expr99} \end{equation} must be zero. 
Using \eqref{v12first}--\eqref{v12-2ndlast} and \eqref{expr88}, we see that the coefficients of $v_{\frac{p-1}{2}}$ in $\mathsf{X} \mathsf{y} w_{\frac{p-1}{2}}, - \mathsf{y} \mathsf{X} w_{\frac{p-1}{2}}, \mathsf{X} w_{\frac{p-1}{2}}$ and $- k \mathsf{s} \mathsf{X} w_{\frac{p-1}{2}}$ are respectively $k (\theta - \omega), 0, 2 k \omega$ and $0$. From \eqref{expr99}, we get $k (\theta + \omega) = 0,$ which implies $\omega = -\theta$. Hence \eqref{v12last} is also satisfied. Thus $V_{1,2}^{\theta} \subset V.$ By irreducibility of $V$, it follows that $V = V_{1,2}^{\theta}.$ Since $\mathsf{A}$ acts on $V_{1,2}^{\theta}[\frac{p-1}{2}]$ through multiplication by $\theta$, it is clear that $V_{1,2}^{\theta}$ and $V_{1,2}^{\theta'}$ are isomorphic if and only if $\theta=\theta'$. \end{proof} \begin{lemma} Let $k$ be an even integer with $2 \leq k \leq p-1$. Let $V \neq 0$ be an irreducible representation of $\mathbf{H}_1$ on which $(\mathsf{y}^p - \mathsf{y})^2$ acts as zero. Then $V \left[\frac{k}{2} \right] \neq 0$. \end{lemma} \begin{proof} From corollary \ref{1cor4}, we have isomorphisms \begin{equation} V[0] \stackrel{\mathsf{A}}{\to} V[-1] \stackrel{\mathsf{B}}{\to} V[1] \stackrel{\mathsf{A}}{\to} V[-2] \stackrel{\mathsf{B}}{\to} V[2] \stackrel{\mathsf{A}}{\to} \cdots \stackrel{\mathsf{A}}{\to} V \left[-\frac{k}{2} \right] \label{isom66} \end{equation} and $$V \left[\frac{k}{2} \right] \stackrel{\mathsf{A}}{\to} V \left[-\frac{k}{2}-1 \right] \stackrel{\mathsf{B}}{\to} V \left[ \frac{k}{2} +1 \right] \stackrel{\mathsf{A}}{\to} \cdots \stackrel{\mathsf{A}}{\to} V \left[ \frac{p-1}{2} \right].$$ Let us assume that $V \left[\frac{k}{2} \right] = 0$. Since $V \neq 0$, we must have $V[0] \neq 0$. Let $u \in V[0]$ be nonzero; then $\mathsf{y} \mathsf{s} u = - \mathsf{s} \mathsf{y} u - k u = - k u.$ This means $V_{\text{gen}}[0] \setminus V[0]$ is nonempty (indeed $\mathsf{y}^2 \mathsf{s} u = 0$ while $\mathsf{y} \mathsf{s} u \neq 0$, so $\mathsf{s} u \in V_{\text{gen}}[0] \setminus V[0]$). 
From the isomorphisms \eqref{isom66}, we know that there exist $v, w \in V$ such that $\mathsf{y} v = - \frac{k}{2} v, \mathsf{y} w = - \frac{k}{2} w + v$. Since $V \left[\frac{k}{2} \right] = 0,$ we see that $\mathsf{B} v = \mathsf{B} w = 0$. Thus, $$- \frac{k}{2} v = \mathsf{s} \mathsf{y} v = - \frac{k}{2} \mathsf{s} v \Longrightarrow \mathsf{s} v = v;$$ $$- \frac{k}{2} w = \mathsf{s} \mathsf{y} w = - \frac{k}{2} \mathsf{s} w + \mathsf{s} v = - \frac{k}{2} \mathsf{s} w + v \Longrightarrow \mathsf{s} w = \frac{2}{k} v + w.$$ Hence $$w = \mathsf{s}^2 w = \frac{2}{k} \mathsf{s} v + \mathsf{s} w = \frac{4}{k} v + w,$$ which is impossible. Therefore, $V \left[\frac{k}{2} \right] \neq 0$. \end{proof} \begin{lemma} Let $V$ be an irreducible representation of $\mathbf{H}_1$, and suppose that $\mathsf{B} : V[\frac{k}{2}] \to V[-\frac{k}{2}]$ is zero but $V[\frac{k}{2}] \neq 0$. Then $$V = V[\frac{k}{2}] \oplus V[\frac{k}{2}+1] \oplus \cdots \oplus V[-\frac{k}{2}-1].$$ \label{lemma50} \end{lemma} \begin{proof} Let $$W = V[\frac{k}{2}] \oplus V[\frac{k}{2}+1] \oplus \cdots \oplus V[-\frac{k}{2}-1].$$ Since $V$ is irreducible, it is enough to show that $W$ is a subrepresentation. Since $\mathsf{B}$ acts as zero on $V[\frac{k}{2}],$ we see that $W$ is closed under the action of $\mathsf{A}$ and $\mathsf{B}$. Clearly, $\mathsf{y} W \subset W$. For $v \in V[\beta], \beta \neq 0$, we have $$\mathsf{B} v = \left( \mathsf{s} \mathsf{y} + \frac{k}{2} \right) v = \beta \mathsf{s} v + \frac{k}{2} v; \mbox{ so } \mathsf{s} v = \beta^{-1} \left( \mathsf{B} - \frac{k}{2} \right) v.$$ So $\mathsf{s} W \subset W$. Finally, $\mathsf{X} = \mathsf{s} \mathsf{A},$ so $\mathsf{X} W \subset W$, and the proof is complete. \end{proof} \begin{proof}[Proof of Proposition \ref{prop4}] As in the proof of Proposition \ref{prop3}, we see that $V_{1,1}^{\mu,d}$ is an irreducible representation of $\mathbf{H}_1$ whenever $\mu$ and $d$ satisfy the conditions in the statement of Proposition \ref{prop4}. 
Similarly, for $\theta = \pm 1,$ $V_{1,3}^{\theta}$ is an irreducible representation of $\mathbf{H}_1$ and for $c \in \mathbf{k}$, $V_{1,4}^c$ and $V_{1,5}^c$ are irreducible representations of $\mathbf{H}_1$. Now, let $V$ be an irreducible representation of $\mathbf{H}_1$, and suppose that $(\mathsf{y}^p - \mathsf{y})^2$ acts on $V$ as $b$. Here $k \in \mathbf{F}_p$, so if $b \neq 0$, then $\frac{k}{2}$ is not a root of $f(y) = (y^p - y)^2 - b$, so the argument in the proof of Proposition \ref{prop3} applies for $V_{1,1}^{\mu, d}$. We will now assume that $b = 0$. Now suppose $\mathsf{B}$ acts as zero on $V[\frac{k}{2}].$ Let $\overline{\Hh}_1$ be the subalgebra of $\mathbf{H}_1$ generated by $\mathsf{A}$ and $\mathsf{B}$. From the proof of Lemma \ref{lemma50}, we see that $V = \overline{\Hh}_1 v$ for any eigenvector $v$ of $\mathsf{y}$. Since $\mathsf{A}^2 = 1$ and $\mathsf{B}^2$ acts as a scalar on each eigenspace, it follows that $V$ is spanned by $$v, \mathsf{A} v, \mathsf{B} \mathsf{A} v, \mathsf{A} \mathsf{B} \mathsf{A} v, \ldots, \mathsf{B} v, \mathsf{A} \mathsf{B} v, \mathsf{B} \mathsf{A} \mathsf{B} v, \ldots$$ for each eigenvector $v$ of $\mathsf{y}$. If the eigenvalue of $v$ is $\frac{k}{2},$ we conclude, from corollary \ref{1cor4} and from the fact that $\mathsf{B}$ acts as zero on $V[\frac{k}{2}]$, that \begin{equation} V[\frac{k}{2}] \mbox{ is spanned by } v, \mathsf{A} (\mathsf{B} \mathsf{A})^{p-k-1} v. \label{eqn100} \end{equation} Let $\overline{\overline{\Hh}}_1$ be the subalgebra of $\overline{\Hh}_1$ generated by $\mathsf{A} (\mathsf{B} \mathsf{A})^{p-k-1}$. Clearly, $\overline{\overline{\Hh}}_1$ is commutative, and since \eqref{eqn100} holds for all nonzero $v \in V[\frac{k}{2}],$ we conclude that $V[\frac{k}{2}]$ is an irreducible representation of $\overline{\overline{\Hh}}_1$. 
By Schur's Lemma, $\mathsf{A} (\mathsf{B} \mathsf{A})^{p-k-1}$ acts on $V[\frac{k}{2}]$ as a scalar, and thus $\operatorname{\mathsf{dim}} V[\frac{k}{2}] = 1.$ We then let $v_{\frac{k}{2}} \in V[\frac{k}{2}]$ be nonzero, and let \begin{eqnarray*} v_{-\frac{k}{2}-j} &=& \mathsf{A} (\mathsf{B} \mathsf{A})^{j-1} v_{\frac{k}{2}}, \quad j = 1, 2, \ldots, \frac{p-1}{2} - \frac{k}{2}; \\ v_{\frac{k}{2}+j} &=& (\mathsf{B} \mathsf{A})^j v_{\frac{k}{2}}, \quad j = 1, 2, \ldots, \frac{p-1}{2} - \frac{k}{2}. \end{eqnarray*} Now $\mathsf{A} v_{\frac{p-1}{2}} = \theta v_{\frac{p-1}{2}}$ for some $\theta,$ and $\mathsf{A}^2 = 1$ implies $\theta = \pm 1.$ From the above information, we can deduce that \eqref{v13first}--\eqref{v13last} are satisfied, and thus $V_{1,3}^{\theta} \subset V$. By irreducibility of $V$, we conclude that $V = V_{1,3}^{\theta}$. Since $\mathsf{A}$ acts on $V[\frac{p-1}{2}]$ through multiplication by $\theta$, it follows that $V_{1,3}^{\theta}$ and $V_{1,3}^{\theta'}$ are isomorphic if and only if $\theta = \theta'$. Now suppose $\mathsf{B}$ does not act as zero on $V[\frac{k}{2}].$ Let $v_0$ be an eigenvector of $\mathsf{y}$ with eigenvalue $0$, and let $w_0 = -\frac{1}{k} \mathsf{s} v_0.$ Then $\mathsf{y} w_0 = v_0.$ For $j = 0, \ldots, \frac{k}{2} - 1,$ let \begin{eqnarray*} v_j &=& (\mathsf{B} \mathsf{A})^j v_0 \\ w_j &=& (\mathsf{B} \mathsf{A})^j w_0 \\ v_{-j-1} &=& - \mathsf{A} (\mathsf{B} \mathsf{A})^{j} v_0 \\ w_{-j-1} &=& \mathsf{A} (\mathsf{B} \mathsf{A})^{j} w_0. 
\end{eqnarray*} It is easy to check that for each $j$, $v_j$ and $v_{-j-1}$ are eigenvectors of $\mathsf{y}$ with eigenvalue $j, -j-1$ respectively, and $\mathsf{y} w_j = j w_j + v_j, \mathsf{y} w_{-j-1} = (-j-1) w_{-j-1} + v_{-j-1}.$ Now $\mathsf{B} v_{-\frac{k}{2}} = 0,$ and $$\mathsf{y} \mathsf{B} w_{-\frac{k}{2}} = - \mathsf{B} \mathsf{y} w_{-\frac{k}{2}} = \frac{k}{2} \mathsf{B} w_{-\frac{k}{2}} - \mathsf{B} v_{-\frac{k}{2}} = \frac{k}{2} \mathsf{B} w_{-\frac{k}{2}}.$$ Thus, $\mathsf{B} w_{-\frac{k}{2}} \in V[\frac{k}{2}].$ Let us write $v_{\frac{k}{2}} = \mathsf{B} w_{-\frac{k}{2}};$ then, \begin{eqnarray*} \mathsf{s} w_{-\frac{k}{2}} &=& - \frac{2}{k} v_{\frac{k}{2}} + \frac{2}{k} v_{-\frac{k}{2}} + w_{-\frac{k}{2}} \\ \mbox{and } w_{-\frac{k}{2}} = \mathsf{s}^2 w_{-\frac{k}{2}} &=& - \frac{2}{k} \mathsf{s} v_{\frac{k}{2}} + \frac{2}{k} v_{-\frac{k}{2}} + \mathsf{s} w_{-\frac{k}{2}} \\ &=& -\frac{2}{k} \mathsf{s} v_{\frac{k}{2}} + \frac{4}{k} v_{-\frac{k}{2}} - \frac{2}{k} v_{\frac{k}{2}} + w_{-\frac{k}{2}} \\ \Longrightarrow 0 &=& - \mathsf{s} v_{\frac{k}{2}} + 2 v_{-\frac{k}{2}} - v_{\frac{k}{2}}. \end{eqnarray*} In particular, this implies that $v_{\frac{k}{2}} \neq 0.$ For $j = \frac{k}{2} + 1, \ldots, \frac{p-1}{2},$ let \begin{eqnarray*} v_{-j} &=& - \mathsf{A} (\mathsf{B} \mathsf{A})^{j - \frac{k}{2} - 1} v_{\frac{k}{2}} \\ v_{j} &=& (\mathsf{B} \mathsf{A})^{j - \frac{k}{2}} v_{\frac{k}{2}}. \end{eqnarray*} Now there are two cases: First, suppose that $\mathsf{A} v_{\frac{p-1}{2}} = c v_{\frac{p-1}{2}}$ for some $c \in \mathbf{k}.$ Since $\mathsf{A}^2 = 1,$ we see that $c = \pm 1$. Now let $$W = \operatorname{\mathsf{span}}_{\mathbf{k}} \{ v_0, v_1, \ldots, v_{p-1}, w_{-\frac{k}{2}}, w_{-\frac{k}{2} + 1}, \ldots, w_{\frac{k}{2} - 1} \}.$$ From the above information, we can deduce that \eqref{v14first}--\eqref{v14last} are satisfied. Therefore, $V_{1,4}^{c} \subset V.$ Since $V$ is irreducible, we conclude that $V = V_{1,4}^{c}$. 
Since $\mathsf{A}$ acts as $c \operatorname{\mathsf{Id}}$ on $V_{1,4}^c \left[ \frac{p-1}{2} \right]$, we see that $V_{1,4}^{c}$ and $V_{1,4}^{c'}$ are isomorphic if and only if $c = c'$. Second, suppose that $\mathsf{A} v_{\frac{p-1}{2}}$ is not a scalar multiple of $v_{\frac{p-1}{2}}.$ In this case, we may write $\mathsf{A} v_{\frac{p-1}{2}} = u_{\frac{p-1}{2}},$ and then \begin{eqnarray*} u_{\frac{p-1}{2}+j} &=& - \mathsf{B} (\mathsf{A} \mathsf{B})^{j-1} u_{\frac{p-1}{2}}, \quad j = 1, \ldots, \frac{p-1-k}{2} \\ u_{\frac{p-1}{2}-j} &=& (\mathsf{A} \mathsf{B})^j u_{\frac{p-1}{2}} \quad j = 1, \ldots, \frac{p-1-k}{2}. \end{eqnarray*} Since $\{ v_{\frac{p-1}{2}}, u_{\frac{p-1}{2}} \}$ is linearly independent, it follows that $\{ v_j, u_j \}$ is linearly independent for all $j = \frac{k}{2}, \frac{k}{2} + 1, \ldots, -\frac{k}{2} -1.$ Now let $$W = \operatorname{\mathsf{span}}_{\mathbf{k}} \{v_0, v_1, \ldots, v_{p-1}, w_{-\frac{k}{2}}, w_{-\frac{k}{2}+1}, \ldots, w_{\frac{k}{2} - 1}, u_{\frac{k}{2}}, u_{\frac{k}{2}+1}, \ldots, u_{-\frac{k}{2}-1} \}.$$ Now, we know from lemma \ref{1lem8} that $\operatorname{\mathsf{dim}} V_{\text{gen}}[-\frac{k}{2}] = 2,$ and this forces $\mathsf{B} u_{\frac{k}{2}} = c v_{-\frac{k}{2}}$ for some $c \in \mathbf{k}$. From the above information, we can deduce that \eqref{v15first}--\eqref{v15last} are satisfied. Therefore, $V_{1,5}^{c} \subset V.$ Since $V$ is irreducible, we conclude that $V = V_{1,5}^{c}$. Since $\mathsf{B} (\mathsf{A} \mathsf{B})^{p-k}$ acts as $c \operatorname{\mathsf{Id}}$ on $V_{1,5}^c \left[ -\frac{k}{2} \right]$, we see that $V_{1,5}^{c}$ and $V_{1,5}^{c'}$ are isomorphic if and only if $c = c'$. \end{proof} \begin{lemma} Suppose $k = 0.$ Let $V$ be an irreducible representation of $\mathbf{H}_1$ on which the central element $(\mathsf{y}^p - \mathsf{y})^2$ acts as $0$. Then each eigenspace $V[c], c \in \mathbf{F}_p$ has dimension at most $2$. 
\label{1lem77} \end{lemma} \begin{proof} Let $\mu$ be an eigenvalue of $\mathsf{y}$, and let $v \in V[\mu].$ Note that $0 = (\mathsf{y}^p - \mathsf{y})^2 v = (\mu^p - \mu)^2 v,$ so we have $\mu^p - \mu = 0.$ Hence, $\mu \in \mathbf{F}_p.$ By corollary \ref{1cor4}, we have the following homomorphisms: \begin{equation} V_{\text{gen}}[0] \stackrel{\mathsf{A}}{\to} V_{\text{gen}}[-1] \stackrel{\mathsf{B}}{\to} V_{\text{gen}}[1] \stackrel{\mathsf{A}}{\to} \cdots \stackrel{\mathsf{A}}{\to} V_{\text{gen}} \left[ \frac{p+1}{2} \right] \stackrel{\mathsf{B}}{\to} V_{\text{gen}} \left[ \frac{p-1}{2} \right]. \label{eqn77} \end{equation} Since $k = 0$, corollary \ref{1cor4} implies that all of the homomorphisms in \eqref{eqn77} are isomorphisms, and the eigenvalues of $\mathsf{y}$ are precisely the elements of $\mathbf{F}_p.$ Thus, $\operatorname{\mathsf{dim}} V \geq p \operatorname{\mathsf{dim}} V[0]$. Now the dimension of the algebra $\mathbf{H}_1 / (\mathsf{X}^p + \mathsf{X}^{-p} = a, (\mathsf{y}^p - \mathsf{y})^2 = 0)$ acting irreducibly on $V$ is at most $8 p^2$ (see the proof of corollary \ref{1cor2}). Hence, $p^2 (\operatorname{\mathsf{dim}} V[0])^2 \leq (\operatorname{\mathsf{dim}} V)^2 \leq 8 p^2,$ which implies that $\operatorname{\mathsf{dim}} V[0] \leq 2.$ \end{proof} \begin{proof}[Proof of Proposition \ref{prop5}] As in the proof of Proposition \ref{prop3}, we see that $V_{1,1}^{\mu,d}$ is an irreducible representation of $\mathbf{H}_1$ whenever $\mu$ and $d$ satisfy the conditions in the statement of Proposition \ref{prop5}. Similarly, for $c, \theta = \pm 1,$ $V_{1,6}^{c,\theta}$ is an irreducible representation of $\mathbf{H}_1$ and for $c = \pm 1, a \in \mathbf{k}$, $V_{1,7}^{c,a}$ is an irreducible representation of $\mathbf{H}_1$. Now, let $V$ be an irreducible representation of $\mathbf{H}_1$, and suppose that $(\mathsf{y}^p - \mathsf{y})^2$ acts on $V$ as $b$. 
If $b \neq 0$, then $k$ is not a root of $f(y) = (y^p - y)^2 - b$, so the argument in the proof of Proposition \ref{prop3} applies for $V_{1,1}^{\mu, d}$. We will now assume that $b = 0$. First we note that $\mathsf{s} \mathsf{y} = - \mathsf{y} \mathsf{s}$, which means that $\mathsf{s} V[0] \subset V[0].$ Now let $v_0 \in V[0]$ be an eigenvector of $\mathsf{s}$. Since $\mathsf{s}^2 = 1,$ we have $\mathsf{s} v_0 = c v_0,$ where $c = \pm 1.$ Let \begin{eqnarray*} v_{-j} &=& \mathsf{A} (\mathsf{B} \mathsf{A})^{j-1} v_0, \quad j = 1, 2, \ldots, \frac{p-1}{2}; \\ v_j &=& (\mathsf{B} \mathsf{A})^j v_0, \quad j = 1, 2, \ldots, \frac{p-1}{2}. \end{eqnarray*} Now there are two cases: First, suppose that $\mathsf{A} v_{\frac{p-1}{2}} = \theta v_{\frac{p-1}{2}}$ for some $\theta \in \mathbf{k}.$ Since $\mathsf{A}^2 = 1,$ we see that $\theta = \pm 1$. Now let $$W = \operatorname{\mathsf{span}}_{\mathbf{k}} \{ v_0, v_1, \ldots, v_{p-1} \}.$$ From the above information, we can deduce that \eqref{v16first}--\eqref{v16last} are satisfied. Therefore, $V_{1,6}^{c,\theta} \subset V.$ Since $V$ is irreducible, we conclude that $V = V_{1,6}^{c,\theta}$. Since $\mathsf{s}$ acts on $V[0]$ as $c \operatorname{\mathsf{Id}}$ and $\mathsf{A}$ acts on $V \left[\frac{p-1}{2} \right]$ as $\theta \operatorname{\mathsf{Id}}$, we see that $V_{1,6}^{c,\theta}$ and $V_{1,6}^{c',\theta'}$ are isomorphic if and only if $c = c'$ and $\theta = \theta'$. Second, suppose that $\mathsf{A} v_{\frac{p-1}{2}}$ is not a scalar multiple of $v_{\frac{p-1}{2}}.$ In this case, we may write $\mathsf{A} v_{\frac{p-1}{2}} = u_{\frac{p-1}{2}},$ and then \begin{eqnarray*} u_{\frac{p-1}{2}+j} &=& \mathsf{B} (\mathsf{A} \mathsf{B})^{j-1} u_{\frac{p-1}{2}}, \quad j = 1, \ldots, \frac{p-3}{2} \\ u_{\frac{p-1}{2}-j} &=& (\mathsf{A} \mathsf{B})^j u_{\frac{p-1}{2}} \quad j = 1, \ldots, \frac{p-1}{2}. 
\end{eqnarray*} Since $\{ v_{\frac{p-1}{2}}, u_{\frac{p-1}{2}} \}$ is linearly independent, it follows that $\{ v_j, u_j \}$ is linearly independent for all $j$. Hence, by lemma \ref{1lem77}, $V[0] = \operatorname{\mathsf{span}}_{\mathbf{k}} \{ v_0, u_0 \},$ and this forces $\mathsf{s} u_0 = a v_0 + r u_0$ for some $a,r \in \mathbf{k}$. Now $$u_0 = \mathsf{s}^2 u_0 = a \mathsf{s} v_0 + r \mathsf{s} u_0 = a c v_0 + a r v_0 + r^2 u_0.$$ Comparing coefficients (using the linear independence of $v_0$ and $u_0$) gives $r^2 = 1$ and $a(c + r) = 0$; thus $r = \pm 1.$ Now if $c = r,$ then $a = 0$. But then $v_0 - u_0$ is an eigenvector of $\mathsf{s}$ and $\mathsf{A}$ acts on $v_{\frac{p-1}{2}} - u_{\frac{p-1}{2}}$ as a scalar, and so the first case shows us that $V$ has a $p$-dimensional subrepresentation, contradicting the irreducibility of $V$. So we may assume that $c = -r$; $a$ is then arbitrary. Now we can see that the eigenvalues of $\mathsf{s}$ acting on $V[0]$ are $\pm c$; that is, $\pm 1$. This means that we can assume $c=1$. From all this information, we can deduce that \eqref{v17first}--\eqref{v17last} are satisfied. Therefore, $V_{1,7}^a \subset V.$ Since $V$ is irreducible, we conclude that $V = V_{1,7}^a$. Finally, $a$ is the coefficient of $v_0$ in $\mathsf{s} \mathsf{A} (\mathsf{B} \mathsf{A})^{p-1} v_0$; thus, $V_{1,7}^a$ and $V_{1,7}^{a'}$ are isomorphic if and only if $a = a'$. \end{proof} \end{document}
\begin{document} \title{An analytic function approach to weak mutually unbiased bases} \author{T. Olupitan, C. Lei, A. Vourdas\\ Department of Computing\\University of Bradford\\ Bradford BD7 1DP, UK} \begin{abstract} Quantum systems with variables in ${\mathbb Z}(d)$ are considered, and three different structures are studied. The first is weak mutually unbiased bases, for which the absolute value of the overlap of any two vectors in two different bases is $1/\sqrt{k}$ (where $k|d$) or $0$. The second is maximal lines through the origin in the ${\mathbb Z}(d)\times {\mathbb Z}(d)$ phase space. The third is an analytic representation in the complex plane based on Theta functions, and their zeros. It is shown that there is a correspondence (triality) that links strongly these three apparently different structures. For simplicity, the case where $d=p_1\times p_2$, where $p_1,p_2$ are odd prime numbers different from each other, is considered. \end {abstract} \pacs{03.65.Aa, 02.10.De} \maketitle \section{Introduction} After the pioneering work by Schwinger \cite{SCH}, there has been a lot of work on various aspects of a quantum system $\Sigma (d)$ with variables in ${\mathbb Z}(d)$ (the ring of integers modulo $d$), described with a $d$-dimensional Hilbert space $H(d)$. The work combines Quantum Physics with Discrete Mathematics and has applications to areas like quantum information, quantum cryptography, quantum coding, etc (for reviews see \cite{1,2,3,4,5,6,7}). A deep problem in this area is mutually unbiased bases \cite{m1,m2,m3,m4,m5,m6,m7,m8,m9,m10}. It is a set of bases, for which the absolute value of the overlap of any two vectors in two different bases is $1/\sqrt{d}$. It is known that the number ${\mathfrak M}$ of mutually unbiased bases satisfies the inequality ${\mathfrak M} \le d+1$, and that when $d$ is a prime number ${\mathfrak M}=d+1$. 
What makes the case of prime $d$ special, is that ${\mathbb Z}(d)$ becomes a field, which is a stronger mathematical structure than a ring. For the same reason, if we consider quantum systems with variables in the Galois field $GF(p^e)$ (where $p$ is a prime number), the number of mutually unbiased bases is ${\mathfrak M} = p^e+1$. The study of mutually unbiased bases for non-prime $d$, in which case ${\mathbb Z}(d)$ is a ring (but not a field), is a very difficult problem. It is also related to the subjects of $t$-designs\cite{B,B1} and latin squares\cite {LS}. Recent work \cite{W1,W2} introduced a weaker concept called weak mutually unbiased bases (WMUB). It is a set of bases, for which the absolute value of the overlap of any two vectors in two different bases is $1/\sqrt{k}$, where $k|d$ ($k$ is a divisor of $d$), or zero. It has been shown that there are $\psi(d)$ (the Dedekind $\psi$-function) WMUBs. This work has also studied the phase space ${\mathbb Z}(d)\times {\mathbb Z}(d)$ as a finite geometry ${\mathcal G}(d)$. There exists much literature on finite geometries. They consist of a finite number of points and lines which obey certain axioms (e.g., \cite{f1,f2,f3} in a mathematics context, and \cite{r1,r2,r3,r4} in a physics context). Most of this work is on near-linear geometries, where two lines have at most one point in common. The ${\mathbb Z}(d)\times {\mathbb Z}(d)$ geometry is based on rings and it does not obey this axiom. Two lines have in common a `subline' which consists of $k$ points, where $k|d$. Refs\cite{W1,W2} have shown that there is a duality between WMUBs in $H(d)$ and lines in ${\mathcal G}(d)$. This shows a deep connection between finite quantum systems and the geometries of their phase spaces. A very different problem is the use of analytic functions in the context of physical systems. 
After the pioneering work by Bargmann\cite{A0,A00} for the harmonic oscillator, analytic representations have been used with various quantum systems (e.g., \cite{A1,A2,A3,A4,A5,A6,A7,A8,A9,A10}). In particular the zeros of the analytic functions have been used for the derivation of physical results. For example, there are links between the growth of analytic functions at infinity, and the density of their zeros\cite{C1,C2,C3}, which lead to criteria for the overcompleteness or undercompleteness of a von Neumann lattice of coherent states. Refs\cite{AN4,AN5} have studied analytic representations for quantum systems with variables in ${\mathbb Z}(d)$, using Theta functions \cite{THETA} (see also ref\cite{AN6}). Quantum states are represented with analytic functions in the cell ${\mathfrak S}=[0,d)\times [0,d)$ in the complex plane (i.e., in a torus). These analytic functions have exactly $d$ zeros in the cell ${\mathfrak S}$, which determine uniquely the state of the system. In this paper we use this language of analytic functions for the study of WMUBs. We show that: \begin{itemize} \item Each of the $d$ vectors in a WMUB has $d$ zeros on a straight line. \item In a given WMUB, the various vectors have zeros on parallel lines. In different WMUBs, the slope of the lines of zeros is different. \item The $d^2$ zeros in each WMUB form a regular lattice in the cell ${\mathfrak S}$, which is the same for all WMUBs. \end{itemize} Based on these results we show that there is a triality between \begin{itemize} \item WMUBs \item Lines through the origin in the finite geometry ${\mathcal G}(d)$ of the phase space \item Sets of parallel lines of zeros of the vectors in WMUBs in the cell ${\mathfrak S}$ \end{itemize} These three mathematical objects, which are very different from each other, have the same mathematical structure. The work links the theory of analytic functions and their zeros, to finite quantum systems, finite geometries and more generally to Discrete Mathematics. 
In order to avoid a complicated notation, in all sections except section II, we consider the case that $d=p_1\times p_2$, where $p_1,p_2$ are odd prime numbers, different from each other (in section II we state in each subsection what values $d$ takes). All results are generalizable to the case $d=p_1\times...\times p_N$, where $d$ is an odd integer (see discussion). In the case of even dimension $d$ (e.g., \cite{zak}), some aspects of the formalism of finite quantum systems require special consideration, and further work is needed in order to extend the ideas of the present paper to this case. Also when $d$ contains powers of prime numbers, further work is needed (based on labeling with elements of Galois fields). In section 2 we introduce very briefly finite quantum systems, their analytic representation, and mutually unbiased bases, in order to define the notation. In section 3 we review briefly the formalism of weak mutually unbiased bases. An important ingredient is the factorization of $\Sigma (d)$ in terms of smaller systems $\Sigma (p_1)$ and $\Sigma (p_2)$, which is based on the Chinese remainder theorem, and its use by Good \cite{Good} in the context of finite Fourier transforms. In section 4, we use the analytic representation to study WMUBs, and prove the results that we mentioned above. We conclude in section 5, with a discussion of our results. \section{Preliminaries} \subsection{Analytic representation of quantum systems with variables in ${\mathbb Z}(d)$, with odd $d$} We consider a finite quantum system with variables in ${\mathbb Z}(d)$ (the integers modulo $d$)\cite{1,2,3,4,5,6,7}. 
Let $\ket{X; n}$ be the basis of position states in the $d$-dimensional Hilbert space $H(d)$, and $\ket{P; n}$ the basis of momentum states: \begin{eqnarray}\label{Fou} \ket{P;n}={\cal F}|{X};n\rangle ;\;\;\;\;\;\;{\cal F}=d^{-1/2}\sum _{m,n}\omega (mn)\ket{X;m}\bra{X;n};\;\;\;\; \omega (n)=\exp \left [i \frac{2\pi n}{d}\right ] \end{eqnarray} Here ${\cal F}$ is the finite Fourier transform. Displacement operators are given by \begin{eqnarray}\label{99} {D}(\alpha,\beta)={Z}^\alpha {X}^\beta \omega (-2^{-1}\alpha \beta );\;\;\;\;\;\alpha, \beta \in {\mathbb Z}(d) \end{eqnarray} where \begin{eqnarray}\label{69} &&{Z}=\sum _{n}\omega (n)|{X};n\rangle \langle {X};n|=\sum _n\ket{{P};n+1}\bra{{P};n}\nonumber\\ &&{X}=\sum _{n}\omega (-n)|{P};n\rangle \langle {P};n|=\sum _n \ket{{X};n+1}\bra{{X};n}\nonumber\\ &&{X}^{d}={Z}^{d}={\bf 1};\;\;\;\;\;\; {X}{Z} = {Z}{X}\omega (-1) \end{eqnarray} The $\{{D}(\alpha,\beta)\omega (\gamma)\}$ form a representation of the Heisenberg-Weyl group in this context. Let $\ket {g}$ be an arbitrary state \begin{eqnarray}\label{6} &&\ket {g}=\sum _m g_m\ket {X;m}=\sum _m{\widetilde g}_m\ket {P;m};\;\;\;\;\;\;\sum _m |g_m|^2=1\nonumber\\ &&{\widetilde g}_m=d^{-1/2}\sum _n\omega (-mn)g_n \end{eqnarray} We use the notation (star indicates complex conjugation) \begin{eqnarray} \ket {g^*}=\sum _m g_m^*\ket {X;m};\;\;\;\;\;\; \bra {g}=\sum _m g_m^*\bra {X;m};\;\;\;\;\;\;\bra {g^*}=\sum _m g_m\bra {X;m} \end{eqnarray} We represent the state $\ket{g}$ with the function \begin{eqnarray}\label{aaa1} G(z)=\pi^{-1/4} \sum_{m=0}^{d-1} g_m^*\;\Theta_3 \left [\frac{\pi m}{d}-z\frac{\pi}{d};\frac{i}{d}\right ] \end{eqnarray} where $\Theta_3$ is the Theta function \cite{THETA}: \begin{eqnarray}\label{theta} &&\Theta _3(u,\tau)=\sum_{n=-\infty}^{\infty}\exp(i\pi \tau n^2+i2nu) \end{eqnarray} Theta functions are `Gaussian functions wrapped on a circle', and in our case on a `discretized circle'. 
Their periodicity properties are: \begin{eqnarray} &&\Theta _3(u+\pi,\tau)=\Theta _3(u,\tau +2)=\Theta _3(u,\tau)\nonumber\\ &&\Theta _3(u+\tau \pi,\tau)=\Theta _3(u,\tau)\exp[-i(\pi \tau+2u)] \end{eqnarray} For later use we mention that \begin{equation}\label{y} \Theta_3(u, \tau) = \left(-i\tau\right)^{-1/2}\exp\left[ \frac{u^2}{i\pi\tau}\right]\Theta_3\left( \frac{u}{ \tau}, \frac{-1}{ \tau}\right), \end{equation} and that their zeros are \begin{eqnarray}\label{eq1} \zeta _{MN}=(2M-1)\frac{\pi}{2}+(2N-1)\frac{i\pi}{2d}. \end{eqnarray} $G(z)$ is an analytic function and obeys the periodicity relations \begin{eqnarray} \label{periodicity} &&G( z+d ) = G(z)\nonumber\\ &&G( z+ i d) = G(z)\exp\left( -\pi d -2i\pi z\right). \end{eqnarray} The scalar product is given by \begin{eqnarray}\label{scalar} \langle g_2| g_1^* \rangle &=& \frac{ \sqrt{2\pi}}{d^{5/2}} \int_{\mathfrak S} dz_Rdz_I\exp\left( \frac{-2\pi}{d}z_I^2\right) G_1(z)G_2(z^\ast)=\sum _{m\in {\mathbb Z}(d)}g_{2m}^*g_{1m}^* \end{eqnarray} where $z_R, z_I$ are the real and imaginary parts of $z$. ${\mathfrak S}_{MN}=[M d,(M+1)d)\times [N d,(N+1)d)$ is a cell in the complex plane and $(M,N)$ are integers labelling the cell. In the case $M=N=0$ we use the simpler notation ${\mathfrak S}$. The proof of Eq.(\ref{scalar}) is based on the orthogonality of Theta functions. The analytic function $G(z)$ has exactly $d$ zeros $\zeta _r$ in each cell and the sum of these zeros is \cite{AN4,AN5,AN6} \begin{eqnarray}\label{con} \sum _{r =1}^d \zeta _r= d(M+iN)+\frac{d^2}{2}(1+i). \end{eqnarray} So in each cell $d-1$ zeros are independent, and the last is determined by this constraint. \subsection{Mutually unbiased bases using $Sp(2,{\mathbb Z}(d))$ symplectic transformations, with odd prime $d$}\label{S200} In this subsection $d$ is a prime number and therefore ${\mathbb Z}(d)$ is a field. 
Symplectic transformations are defined as \begin{eqnarray}\label{nnn} &&X'=S(\kappa, \lambda|\mu ,\nu)\;X\;[S(\kappa, \lambda|\mu ,\nu)]^{\dagger}=D(\lambda, \kappa)\nonumber\\ &&Z'=S(\kappa, \lambda|\mu ,\nu)\;Z\;[S(\kappa, \lambda|\mu ,\nu)]^{\dagger}=D(\nu,\mu)\nonumber\\ &&\kappa \nu-\lambda \mu=1;\;\;\;\;\;\kappa, \lambda, \mu, \nu \in {\mathbb Z}(d) \end{eqnarray} They form a representation of the $Sp(2,{\mathbb Z}(d))$ group. Eqs.(\ref{nnn}) define uniquely (up to a phase factor) the symplectic transformations. $S(\kappa, \lambda|\mu ,\nu)$ is given by\cite{2} \begin{eqnarray} &&S(\kappa, \lambda|\mu ,\nu)=S(1,0|\xi _1,1)S(1,\xi_2|0,1)S(\xi_3,0|0,\xi _3^{-1})\nonumber\\ &&S(1,0|\xi _1,1)=\sum _n\ket{X;n}\bra{X;\xi _1n}\nonumber\\ &&S(1,\xi_2|0,1)=\sum _n\omega (2^{-1}\xi_2 n^2)\ket{X;n}\bra{X;n}\nonumber\\ &&S(\xi_3,0|0,\xi _3^{-1})=\sum _n\omega (2^{-1}\xi_3 n^2)\ket{P;n}\bra{P;n} \end{eqnarray} where \begin{eqnarray} \xi _1=\kappa \mu (1+\lambda \mu)^{-1};\;\;\;\; \xi _2=\lambda \kappa ^{-1} (1+\lambda \mu);\;\;\;\; \xi _3=\kappa (1+\lambda \mu)^{-1}. \end{eqnarray} The multiplication rule is given by \begin{eqnarray}\label{500} &&S(\kappa _1, \lambda _1|\mu _1,\nu _1)S(\kappa _2, \lambda _2|\mu _2,\nu _2)=S(\kappa, \lambda|\mu ,\nu)\nonumber\\ &&\left ( \begin{array}{cc} \kappa& \lambda\\ \mu&\nu \end{array} \right )= \left ( \begin{array}{cc} \kappa _2& \lambda _2\\ \mu _2&\nu _2 \end{array} \right ) \left ( \begin{array}{cc} \kappa _1& \lambda _1\\ \mu _1&\nu _1 \end{array} \right ) \end{eqnarray} We consider the following special case of symplectic transformations: \begin{eqnarray}\label{DD175} &&X'=S(0,-\mu ^{-1}|\mu,\nu)\;X\;[S(0, -\mu ^{-1}|\mu;\nu)]^{\dagger} =Z^{-\mu ^{-1}};\;\;\;\;\mu, \nu\in {\mathbb Z}(d)\nonumber\\ &&Z'=S(0,-\mu ^{-1}|\mu,\nu)\;Z\;[S(0,-\mu ^{-1}|\mu,\nu)]^{\dagger}=X^{\mu}Z^{\nu}\omega (2^{-1}\mu \nu) \end{eqnarray} We note that $S(0,-1|1,0)={\cal F}^{-1}$. We can show that these transformations preserve Eq.(\ref{69}). 
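As a numerical illustration of Eqs.(\ref{nnn}),(\ref{DD175}) in the special case $S(0,-1|1,0)={\cal F}^{-1}$, the following minimal script (our own sketch, with the arbitrary choice $d=5$; all names are ours, not from the paper) checks the transformation of $X,Z$ and the preservation of the commutation relation of Eq.(\ref{69}):

```python
# Our own sketch (not from the paper): check that S(0,-1|1,0) = F^{-1}
# realizes Eq.(nnn) for d = 5.
import numpy as np

d = 5
omega = np.exp(2j * np.pi / d)

# Eq.(Fou): F[m,n] = omega(mn)/sqrt(d);  Eq.(69): Z diagonal, X a cyclic shift
F = np.array([[omega ** (m * n) for n in range(d)] for m in range(d)]) / np.sqrt(d)
Z = np.diag([omega ** n for n in range(d)])
X = np.roll(np.eye(d), 1, axis=0)          # X|X;n> = |X;n+1>

S = F.conj().T                             # S(0,-1|1,0) = F^{-1}

# Eq.(nnn) with (kappa, lambda | mu, nu) = (0,-1|1,0):
# S X S^dag = D(-1,0) = Z^{-1}   and   S Z S^dag = D(0,1) = X
assert np.allclose(S @ X @ S.conj().T, np.linalg.inv(Z))
assert np.allclose(S @ Z @ S.conj().T, X)

# the commutation relation X'Z' = Z'X' omega(-1) of Eq.(69) is preserved
Xp, Zp = np.linalg.inv(Z), X
assert np.allclose(Xp @ Zp, Zp @ Xp * omega ** (-1))
```

The same check works for any odd $d$ by changing the first assignment.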
Acting with them on the position basis, we get new bases: \begin{eqnarray}\label{Sym7} \ket{X(\mu,\nu);m} \equiv S(0,-\mu ^{-1}|\mu,\nu)\ket{X;m};\;\;\;\;\;\nu=0,..., d-1 \end{eqnarray} We note that $\ket{X(\mu,0);m}=\ket{P; -\mu ^{-1}m}$. \begin{lemma}\label{l} \begin{eqnarray}\label{3cv} \ket{X(\mu,\nu);m}=\frac{1}{ \sqrt{d}} \sum_{j=0}^{d-1} \omega[\mu ^{-1}\phi (m,j,\nu)]|X;j\rangle ;\;\;\;\;\phi (m,j,\nu)=-jm+2^{-1}\nu j^2 \end{eqnarray} \end{lemma} \begin{proof} We first prove that these states are eigenstates of $Z'=X^{\mu}Z^{\nu}\omega (2^{-1}\nu \mu)$: \begin{align} Z'\ket{X(\mu,\nu);m}&= \frac{1}{ \sqrt{d}} \omega (2^{-1}\nu \mu)\sum_{j=0}^{d-1} \omega [\mu ^{-1}\phi (m,j,\nu)]X^{\mu}Z^{ \nu}|X;j\rangle\nonumber\\& = \frac{1}{ \sqrt{d}}\omega (2^{-1}\nu \mu) \sum_{j=0}^{d-1} \omega [\mu ^{-1}\phi (m,j,\nu)]\omega( \nu j)|X;j+\mu\rangle \end{align} We now change variables to $j' = j+\mu$ and get \begin{align} &Z'\ket{X(\mu,\nu);m}=\omega(m)\ket{X(\mu,\nu);m} \end{align} We next show that $X'\ket{X(\mu,\nu);m}=\ket{X(\mu,\nu);m+1}$: \begin{align} &Z^{-\mu ^{-1}}\ket{X(\mu,\nu);m}= \frac{1}{ \sqrt{d}} \sum_{j=0}^{d-1} \omega [\mu ^{-1}\phi (m,j,\nu)]Z^{-\mu ^{-1}}|X;j\rangle = \frac{1}{ \sqrt{d}} \sum_{j=0}^{d-1} \omega [\mu ^{-1}\phi (m,j,\nu)]\omega(-j\mu ^{-1})|X;j\rangle \nonumber \\ &= \frac{1}{ \sqrt{d}} \sum_{j=0}^{d-1} \omega [\mu ^{-1}\phi (m+1,j,\nu)]|X;j\rangle =\ket{X(\mu, \nu);m+1} \end{align} \end{proof} It is known that for a prime number $d$ there are $d+1$ mutually unbiased bases, given by \begin{eqnarray}\label{S100} B(\mu,-1)=\{\ket{X;m}\};\;\;\;\;B(\mu, \nu)=\{\ket{X(\mu, \nu);m}\};\;\;\;\;\;\nu=0,1,...,d-1. \end{eqnarray} Here $\mu $ is fixed. $B(\mu, 0)$ is the basis of momentum states $\{\ket{X(\mu, 0);m}=\ket{P;-\mu ^{-1} m}\}$. 
They are mutually unbiased bases\cite{m1,m2,m3,m4,m5,m6,m7,m8,m9}, because for all $\nu \ne \nu'$ and for all $n,m$ \begin{eqnarray}\label{8} |\langle X(\mu, \nu);n\ket{X(\mu, \nu ');m}|=d^{-1/2} \end{eqnarray} \subsection{Maximal lines through the origin in ${\cal G}(d)$} Various aspects of the ${\mathbb Z}(d)\times {\mathbb Z}(d)$ phase space as a finite geometry ${\cal G}(d)$ have been studied in\cite{W1,W2}. A special class of finite geometries which has been studied extensively in the discrete mathematics literature\cite{f1,f2,f3} is that of near-linear geometries, which obey the axiom that two lines have at most one point in common. These geometries are intimately related to fields. The ${\cal G}(d)$ geometry does not obey this axiom; it is based on rings, and it is a non-near-linear geometry. Two lines through the origin have a `subline' in common, which consists of $k$ points, where $k|d$. If $d$ is a prime number, $k$ is $1$ (in which case the lines have one point in common) or $d$ (in which case the lines are identical), and the geometry is near-linear. In this subsection $d=p_1\times p_2$, where $p_1,p_2$ are odd prime numbers different from each other. ${\cal G}(d)$ is defined as $(P(d),L(d))$, where $P(d)$ is the set of the $d^2$ points $(m,n)\in {\mathbb Z}(d)\times {\mathbb Z}(d)$ and $L(d)$ is the set of lines. A maximal line through the origin is the set of $d$ points \begin{eqnarray} &&L(\rho , \sigma )=\{(r\rho ,r\sigma )\;|\;r \in {\mathbb Z}(d)\};\;\;\;\;\rho , \sigma \in {\mathbb Z}(d). \end{eqnarray} If $\tau$ is an invertible element of ${\mathbb Z}(d)$ then $L(\rho,\sigma )$ is the same line as $L(\tau \rho,\tau \sigma )$. An example of a non-maximal line is $L(p_1,\tau p_1)$ (it has only $p_2$ points). There are $\psi(d)=(p_1+1)(p_2+1)$ maximal lines through the origin in ${\mathcal G}(d)$, where $\psi$ is the Dedekind psi function. 
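The counting of maximal lines and the `subline' property stated above can be verified numerically. The following short script (our own sketch for $d=21$; the helper name \texttt{line} is ours) enumerates the maximal lines through the origin:

```python
# Our own sketch (not from the paper): maximal lines through the origin in G(21).
from math import gcd

d, p1, p2 = 21, 3, 7

def line(rho, sigma):
    """Set of points of L(rho, sigma) in Z(d) x Z(d)."""
    return frozenset(((r * rho) % d, (r * sigma) % d) for r in range(d))

# L(rho, sigma) is maximal (has d points) exactly when gcd(rho, sigma, d) = 1
maximal = {line(r, s) for r in range(d) for s in range(d)
           if gcd(gcd(r, s), d) == 1}
assert len(maximal) == (p1 + 1) * (p2 + 1)     # psi(21) = 32

# any two of them share a 'subline' of k points, with k a divisor of d
for A in maximal:
    for B in maximal:
        assert d % len(A & B) == 0             # k = 1, 3, 7 or 21
```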
Symplectic transformations on a point $(\rho , \sigma ) \in {\mathbb Z}(d)\times {\mathbb Z}(d)$ are given by \begin{eqnarray} &&{\cal S}(\kappa, \lambda|\mu ,\nu)(\rho , \sigma )= (\rho , \sigma )\left ( \begin{array}{cc} \kappa& \lambda\\ \mu&\nu \end{array} \right ) =(\kappa \rho +\mu \sigma, \lambda \rho +\nu \sigma )\nonumber\\&&\kappa \nu-\lambda \mu=1;\;\;\;\;\;\kappa, \lambda, \mu, \nu \in {\mathbb Z}(d) \end{eqnarray} where we represent points with rows and act on the right, or by \begin{eqnarray} &&{\cal S}(\kappa, \lambda|\mu ,\nu) \left ( \begin{array}{c} \rho\\ \sigma \end{array} \right )= \left ( \begin{array}{cc} \kappa& \lambda\\ \mu&\nu \end{array} \right )^T \left ( \begin{array}{c} \rho\\ \sigma \end{array} \right ) = \left ( \begin{array}{c} \kappa \rho +\mu \sigma\\ \lambda \rho +\nu \sigma \end{array} \right ) \end{eqnarray} where we represent points with columns and act with the transposed matrix on the left. With this notation we get the same multiplication rule as in Eq.(\ref{500}). We have here a representation of the $Sp(2,{\mathbb Z}(d))$ group. Symplectic transformations on points lead to symplectic transformations on lines: \begin{eqnarray}\label{efb} {\cal S}(\kappa, \lambda|\mu ,\nu)L(\rho , \sigma )=L(\kappa \rho +\mu \sigma, \lambda \rho +\nu \sigma ). \end{eqnarray} \section{Factorization} In the rest of the paper $d=p_1\times p_2$, where $p_1,p_2$ are odd prime numbers different from each other. 
\subsection{Factorization of the system in terms of smaller systems} Based on the Chinese remainder theorem, and following ref.\cite{Good} on the factorization of finite Fourier transforms, we introduce two bijective maps between ${\mathbb Z}(d)$ and ${\mathbb Z}(p_1)\times {\mathbb Z}(p_2)$: \begin{eqnarray}\label{map1} &&m\leftrightarrow (m_1,m_2);\;\;\;\;\;m_i=m ({\rm mod}\ p_i);\;\;\;\; m=m_1s_1+m_2s_2\;({\rm mod}\ d)\nonumber\\ &&m\in {\mathbb Z}(d);\;\;\;\;\;m_i\in {\mathbb Z}(p_i), \end{eqnarray} and \begin{eqnarray}\label{map2} &&m\leftrightarrow (\overline m_1,\overline m_2);\;\;\;\;\;\overline m_i=mt_i=m_it_i({\rm mod}\ p_i);\;\;\;\;m=\overline m_1 r_1+\overline m_2r_2\;({\rm mod}\ d)\nonumber\\ &&m\in {\mathbb Z}(d);\;\;\;\;\;{\overline m}_i\in {\mathbb Z}(p_i). \end{eqnarray} Here $r_i,t_i,s_i$ are the constants \begin{equation}\label{20} r_1=\frac{d}{p_1}=p_2;\;\;\;\;r_2=\frac{d}{p_2}=p_1;\;\;\;\;t_i r_i=1\;({\rm mod}\ p_i);\;\;\;\;\;s_i=t_i r_i\in {\mathbb Z}(d). \end{equation} We note that \begin{eqnarray}\label{20A} &&s_1s_2=0\;({\rm mod}\ d);\;\;\;\;\;s_1^2=s_1\;({\rm mod}\ d);\;\;\;\;\;s_2^2=s_2\;({\rm mod}\ d);\;\;\;\;\;s_1+s_2=1\;({\rm mod}\ d)\nonumber\\ &&p_2s_1=p_2\;({\rm mod}\ d);\;\;\;\;p_1s_2=p_1\;({\rm mod}\ d);\;\;\;\;p_1s_1=p_2s_2=0\;({\rm mod }\ d). \end{eqnarray} Also for the map of Eq.(\ref{map1}) \begin{eqnarray}\label{20B} m+\ell \;\leftrightarrow \;({m_1+\ell _1},{m_2+\ell _2});\;\;\;\;\;m\ell \;\leftrightarrow \;({m_1\ell _1},{m_2\ell _2}), \end{eqnarray} and for the map of Eq.(\ref{map2}) \begin{eqnarray}\label{20C} m+\ell \;\leftrightarrow \;(\overline {m}_1+\overline {\ell }_1,\overline {m}_2+\overline{\ell} _2) ;\;\;\;\;\; m\ell \;\leftrightarrow \;(\overline {m}_1\ell _1, \overline {m}_2\ell _2) \end{eqnarray} Using the notation $\omega _i(n)=\exp \left (i\frac{2\pi n_i}{p_i}\right )$ where $n_i=n\;({\rm mod}\ p_i)\in {\mathbb Z}(p_i)$, we can show that \begin{eqnarray}\label{20D} \omega (mn)=\omega _1(m_1{\overline n}_1)\omega _2(m_2{\overline n}_2). 
\end{eqnarray} Eqs.(\ref{20A}), (\ref{20B}), (\ref{20C}), (\ref{20D}) are important for the proof of various relations below. We introduce a bijective map from $H(d)$ to $H(p_1)\otimes H(p_2)$ as follows\cite{2}. We use the map of Eq.(\ref{map2}) for position states: \begin{equation} \ket{X;m}\;\;\leftrightarrow\;\;\ket{X_1;{\overline m}_1} \otimes \ket{X_2;{\overline m}_2}, \end{equation} where $\ket{X_i;{\overline m}_i}$ are position states in $H(p_i)$. Using Eq.(\ref{20D}) we prove that the corresponding map for momentum states is based on the map of Eq.(\ref{map1}), and it is given by \begin{equation} \ket{P;m}\;\;\leftrightarrow\;\;\ket{P_1;{m}_1}\otimes \ket{P_2;{m}_2} \end{equation} where $\ket{P_i;{m}_i}$ are momentum states in $H(p_i)$. The Fourier transform between position and momentum states implies that if the map of Eq.(\ref{map2}) is used for position states, then the map of Eq.(\ref{map1}) should be used for momentum states. For later use we also factorize the symplectic transformations. The group $Sp(2,{\mathbb Z}(d))$ is factorized as $Sp(2,{\mathbb Z}(p_1))\times Sp(2,{\mathbb Z}(p_2))$, as follows (proposition 3.1 in \cite{SV2}): \begin{equation}\label{345} S(\kappa, \lambda |\mu , \nu)=S(\kappa _1, \lambda _1r_1|\overline \mu _1, \nu _1)\otimes S(\kappa _2, \lambda _2r_2|\overline \mu _2, \nu _2) \end{equation} where the $\kappa _i$, $\lambda _i$, ${\overline \mu}_i$, $\nu_i$ are related to $\kappa$, $\lambda$, $\mu $, $\nu$, as in Eqs.(\ref{map1}),(\ref{map2}). 
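The maps of Eqs.(\ref{map1}),(\ref{map2}) and the property of Eq.(\ref{20D}) can be checked with a short script (our own sketch for $d=21$; all names are ours):

```python
# Our own sketch (not from the paper): the two Chinese-remainder maps for d = 21.
import cmath

p1, p2 = 3, 7
d = p1 * p2
r1, r2 = d // p1, d // p2                   # r_1 = p_2, r_2 = p_1,  Eq.(20)
t1, t2 = pow(r1, -1, p1), pow(r2, -1, p2)   # t_i r_i = 1 (mod p_i)
s1, s2 = (t1 * r1) % d, (t2 * r2) % d

def map1(m):                                # m <-> (m_1, m_2),        Eq.(map1)
    return m % p1, m % p2

def map2(m):                                # m <-> (m_1bar, m_2bar),  Eq.(map2)
    return (m * t1) % p1, (m * t2) % p2

# both maps are bijections Z(d) <-> Z(p1) x Z(p2)
assert len({map1(m) for m in range(d)}) == d
assert len({map2(m) for m in range(d)}) == d

# Eq.(20D): omega(mn) = omega_1(m_1 n_1bar) omega_2(m_2 n_2bar)
w = lambda n, q: cmath.exp(2j * cmath.pi * n / q)
for m in range(d):
    for n in range(d):
        (m1, m2), (n1b, n2b) = map1(m), map2(n)
        assert abs(w(m * n, d) - w(m1 * n1b, p1) * w(m2 * n2b, p2)) < 1e-9
```

The three-argument `pow` computes the modular inverses $t_i$ directly (Python 3.8+).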
Below we need the special cases \begin{eqnarray}\label{135} &&S(0,-\mu ^{-1}|\mu,\nu)=S(0,-1|1, \nu _1)\otimes S(0,-1|1, \nu _2)\nonumber\\ &&\nu=\nu_1s_1+\nu_2s_2;\;\;\;\;\;\mu=p_1+p_2;\;\;\;\;\;\mu ^{-1}=p_2^{-1}s_1+p_1^{-1}s_2\;({\rm mod} \;d), \end{eqnarray} and \begin{eqnarray}\label{dfg} &&S(\kappa,\lambda|\mu,\nu)={\bf 1}\otimes S(0,-1|1, \nu _2)\nonumber\\ &&\kappa= s_1;\;\;\;\;\lambda=-s_2p_1^{-1}\;\;\;\;\;\mu=p_1;\;\;\;\;\;\;\nu=s_1+\nu_2s_2 \end{eqnarray} and \begin{eqnarray}\label{dfg1} &&S(\kappa,\lambda|\mu,\nu)=S(0,-1|1, \nu _1)\otimes {\bf 1}\nonumber\\ &&\kappa= s_2;\;\;\;\;\lambda=-s_1p_2^{-1}\;\;\;\;\;\mu=p_2;\;\;\;\;\;\;\nu=s_2+\nu_1s_1. \end{eqnarray} As an example we consider the case that $d=21$, i.e., $p_1=3$ and $p_2=7$. Then \begin{eqnarray} &&r_1=7;\;\;\;t_1=1;\;\;\;\;s_1=7\nonumber\\ &&r_2=3;\;\;\;t_2=5;\;\;\;\;s_2=15\nonumber\\ &&\mu=10;\;\;\;\;-\mu ^{-1}=2 \end{eqnarray} and we get \begin{eqnarray} &&S(0,2|10 , 7\nu _1+15 \nu_2)=S(0, -1|1, \nu _1)\otimes S(0, -1|1, \nu _2)\nonumber\\ &&S(7,9|3,7+15\nu _2)={\bf 1}\otimes S(0,-1|1, \nu _2)\nonumber\\ &&S(15,-7|7, 15+7\nu _1)=S(0,-1|1,\nu_1)\otimes {\bf 1} \end{eqnarray} \subsection{Weak mutually unbiased bases} For $d=p_1\times p_2$, references \cite{W1,W2} introduced in $H(d)=H(p_1)\otimes H(p_2)$ a weaker than mutually unbiased bases concept, called weak mutually unbiased bases (WMUB). They are tensor products of mutually unbiased bases in $H(p_i)$. They are given by \begin{eqnarray} &&\ket{{\cal X}(\nu _1,\nu _2);\overline m_1,\overline m_2}=\ket{X_1(\nu _1);\overline m_1} \otimes \ket{X_2(\nu _2);\overline m_2}\nonumber\\ &&\ket{X_i(\nu _i);\overline m_i}=S(0,-1|1, \nu _i)\ket{X_i;\overline m_i};\;\;\;\;\;\overline m_i\in {\mathbb Z}(p_i). \end{eqnarray} We also include the $\nu _i=-1$, in which case $\ket{X_i(-1);\overline m_i}=\ket{X_i;\overline m_i}$. Therefore $\nu _i=-1,...,p_i-1$. 
In the special case $\nu _1=\nu _2=-1$ we get \begin{eqnarray} \ket{{\cal X}(-1,-1);\overline m_1,\overline m_2}=\ket{X_1(-1);\overline m_1} \otimes \ket{X_2(-1);\overline m_2}=\ket{X_1;\overline m_1} \otimes \ket{X_2;\overline m_2} \end{eqnarray} In the special case $\nu_1=\nu _2=0$ we get \begin{eqnarray} \ket{{\cal X}(0,0);\overline m_1,\overline m_2}=\ket{X_{1}(0);\overline m_1} \otimes \ket{X_{2}(0);\overline m_2}=\ket{P_{1}; m_1} \otimes \ket{P_{2}; m_2} \end{eqnarray} The square of the absolute value of the overlap of two vectors in two different bases is $0$ or $1/k$, where $k$ is a divisor of $d$: \begin{eqnarray}\label{8} |\langle { {\cal X}}(\nu _1,\nu _2);\overline m_1,\overline m_2\ket{{\cal X}(\nu _1',\nu _2');\overline r_1,\overline r_2}|^2=\frac{1}{k}\;\;{\rm or}\;\;0;\;\;\;\;k|d. \end{eqnarray} The strict requirement that the square of the absolute value of the overlap is $1/d$ in mutually unbiased bases is replaced with the weaker requirement that it is $1/k$ or $0$. That is why we call them weak mutually unbiased bases. There are $\psi (d)=(p_1+1)(p_2+1)$ weak mutually unbiased bases. Taking into account Eq.(\ref{135}), we can relabel the $\ket{{\cal X}(\nu _1,\nu _2);\overline m_1,\overline m_2}$ as follows: \begin{itemize} \item \begin{eqnarray}\label{np1} &&\ket{{\cal X}(\nu _1,\nu _2);\overline m_1,\overline m_2}=S(0,-1|1,\nu _1)\ket{X;{\overline m}_1}\otimes S(0,-1|1,\nu _2)\ket{X;{\overline m}_2}\nonumber\\ &&=S(0,-\mu ^{-1}|\mu,\nu )\ket{X;m}=S(0,-1|1,\mu ^{-1}\nu )S(\mu ^{-1},0|0,\mu)\ket{X;m}\nonumber\\ &&=S(0,-1|1,\mu ^{-1}\nu )\ket{X;m\mu ^{-1}}\equiv \ket{{X}(1,\mu ^{-1}\nu );m \mu ^{-1}}\nonumber\\ &&\nu=\nu _1s_1+\nu _2s_2;\;\;\;\;\;\mu=p_1+p_2;\;\;\;\;\;\nu_i=0,..., p_i-1 \end{eqnarray} Here we have used Eq.(\ref{135}), and $m$ is related to $\overline m_1,\overline m_2$ through Eq.(\ref{map2}). 
\item \begin{eqnarray}\label{np2} &&\ket{{\cal X}(-1,\nu _2);\overline m_1,\overline m_2}=\ket{X;{\overline m}_1}\otimes S(0,-1|1,\nu _2)\ket{X;{\overline m}_2}\nonumber\\&&= \ket{X;{\overline m}_1}\otimes S(0,-1|1,\nu _2)\ket{X;{\overline m}_2}=S(\kappa, \lambda|\mu,\nu)\ket{X;m}= \ket{{X}(p_1, s_1+\nu _2s_2 );m}\nonumber\\ &&\kappa=s_1;\;\;\;\; \lambda=-s_2p_1^{-1};\;\;\;\; \mu=p_1;\;\;\;\; \nu=s_1+\nu_2s_2 \end{eqnarray} Here we used Eq.(\ref{dfg}). In a similar way we get \begin{eqnarray}\label{np3} \ket{{\cal X}(\nu _1,-1);\overline m_1,\overline m_2}=\ket{{X}(p_2, s_2+\nu _1s_1 );m} \end{eqnarray} \item \begin{eqnarray}\label{np4} \ket{{\cal X}(-1,-1);\overline m_1,\overline m_2}=\ket{X;{\overline m}_1}\otimes \ket{X;{\overline m}_2}=\ket{{X}(0,1);m} \end{eqnarray} \end{itemize} There are $p_1p_2$ states in Eq.(\ref{np1}) (which have already been introduced in Eq.(\ref{S100})), $p_1+p_2$ states in Eqs.(\ref{np2}),(\ref{np3}) and one state in Eq.(\ref{np4}). Together they make the set of $\psi(d)=(p_1+1)(p_2+1)$ weak mutually unbiased bases. \begin{remark}\label{zxd} From the above it is clear that we use two different notations, `the factorized notation' (for which we use calligraphic letters) and the `unfactorized notation'. In the unfactorized notation we have four different cases where different symplectic transformations act on the position states: \begin{eqnarray} &&\ket{{X}(1,\mu ^{-1}\nu);m\mu^{-1}}=S(0,-1|1,\mu ^{-1}\nu )\ket{X;m\mu ^{-1}}\nonumber\\ &&\ket{{X}(p_1, s_1+\nu _2s_2 );m}=S(s_1, -s_2p_1^{-1}|p_1,s_1+\nu _2s_2 )\ket{X;m}\nonumber\\ &&\ket{{X}(p_2, s_2+\nu _1s_1 );m}=S(s_2, -s_1p_2^{-1}|p_2,s_2+\nu _1s_1 )\ket{X;m}\nonumber\\ &&\ket{{X}(0,1);m}=\ket{X;{\overline m}_1}\otimes \ket{X;{\overline m}_2} \end{eqnarray} \end{remark} In the `factorized notation' ${\cal B}(\nu _1, \nu _2)$ is the basis \begin{eqnarray} {\cal B}(\nu _1, \nu _2)=\{\ket{{\cal X}(\nu _1,\nu _2);\overline m_1,\overline m_2}\};\;\;\;\;\;\nu _i=-1,...,p_i-1. 
\end{eqnarray} In the `unfactorized notation' \begin{eqnarray} {B}(\mu, \nu )=\{\ket{{X}(\mu, \nu );m}\}, \end{eqnarray} where $\mu$ takes the values $1, p_1, p_2, 0$. The overlap of Eq.(\ref{8}) for vectors in two bases ${\cal B}(\nu _1, \nu _2)$ and ${\cal B}(\nu _1', \nu _2')$ takes one of the two values $\frac{r(\nu _1, \nu _2|\nu _1', \nu _2')}{d}$ or $0$. We express this as \begin{eqnarray}\label{jjj} ({\cal B}(\nu _1, \nu _2), {\cal B}(\nu _1', \nu _2'))=\frac{r(\nu _1, \nu _2|\nu _1', \nu _2')}{d}\;\;{\rm or}\;\;0 \end{eqnarray} where \begin{eqnarray}\label{200} &&r(\nu _1, \nu _2|\nu _1', \nu _2')=1\;\;{\rm if}\;\;\nu _1\ne \nu_1' \;\;{\rm and}\;\;\nu _2\ne \nu_2'\nonumber\\ &&r(\nu _1, \nu _2|\nu _1', \nu _2')=p_1\;\;{\rm if}\;\;\nu _1= \nu_1' \;\;{\rm and}\;\;\nu _2\ne \nu_2'\nonumber\\ &&r(\nu _1, \nu _2|\nu _1', \nu _2')=p_2\;\;{\rm if}\;\;\nu _1\ne \nu_1' \;\;{\rm and}\;\;\nu _2= \nu_2' \end{eqnarray} In the first case both `unprimed factor bases' are different from the corresponding `primed factor bases', and therefore the result is always $1/(p_1p_2)$ (it cannot be zero). In the second case the first `unprimed factor basis' is the same as the corresponding `primed factor basis', and therefore the result is $1/p_2$ or zero. An analogous comment can be made for the last case. \subsection{Factorization of the maximal lines in ${\cal G}(d)$ } We represent a point $(\rho ,\sigma )$ in ${\mathbb Z}(d)\times {\mathbb Z}(d)$ as \begin{eqnarray} (\rho, \sigma)=(\overline \rho_1, \sigma_1)\times(\overline \rho _2, \sigma _2);\;\;\;\;\overline \rho _i, \sigma _i \in {\mathbb Z}(p_i) \end{eqnarray} Here we used the map of Eq.(\ref{map2}) for the first variable and the map of Eq.(\ref{map1}) for the second variable. The use of the two maps is important for the duality between maximal lines through the origin in ${\mathcal G}(d)$, and weak mutually unbiased bases in $H(d)$. 
A maximal line $L(\rho ,\sigma )$ in ${\mathbb Z}(d)\times {\mathbb Z}(d)$ can now be factorized as \begin{eqnarray}\label{line1} L(\rho, \sigma)=L(\overline \rho_1, \sigma_1)\times L(\overline \rho _2, \sigma _2);\;\;\;\;\overline \rho _i, \sigma _i \in {\mathbb Z}(p_i) \end{eqnarray} This is made clear in the following proposition. \begin{proposition} \mbox{} \begin{itemize} \item[(1)] If $\overline \rho _i \ne 0\;({\rm mod}\;p_i)$, then $(\overline \rho _i)^{-1}$ exists in ${\mathbb Z}(p_i)$, and the line $L(\overline\rho_i, \sigma_i)=L(1, (\overline \rho _i) ^{-1}\sigma_i)$. Also $L(\rho, \sigma )=L(1, \rho ^{-1}\sigma )$ and Eq.(\ref{line1}) can be written as \begin{eqnarray}\label{line2} &&L(1,\mu ^{-1}\nu)=L(1,\nu_1)\times L(1,\nu _2)\equiv {\cal L}(\nu _1,\nu _2)\nonumber\\ &&\nu=\nu _1s_1+\nu _2 s_2;\;\;\;\;\;\;\nu _i=(\overline\rho _i) ^{-1}\sigma_i\in {\mathbb Z}(p_i);\;\;\;\;\;\mu ^{-1}\nu =\rho ^{-1}\sigma\in {\mathbb Z}(d)\nonumber\\ &&\mu=p_1+p_2 \end{eqnarray} \item[(2)] If $\overline \rho _1=0 \;({\rm mod}\;p_1)$ then $\nu _1=-1$ by definition and \begin{eqnarray}\label{line3} &&L(p_1,s_1+s_2\nu _2)=L(0,1)\times L(1,\nu _2)\equiv {\cal L} (-1,\nu _2);\;\;\;\;\; \nu _2=(\overline \rho _2) ^{-1} \sigma_2 \end{eqnarray} A similar result holds in the case $\overline \rho _2=0 \;({\rm mod}\;p_2)$: \begin{eqnarray}\label{line30} &&L(p_2,s_2+s_1\nu _1)=L(1,\nu _1)\times L(0,1)\equiv {\cal L} (\nu _1, -1);\;\;\;\;\; \nu _1=(\overline\rho _1) ^{-1} \sigma_1 \end{eqnarray} \item[(3)] If $\overline \rho _1=0 \;({\rm mod}\;p_1)$ and $\overline \rho _2=0 \;({\rm mod}\;p_2)$ then $\nu _1=\nu _2=-1$ by definition and \begin{eqnarray}\label{line4} &&L(0,1)=L(0,1)\times L(0,1)\equiv {\cal L} (-1,-1). \end{eqnarray} \end{itemize} \end{proposition} \begin{proof} In all cases we show that the sets of points in the two sides are identical. 
\begin{itemize} \item[(1)] \begin{eqnarray} L(1,\nu_1)\times L(1,\nu_2)={\cal S}(0,-1|1,\nu _1)L(0,1)\times {\cal S}(0,-1|1,\nu _2)L(0,1) \end{eqnarray} Using Eq.(\ref{135}), and the fact that $L(0,1)\times L(0,1)$ is the line $L(0,1)$ in ${\cal G}(d)$, we get \begin{eqnarray} S(0,-\mu ^{-1}|\mu,\nu)L(0,1)=L(1,\mu ^{-1}\nu) \end{eqnarray} with the parameters given in Eq.(\ref{135}). We used here Eq.(\ref{efb}). \item[(2)] \begin{eqnarray} &&L(0,1)\times S(0,-1|1,\nu _2)L(0,1)=S(\kappa, \lambda|\mu,\nu)L(0,1)= L(p_1, s_1+\nu _2s_2 )\nonumber\\ &&\kappa=s_1;\;\;\;\; \lambda=-s_2p_1^{-1};\;\;\;\; \mu=p_1;\;\;\;\; \nu=s_1+\nu_2s_2 \end{eqnarray} We used here Eq.(\ref{dfg}). Comments analogous to remark \ref{zxd} are also valid for the lines. \item[(3)] This is straightforward. \end{itemize} \end{proof} Therefore in ${\cal L}(\nu _1,\nu _2)$ the $\nu _i=-1,0,...,p_i-1$. There are $\psi (d)=(p_1+1)(p_2+1)$ such lines through the origin, where $\psi(d)$ is the Dedekind $\psi$-function. An example of this factorization for ${\cal G}(21)$ ($p_1=3$ and $p_2=7$) is given in table \ref{t1}. Ref. \cite{W2} has shown that there exists a bijective map (duality) between the lines in ${\cal G}(d)$ and the weak mutually unbiased bases in $H(d)$ as follows: \begin{eqnarray}\label{aaa} {\cal B}(\nu _1, \nu _2)\;\leftrightarrow\; {\cal L} (\nu _1, \nu _2). \end{eqnarray} This finite geometry is a non-near-linear geometry. The common points between two lines are described in the following proposition: \begin{proposition}\label{1234} Two maximal lines $L(1,\mu ^{-1}\nu)={\cal L}(\nu _1,\nu _2)$ and $L(1,\mu ^{-1}\nu')={\cal L}(\nu _1',\nu _2')$ (where $\nu=\nu_1s_1+\nu _2 s_2$ and $\nu'=\nu_1's_1+\nu _2' s_2$) through the origin, have in common $r(\nu _1,\nu _2|\nu _1',\nu _2')$ points, where $r(\nu _1,\nu _2|\nu _1',\nu _2')$ has been given in Eq.(\ref{200}). 
\end{proposition} \begin{proof} The common points in the two lines should satisfy the relation \begin{eqnarray} (\lambda, \lambda \mu ^{-1}\nu)=(\lambda, \lambda \mu ^{-1}\nu')\;\;\rightarrow\;\;\lambda [(\nu_1-\nu _1')s_1+(\nu _2-\nu _2') s_2]=0. \end{eqnarray} We first assume that $\nu _1\ne \nu _1'$ and $\nu _2\ne \nu _2'$. In this case $(\nu_1-\nu _1')s_1+(\nu _2-\nu _2') s_2$ is always different from zero, because the map of Eq.(\ref{map1}) is bijective (and $0\leftrightarrow (0,0)$). Therefore in this case $\lambda =0$. We next consider the case $\nu _1= \nu _1'$ and $\nu _2\ne \nu _2'$. In this case any $\lambda$ which is a multiple of $p_2$ gives a solution, because $p_2s_2=0$ (Eq.(\ref{20A})). Therefore there are $p_1$ values of $\lambda$ which lead to common points. The case $\nu _1 \ne \nu _1'$ and $\nu _2= \nu _2'$ is analogous to the above. \end{proof} Table \ref{t1} shows this duality explicitly for the case $d=21$. In the present paper we show that there is another bijective map between these two sets and the set of zeros, in an analytic representation approach to weak mutually unbiased bases. \begin{example}\label{exa} We give an example of two lines through the origin in ${\cal G}(21)$, which have three points in common. The lines $L(1,8)={\cal L} (2,3)$ and $L(1,11)={\cal L} (2,5)$ have in common the three points $(0,0)$, $(7,14)$, $(14,7)$, and they are shown in Fig.\ref{f1}. This example shows that our geometry is a non-near-linear geometry. The analogue of this in terms of bases involves ${\cal B} (2,3)$ and ${\cal B} (2,5)$. In this case \begin{eqnarray} ({\cal B}(2,3), {\cal B}(2,5))=\frac{3}{21}\;\;{\rm or}\;\;0. \end{eqnarray} An analogous example for two lines of zeros in ${\mathfrak Z}(21)$ is given later. \end{example} \section{Analytic representation of the weak mutually unbiased bases} We first present a lemma which is needed in the proof of the proposition below. 
\begin{lemma} \begin{alignat}{1}\label{27y} \prod _i\omega _i[\phi(\overline m_i,\overline k_i, \nu _i)]=\omega [\mu ^{-1}\phi(m,k, \nu)];\;\;\;\;\;\mu ^{-1}=p_2^{-1}s_1+p_1^{-1}s_2\;({\rm mod} \;d). \end{alignat} where $\phi(m,j,\nu)=-jm+2^{-1}\nu j^2$ (see Eq.(\ref{3cv})). \end{lemma} \begin{proof} We use Eqs.(\ref{20A}) to prove that \begin{equation} km\mu ^{-1}=\overline m_1 \overline k_1 p_2+\overline m_2 \overline k_2 p_1\;({\rm mod}\ d);\;\;\;\;\;\nu_1({\overline k}_1)^2p_2+\nu_2({\overline k}_2)^2p_1=\mu ^{-1}\nu k^2\;({\rm mod}\ d). \end{equation} Eq.(\ref{27y}) follows from these relations. \end{proof} We have explained earlier that Theta functions are Gaussian functions wrapped on a circle. Symplectic transformations on Gaussian functions on the real line give Gaussian functions. The proposition below proves an analogous statement for Theta functions. This is needed later in order to prove that the zeros of the analytic representation of the state $\ket{{\cal X}(\nu _1,\nu _2);\overline m _1,\overline m _2}$ are on a straight line. \begin{proposition}\label{1v9} The analytic representation (defined in Eq.(\ref{aaa1})) of the state $\ket{{\cal X}(\nu _1,\nu _2);\overline m _1,\overline m _2}$ where $\nu _i=-1,...,p_i-1$ and $\overline m _i\in {\mathbb Z}(p_i)$, is given by: \begin{itemize} \item[(1)] in the case $\nu _i=0,...,p_i-1$ \begin{eqnarray}\label{cvb} &&|{\cal X}(\nu _1,\nu _2);\overline m_1,\overline m_2\rangle \quad \rightarrow \quad G(z)=\pi ^{-1/4}\exp \left (-\frac{ \pi}{d}z^2\right) \Theta_3 (u; \tau)\nonumber\\ &&\tau=\frac{i-\nu\mu ^{-1}(d+1)}{d};\;\;\;\;\;u=-\pi \mu ^{-1}\left (\frac{\overline m_1}{p_1}+\frac{\overline m_2}{p_2}\right )+i\frac{\pi z}{p_1p_2}\nonumber\\ &&\nu=\nu _1s_1+\nu _2 s_2 \end{eqnarray} where $\mu ^{-1},s_i$ are constants given in Eqs.(\ref{135}),(\ref{20}). 
\item[(2)] in the case $\nu _1=-1$ and $\nu _2=0,...,p_2-1$ \begin{eqnarray}\label{ggg} &&|{\cal X}(-1,\nu _2);\overline m_1,\overline m_2\rangle \quad \rightarrow \quad G(z)=\pi^{-1/4}\exp\left( \frac{-\pi p_2w^2}{p_1}\right) \Theta_3 (u;\tau)\nonumber\\ &&\tau=\frac{-\nu _2(p_2+1)+ip_1}{p_2};\;\;\;\;u=-\frac{\pi \overline m_2}{p_2}+i\pi w ;\;\;\;\;w=\frac{ z}{p_2}- \overline m_1. \end{eqnarray} An analogous result holds in the case $\nu _1=0,...,p_1-1$ and $\nu _2=-1$. \item[(3)] in the case $\nu _1=\nu _2 =-1$ \begin{eqnarray}\label{cvb10} |{\cal X}(-1,-1);m\rangle=|X;m\rangle \quad \rightarrow \quad G(z)=\pi ^{-1/4} \Theta_3\left[ \frac{\pi m}{ d}-z\left (\frac {\pi}{d}\right ) ; \frac{i}{d}\right] \end{eqnarray} \end{itemize} \end{proposition} \begin{proof} \mbox{} \begin{itemize} \item[(1)] Using Eq.(\ref{3cv}) with $\mu=1$ we get \begin{eqnarray} |X_i( \nu_i);\overline{m}_i\rangle = \frac{1}{ \sqrt{p_i}} \sum_{\overline{k}_i=0}^{p_i-1} \omega _i[\phi ( \overline{m}_i,\overline{k}_i,\nu _i)]|X_i;\overline{k}_i\rangle \end{eqnarray} Therefore \begin{alignat}{1} |{\cal X}(\nu _1,\nu _2);\overline m_1,\overline m_2\rangle &=\sum_j|X;j\rangle\langle X;j|{\cal X}(\nu _1,\nu _2);\overline m_1,\overline m_2\rangle\nonumber \\ &=\sum_j|X;j\rangle[ \langle X_1;\overline j_1|\otimes \langle X_2;\overline j_2|]| {\cal X}(\nu _1, \nu _2);\overline m_1,\overline m_2\rangle\nonumber\\ &=\sum_j|X;j\rangle\left[\langle X_1;\overline j_1|X_1(\nu _1);\overline m_1\rangle\right] \left[\langle X_2;\overline j_2|X_2(\nu _2);\overline m_2\rangle\right] \nonumber \\ &=\sum_j\frac{1}{\sqrt d}\prod _i\omega _i[\phi(\overline m_i,\overline j_i, \nu _i)]|X;j\rangle . \end{alignat} We then use Eq.(\ref{27y}) and we get \begin{eqnarray} |{\cal X}(\nu _1,\nu _2);\overline m_1,\overline m_2\rangle =\sum_j\frac{1}{\sqrt d}\omega [\mu ^{-1}\phi(m,j, \nu)]|X;j\rangle . 
\end{eqnarray} Using lemma \ref{l} and Eq.(\ref{aaa1}), we represent $|{\cal X}(\nu _1,\nu _2);\overline m_1,\overline m_2\rangle$ with the sum \begin{eqnarray}\label{cvb34} |{\cal X}(\nu _1,\nu _2);\overline m_1,\overline m_2\rangle \quad \rightarrow \quad \frac{ \pi^{-1/4}}{ \sqrt{d}}\sum_{j} \omega [ -\mu ^{-1} \phi (m,j,\nu)] \Theta_3 \left[ \frac{\pi j}{ d}-z\left (\frac {\pi}{d}\right ); \frac{i}{d}\right] \end{eqnarray} We next show that this sum is equal to the Theta function shown on the right hand side of Eq.(\ref{cvb}). Using the property of Theta functions in Eq.(\ref{y}) and Eq.(\ref{theta}), we find that \begin{eqnarray} &&\Theta_3 \left[ \frac{ \pi j}{ d}-z\left (\frac { \pi}{d}\right ) ; \frac{i}{d}\right]=\sqrt{d}\exp\left[ \frac{-\pi j^2}{d} + 2 j\left( \frac{ \pi}{d}\right)z -\left( \frac{ \pi}{d}z^2\right)\right] \Theta_3\left[-i\pi j + i\pi z; id\right]\nonumber\\&&=\sqrt{d}\exp\left[ \frac{-\pi j^2}{d} + 2 j\left( \frac{ \pi}{d}\right)z -\left( \frac{ \pi}{d}z^2\right)\right] \sum_{n= -\infty}^{ \infty} \exp\left[-\pi d n^2 + 2n\pi j - 2n\pi z\right]. \end{eqnarray} In this paper we consider the case of odd $d$, and then $2^{-1}=\frac{d+1}{2}$. Therefore we get \begin{eqnarray} &&\frac{ \pi^{-1/4}}{ \sqrt{d}}\sum_{j} \omega [-\mu ^{-1}\phi (m,j,\nu)] \Theta_3 \left[ \frac{\pi j}{ d}-z\left (\frac {\pi}{d}\right ) ; \frac{i}{d}\right]\nonumber\\ &&= \pi^{-1/4}\exp\left[ \frac{-\pi}{d}z^2\right] \sum_{n= -\infty}^{ \infty} \sum_{j=0}^{d-1} \exp\left[\frac{-\pi}{d}(-j +nd)^2\right] \nonumber\\ && \exp\left[-\frac{i\pi\nu\mu ^{-1} (d+1)}{d}(-j +nd)^2\right] \exp\left[-\frac{2i\pi m\mu ^{-1}}{d}(-j +nd)\right] \nonumber\\ && \exp\left[-2(-j+nd)\left( \frac{ \pi}{d}\right)z\right] \end{eqnarray} We now change variable to $N=nd-j$. Since $n$ takes all integer values and $0\le j\le d-1$, the variable $N$ takes all integer values. 
Therefore the above sum becomes \begin{eqnarray} &&\pi^{-1/4}\exp\left[ \frac{-\pi}{d}z^2\right] \sum_{N= -\infty}^{ \infty} \exp\left[ \frac{-\pi }{d}N^2 -\frac{i\pi\nu \mu ^{-1}(d+1)}{d}N^2 - \frac{2i\pi m\mu ^{-1} N}{d} - 2N\left( \frac{ \pi}{d}\right)z\right]. \end{eqnarray} This is the result in Eq.(\ref{cvb}). \item[(2)] We first point out that \begin{eqnarray} |{\cal X}(-1,\nu_2);\overline{ m}_1,\overline{m}_2\rangle =\sum_{k} \delta(\overline{k}_1,\overline{m}_1) \omega _2( \phi( \overline{ m}_2,\overline{k}_2,\nu_2) ) |X;k\rangle. \end{eqnarray} where $k=\overline k_1 p_2+\overline k_2 p_1$. Summation over $k$ is equivalent to summation over both $\overline k_1, \overline k_2$. Its analytic representation is \begin{eqnarray} &&|{\cal X}(-1,\nu _2);\overline m_1,\overline m_2\rangle \quad \rightarrow \quad \frac{ \pi^{-1/4}}{ \sqrt{d}}\sum_{\overline k _1} \sum_{\overline k _2}\delta(\overline{k}_1,\overline{m}_1)\omega _2( -\phi( \overline{m}_2,\overline{k}_2,\nu_2) ) \Theta_3 \left[ \frac{\pi k}{ d}-z\left (\frac {\pi}{d}\right ); \frac{i}{d}\right]\nonumber\\&&= \frac{ \pi^{-1/4}}{ \sqrt{d}}\sum_{\overline k _2}\omega _2( -\phi( \overline{m}_2,\overline{k}_2,\nu_2) ) \Theta_3 \left[ \frac{\pi (\overline m_1p_2+\overline k_2 p_1)}{ d}-z\left (\frac {\pi}{d}\right ); \frac{i}{d}\right]\nonumber\\ &&=\frac{ \pi^{-1/4}}{ \sqrt{d}}\sum_{\overline k _2}\omega _2( -\phi( \overline{m}_2,\overline{k}_2,\nu_2) ) \sqrt{d}\exp\left[ \frac{-\pi (\overline{m}_1p_2 + \overline{k}_2p_1)^2}{d} + 2 (\overline{m}_1p_2 + \overline{k}_2p_1)\left( \frac{ \pi}{d}\right)z -\left( \frac{ \pi}{d}z^2\right)\right]\nonumber\\&& \times \Theta_3\left[-i\pi (\overline{m}_1p_2 + \overline{k}_2p_1) + i\pi z; id\right]\nonumber\\ &&= \pi^{-1/4}\exp\left[ \frac{-\pi(\overline m_1p_2-z)^2}{p_1p_2}\right] \sum_{n= -\infty}^{ \infty} \sum_{ \overline{k}_2} \exp\left[-\frac{i\pi\nu _2 (p_2+1)}{p_2}(np_2- \overline{k}_2 )^2-\frac{2i\pi\overline{m}_2}{p_2}(np_2 - \overline{k}_2 )\right ] \nonumber\\ && 
\times \exp\left[-\frac{\pi p_1}{p_2}(np_2-\overline k_2)^2+2\pi\left(\overline m_1-\frac{z}{p_2}\right)(np_2-\overline k_2)\right] \end{eqnarray} We now change variable into $N=np_2- \overline{k}_2 $, and we get the result in Eq.(\ref{ggg}). \item[(3)] Eq.(\ref{cvb10}) is obvious from the definition of the analytic representation. \end{itemize} \end{proof} \begin{remark} The $\tau$ in Eq.(\ref{cvb}) contains $\nu \mu ^{-1}$ which is an integer modulo $d$. Consequently, $\tau$ is defined up to an integer multiple of $d+1$. Since $d+1$ is an even integer, the $\Theta _3$ does not change (Eq.(\ref{theta})). \end{remark} Below we consider the states in WMUB $|{{\cal X}(\nu _1,\nu _2);\overline m_1,\overline m_2} \rangle$, and using proposition \ref{1v9}, we show that the zeros of their analytic representation are on a straight line. \begin{proposition}\label{rrr} The $d$ zeros of the analytic representation of the vector $|{{\cal X}(\nu _1,\nu _2);\overline m_1,\overline m_2} \rangle$ where $\nu _i=-1,...,p_i-1$ and $\overline m_i\in {\mathbb Z}(p_i)$, are on a straight line and they are given by: \begin{itemize} \item[(1)] in the case $\nu_i =0,...,p_i-1$ for $i=1,2$ \begin{eqnarray}\label{MM1} &&\zeta (\nu_1,\nu _2 ;\overline m_1,\overline m_2; N)=\alpha(1-i\beta)+\gamma\nonumber\\ &&\alpha=N-\frac{1}{2};\;\;\;\;\beta=-\mu^{-1} \nu(d+1);\;\;\;\;\;\gamma=-idM+i\frac{d}{2}-i\mu ^{-1}m\nonumber\\ &&N=K+1,...,K+d;\;\;\;\;\;\;m={\overline m}_1p_2+{\overline m}_2p_1;\;\;\;\;\;\nu=\nu_1s_1+\nu_2 s_2 \end{eqnarray} where $\mu ^{-1}, s_i$ are constants given in Eqs.(\ref{135}), (\ref{20}). Appropriate choices of the `winding integers' $K,M$, locate the zeros in the desirable cell. 
\item[(2)] in the case $\nu _1=-1$ and $\nu _2=0,...,p_2-1$ \begin{eqnarray}\label{620} &&\zeta (-1,\nu _2; \overline m_1,\overline m_2; N)=\alpha (p_1-i\beta ')+\gamma '\nonumber\\ &&\alpha=N-\frac{1}{2};\;\;\;\;\beta '=-\nu _2(1+p_2);\;\;\;\;\;\gamma'={\overline m}_1p_2- i{\overline m}_2-ip_2\left( M -\frac{1}{2}\right)\nonumber\\ &&N=K_1+1,...,K_1+p_2;\;\;\;\;\;M=K_2+1,...,K_2+p_1;\;\;\;\;\;\;m={\overline m}_1p_2+{\overline m}_2p_1. \end{eqnarray} Appropriate choices of the `winding integers' $K_1, K_2$, locate the zeros in the desirable cell. Similar result holds for the case $\nu _2=-1$ and $\nu _1=0,...,p_1-1$. \item[(3)] in the case $\nu _1=\nu _2=-1$ \begin{eqnarray}\label{619} &&\zeta (-1, -1;\overline m_1,\overline m_2; N)=-i\alpha +\gamma '';\;\;\;\; \gamma ''=m-Md +\frac{d}{2}+id;\;\;\;\;\;N=K+1,...,K+d\nonumber\\ &&\alpha=N-\frac{1}{2};\;\;\;\;\;\;m={\overline m}_1p_2+{\overline m}_2p_1 \end{eqnarray} Appropriate choices of the `winding integers' $K,M$, locate the zeros in the desirable cell. \end{itemize} \end{proposition} \begin{proof} \begin{itemize} \item[(1)] From Eq.(\ref{cvb}) we see that $|{\cal X}(\nu_1,\nu_2);\overline m_1,\overline m_2\rangle$ is represented by a single Theta function. Therefore the zeros in the case $\nu =\nu_1s_1+\nu_2s_2 =0,...,d-1$ are: \begin{eqnarray}\label{eq3} -\frac{\pi \mu^{-1}m}{ d}+i\zeta \left (\frac {\pi}{d}\right )=(2M-1)\frac{\pi}{2}+(2N-1)\frac{\pi \tau}{2} \end{eqnarray} where $M,N$ are integers, and $\tau$ is given in Eq.(\ref{cvb}). From this we get the result of Eqs.(\ref{MM1}). \item[(2)] In Eq.(\ref{ggg}) $|{\cal X}(-1,\nu_2);\overline m_1,\overline m_2\rangle$ is represented by a single Theta function. Therefore the zeros in the case $\nu_2 =0,...,p_2-1$ are: \begin{eqnarray}\label{eq2} -\frac{\pi \overline {m}_2}{ p_2}+i\zeta \left (\frac {\pi}{p_2}\right )- i\pi\overline{m}_1=(2M-1)\frac{\pi}{2}+(2N-1)\frac{\pi \tau}{2} \end{eqnarray} where $M,N$ are integers, and $\tau$ is given in Eq.(\ref{ggg}). 
From this we get the result of Eqs.(\ref{620}). \item[(3)] in the case $\nu_1=\nu_2 =-1$, the zeros of the Theta function in Eq.(\ref{cvb10}) give \begin{eqnarray}\label{eq1} \frac{\pi m}{ d}-\zeta \left (\frac {\pi}{d}\right )=(2M-1)\frac{\pi}{2}+(2N-1)\frac{i\pi}{2d} \end{eqnarray} and from this follows Eq.(\ref{619}). \end{itemize} \end{proof} In $\zeta (\nu_1,\nu _2;\overline m_1,\overline m_2; N)$ we used the `factorized notation' for the zeros corresponding to the vector $|{\cal X}(\nu_1,\nu_2);\overline m_1,\overline m_2\rangle$. The correspondence between the two notations is given in Eqs(\ref{np1}),(\ref{np2}),(\ref{np3}),(\ref{np4}) for the various vectors, and from this follows that in the zeros in the unfactorized notation are \begin{eqnarray} &&\zeta (\nu _1,\nu _2;\overline m_1,\overline m_2)= {\cal \zeta} '(1,\mu ^{-1}\nu ;m \mu ^{-1})\nonumber\\ &&\nu=\nu _1s_1+\nu _2s_2;\;\;\;\;\;;m={\overline m}_1p_2+{\overline m}_2p_1;\;\;\;\;\nu_i=0,...,p_i-1 \end{eqnarray} and \begin{eqnarray} &&\zeta(-1,\nu _2;\overline m_1,\overline m_2)=\zeta '(p_1, s_1+\nu _2s_2 ;m);\;\;\;\;\;m={\overline m}_1p_2+{\overline m}_2p_1\nonumber\\ &&\zeta(\nu _1,-1;\overline m_1,\overline m_2)=\zeta '(p_2, s_2+\nu _1s_1 ;m)\nonumber\\ &&\zeta(-1,-1;\overline m_1,\overline m_2)=\zeta '(0,1;m) \end{eqnarray} We refer to the following set of $d$ zeros \begin{eqnarray} {\cal Z}(\nu _1,\nu_2;\overline m_1,\overline m_2)=\{\zeta (\nu _1,\nu_2; \overline m_1,\overline m_2; N);\;N=1,...,d\};\;\;\;\nu_i\in {\mathbb Z}(p_i) \end{eqnarray} as the `line' of the $d$ zeros corresponding to $\ket{{\cal X}(\nu _1,\nu_2);\overline m_1,\overline m_2}$. 
In the unfactorized notation this is \begin{eqnarray} &&{\cal Z}'(1,\mu ^{-1}\nu ;m )=\{{\cal \zeta} '(1,\mu ^{-1}\nu ; m; N);\;N=1,...,d\}\nonumber\\ &&{\cal Z}'(p_1, s_1+\nu _2s_2 ;m)=\{{\cal \zeta} '(p_1, s_1+\nu _2s_2 ; m; N);\;N=1,...,d\}\nonumber\\ &&{\cal Z}'(p_2, s_2+\nu _1s_1 ;m)=\{\zeta '(p_2, s_2+\nu _1s_1 ; m; N);\;N=1,...,d\}\nonumber\\ &&{\cal Z}'(0,1;m)=\{{\cal \zeta} '(0,1;m;N);\;N=1,...,d\} \end{eqnarray} \begin{proposition} The $d^2$ zeros in the cell ${\mathfrak S}$, of all $d$ vectors in the basis ${\cal B}(\nu _1,\nu _2)$ are \begin{eqnarray}\label{62} {\mathfrak z}(r,s)=(r+is)+\frac{1}{2}(1+i);\;\;\;\;\;r,s=0,...,d-1. \end{eqnarray} and they do not depend on $(\nu _1, \nu_2)$. We denote as ${\mathfrak Z}(d)$ the lattice of these zeros. \end{proposition} \begin{proof} We consider three cases: \begin{itemize} \item[(1)] In the case $\nu_i =0,...,p_i-1$ for $i=1,2$ we use Eq. (\ref{MM1}). $N$ takes all values $1,...,d$ in the real axis. For each $N$, the $i\mu ^{-1}m$ gives all required values $1,...,d$ in the imaginary axis. We note that when $m$ takes all values in ${\mathbb Z}(d)$, the $\mu ^{-1}m$ also takes all values in ${\mathbb Z}(d)$, because $\mu ^{-1}$ is invertible. \item[(2)] In the case $\nu _1=-1$ and $\nu _2=0,...,p_2-1$, we use Eq.(\ref{620}). $Np_1+{\overline m}_1p_2$ takes all values $1,...,d$ in the real axis. Indeed, $Np_1$ gives the integer multiples of $p_1$ and ${\overline m}_1p_2$ gives the `in between' values. It is important here that $p_2$ is an invertible element within ${\mathbb Z}(p_1)$. For each $Np_1+{\overline m}_1p_2$, the $i(p_2 M+{\overline m}_2)$ gives all required values $1,...,d$ in the imaginary axis. Indeed, $Mp_2$ gives the integer multiples of $p_2$ and ${\overline m}_2$ gives the `in between' values. Similar result holds for the case $\nu _2=-1$ and $\nu _1=0,...,p_1-1$. \item[(3)] In the case $\nu _1=\nu _2=-1$ we use Eq.(\ref{619}). The $m$ takes all values $1,...,d$ in the real axis. 
For each $m$, the $N$ gives all required values $1,...,d$ in the imaginary axis. \end{itemize} The above arguments do not depend on the value of $(\nu _1, \nu_2)$. \end{proof} \section{Triality between lines in finite geometries, WMUBs, and the zeros of their analytic representations} \begin{definition} ${\cal A}(\nu _1,\nu _2)$ is the set of the $d$ parallel lines of zeros in ${\mathfrak S}$ of the $d$ vectors in a weak mutually unbiased basis: \begin{eqnarray} {\cal A}(\nu _1,\nu _2)=\{{\cal Z}(m;\nu _1,\nu_2)|m\in {\mathbb Z}(d)\};\;\;\;\nu_i\in{\mathbb Z}(p_i) \end{eqnarray} In the `unfactorized notation' this is \begin{eqnarray} &&A(1,\mu ^{-1}\nu )=\{{\cal Z} '(1,\mu ^{-1}\nu ;m )|m\in {\mathbb Z}(d)\}\nonumber\\ &&A(p_1, s_1+\nu _2s_2 )=\{{\cal Z} '(p_1, s_1+\nu _2s_2 ;m)|m\in {\mathbb Z}(d)\}\nonumber\\ &&A(p_2, s_2+\nu _1s_1 )=\{{\cal Z} '(p_2, s_2+\nu _1s_1 ;m)|m\in {\mathbb Z}(d)\}\nonumber\\ &&A(0,1)=\{{\cal Z} '(0,1;m)|m\in {\mathbb Z}(d)\}. \end{eqnarray} \end{definition} Each of these sets is characterized by the slope of the lines it contains. In the proposition below, we use the slopes of these lines. We also define the slope of a line $L(\rho, \sigma)$ in ${\cal G}(d)$ as $\frac{\sigma}{\rho}$. Two lines $L(\rho, \sigma)$ and $L(\rho ', \sigma ')$ have the same slope if \begin{eqnarray}\label{sdf} \rho \sigma'-\rho '\sigma=0\;({\rm mod}\;d).
\end{eqnarray} \begin{theorem} \mbox{} \begin{itemize} \item[(1)] There is a triality between \begin{itemize} \item the weak mutually unbiased bases in $H(d)$ \item the non-near linear finite geometry ${\mathcal G}(d)$ associated with the phase space ${\mathbb Z}(d)\times {\mathbb Z}(d)$ \item the lattice ${\mathfrak Z}(d)$ in the cell ${\mathfrak S}$, which we also regard as a non-near linear finite geometry ${\mathbb Z}(d)$ \end{itemize} as follows: \begin{eqnarray}\label{trio} {\cal B}(\nu _1,\nu _2)\;\leftrightarrow\;{\cal L}(\nu _1,\nu _2)\;\leftrightarrow\;{\cal A}(\nu _1,\nu _2) \end{eqnarray} \item[(2)] In this triality \begin{itemize} \item the overlap between vectors in the WMUBs is $({\cal B}(\nu_1,\nu_2),{\cal B}(\nu_1',\nu_2'))=r(\nu _1, \nu _2|\nu _1', \nu _2')/d$, where $r(\nu _1, \nu _2|\nu _1', \nu _2')$ has been given in Eq.(\ref{200}). \item two lines ${\cal L}(\nu_1,\nu_2)$ and ${\cal L}(\nu_1',\nu_2')$ have in common $r(\nu _1, \nu _2|\nu _1', \nu _2')$ points \item for any $m$, the lines ${\cal Z}(m; \nu_1,\nu_2)$ in ${\cal A}(\nu_1,\nu_2)$ and ${\cal Z}(m; \nu_1',\nu_2')$ in ${\cal A}(\nu_1',\nu_2')$ have $r(\nu _1, \nu _2|\nu _1', \nu _2')$ points in common \end{itemize} \end{itemize} \end{theorem} \begin{proof} \mbox{} \begin{itemize} \item[(1)] We have explained earlier (Eq.(\ref{aaa})) that ${\cal B}(\nu _1,\nu _2)\;\leftrightarrow\;{\cal L}(\nu _1,\nu _2)$ and we now prove that ${\cal L}(\nu _1,\nu _2)\;\leftrightarrow\;{\cal A}(\nu _1,\nu _2)$. The proof is based on showing that the corresponding slopes are equal. We consider the following three cases: \begin{itemize} \item In the case $\nu _i=0,...,p_i-1$, Eq.(\ref{MM1}) shows that the slope of ${\cal A}(\nu _1,\nu _2)$ in ${\mathfrak Z}(d)$ is $\mu ^{-1}\nu(d+1)$. Eq.(\ref{line2}) shows that the slope of the line ${\cal L}(\nu _1,\nu_2)=L(1,\mu ^{-1}\nu)$ in ${\cal G}(d)$ is also $\mu ^{-1}\nu$. The two slopes are equal (modulo $d$). 
\item In the case $\nu _1=-1$ and $\nu _2=0,...,p_2-1$, Eq.(\ref{620}) shows that the slope of ${\cal A}(-1,\nu _2)$ in ${\mathfrak Z}(d)$ is $\frac{\nu _2(1+p_2)}{p_1}$. Eq.(\ref{line3}) shows that the slope of the line ${\cal L}(-1,\nu_2)=L(p_1,s_1+s_2\nu_2)$ in ${\cal G}(d)$ is $\frac{s_1+s_2\nu _2}{p_1}$. These two slopes are equal according to Eq.(\ref{sdf}). An analogous result holds for the case $\nu _2=-1$ and $\nu _1=0,...,p_1-1$. \item In the case $\nu_1=\nu _2=-1$, Eq.(\ref{619}) shows that the ${\cal A}(-1,-1)$ in ${\mathfrak Z}(d)$ is vertical. The line ${\cal L}(-1,-1)=L(0,1)$ in ${\cal G}(d)$ is also vertical. \end{itemize} \item[(2)] In Eq.(\ref{jjj}), we have explained that $({\cal B}(\nu_1,\nu_2),{\cal B}(\nu_1',\nu_2'))=r(\nu _1, \nu _2|\nu _1', \nu _2')/d$. Also in proposition \ref{1234} we have shown that two lines ${\cal L}(\nu_1,\nu_2)$ and ${\cal L}(\nu_1',\nu_2')$ have in common $r(\nu _1, \nu _2|\nu _1', \nu _2')$ points. Below we prove an analogous result for the lines of zeros. We consider the lines ${\cal L}(\nu _1, \nu_2)$ and ${\cal L}(\nu _1', \nu_2')$ and assume that they have $r$ points in common (where $r|d$). We show that the lines ${\cal Z}(m; \nu_1,\nu_2)$ and ${\cal Z}(m; \nu_1',\nu_2')$ also have $r$ points in common, i.e., that $\zeta (m,\nu_1,\nu _2,N)=\zeta (m,\nu_1',\nu _2',N')$ for $r$ pairs $(N,N')$. We give an explicit proof only for the case that all $\nu_i,\nu_i'=0,...,d-1$. The proof in the other cases is similar. In this case, using Eq.(\ref{line2}) we conclude that there exist $r$ pairs $(\lambda , \lambda ')$ such that \begin{eqnarray} (\lambda , \lambda \mu ^{-1}\nu )=(\lambda', \lambda ' \mu ^{-1}\nu ');\;\;\;\;\;\nu=\nu _1s_1+\nu _2s_2. \end{eqnarray} This leads to $\lambda =\lambda '\;({\rm mod}\;d)$ and $\lambda \mu ^{-1}\nu =\lambda' \mu ^{-1}\nu ' \;({\rm mod}\;d)$. We then use Eq.(\ref{MM1}) to prove that \begin{eqnarray} \zeta (m,\nu_1,\nu _2,\lambda )=\zeta (m,\nu_1',\nu _2',\lambda ').
\end{eqnarray} for each of the $r$ pairs $(\lambda , \lambda ')$. \end{itemize} \end{proof} Table \ref{t1} shows explicitly this triality for the case $d=21$. The precise correspondence of the various quantities involved in this triality, is summarized in table \ref{t2}. \begin{example} We consider an example of two sets of lines of zeros in ${\mathfrak Z}(21)$, which is analogous to example \ref{exa} (in this case $p_1=3$ and $p_2=7$). They are the ${\cal A} (2,3)$ and ${\cal A} (2,5)$. We take the line of zeros ${\cal Z}(4;2,3)$ from the set ${\cal A} (2,3)$, and the line of zeros ${\cal Z}(4;2,5)$ from the set ${\cal A} (2,5)$ (i.e., we take as an example, $m=4$). The lines ${\cal Z}(4;2,3)$ and ${\cal Z}(4;2,5)$ are shown in Fig.\ref{f2}, and they have in common the $p_1=3$ zeros: \begin{eqnarray} N=4\;\rightarrow\;{\zeta}(4;2,3,N)={\zeta}(4;2,5;N)=3.5+i4.5\nonumber\\ N=4+p_2=11\;\rightarrow\;{\zeta}(4;2,3,N)={\zeta}(4;2,5;N)=10.5+i18.5\nonumber\\ N=4+2p_2=18\;\rightarrow\;{\zeta}(4;2,3,N)={\zeta}(4;2,5;N)=17.5+i11.5 \end{eqnarray} If we regard the $3.5+i4.5$ as `origin', these three points have coordinates $(0,0)$, $(7,14)$ and $(14,7)$, which are exactly the same as in the example \ref{exa}. Comparison of Figs.\ref{f1},\ref{f2} shows this. We also consider the case $m=5$. The lines ${\cal Z}(5;2,3)$ and ${\cal Z}(5;2,5)$ have in common the $p_1=3$ zeros: \begin{eqnarray} N=4\;\rightarrow\;{\zeta}(5;2,3,N)={\zeta}(5;2,5;N)=3.5+i6.5\nonumber\\ N=4+p_2=11\;\rightarrow\;{\zeta}(5;2,3,N)={\zeta}(5;2,5;N)=10.5+i20.5\nonumber\\ N=4+2p_2=18\;\rightarrow\;{\zeta}(5;2,3,N)={\zeta}(5;2,5;N)=17.5+i13.5 \end{eqnarray} Again we regard the $3.5+i6.5$ as `origin', and these three points have coordinates $(0,0)$, $(7,14)$ and $(14,7)$, as above and as in the example \ref{exa}. 
It is seen that for any $m$, the lines ${\cal Z}(m; \nu_1,\nu_2)$ in ${\cal A}(\nu_1,\nu_2)$ and ${\cal Z}(m; \nu_1',\nu_2')$ in ${\cal A}(\nu_1',\nu_2')$ have $r$ points in common (where $r=1,p_1,p_2$). \end{example} \section{Discussion} The objective of this paper is to use analytic representations and their zeros in the study of the general area of mutually unbiased bases. Quantum states are represented with the analytic functions of Eq.(\ref{aaa1}). The zeros of these analytic functions uniquely determine the quantum state of the system. We have shown that there is a triality that links lines in ${\cal G}(d)$, WMUBs in $H(d)$, and lines of zeros of WMUBs in ${\mathfrak Z}(d)$. The duality between lines in ${\cal G}(d)$ and WMUBs in $H(d)$ is surprising, but with hindsight it might be argued that quantum states in the Hilbert space inherit the properties of the underlying phase space. But the appearance of lines of zeros of WMUBs in ${\mathfrak Z}(d)$ as a third component in this triality is certainly very surprising, and it reaffirms the important (albeit counterintuitive) role of analytic functions in the description of quantum systems. The general methodology is the factorization of a $d$-dimensional system into subsystems with prime dimension (for simplicity we have taken $d=p_1p_2$). Tensor products of mutually unbiased bases in each subsystem lead to weak mutually unbiased bases, with overlaps given in Eq.(\ref{8}). There is a duality between weak mutually unbiased bases and maximal lines through the origin in the ${\cal G}(d)={\mathbb Z}(d) \times {\mathbb Z}(d)$ phase space. This duality has been extended in this paper to a triality, with the involvement of the zeros of the analytic functions that represent the quantum states. The method can also be used when $d=p_1\times...\times p_N$.
In this case, there is an isomorphism between $H(d)$ and $H(p_1)\otimes...\otimes H(p_N)$, and the weak mutually unbiased bases are tensor products of mutually unbiased bases in each $H(p_i)$. Bijective maps between ${\mathbb Z}(d)$ and ${\mathbb Z}(p_1)\times ...\times {\mathbb Z}(p_N)$ (generalizations of Eqs.(\ref{map1}), (\ref{map2})) can be found in \cite{2} (and in \cite{Good}). Using them we can factorize the lines in the finite geometry ${\mathbb Z}(d)\times {\mathbb Z}(d)$. We can also define analytic representations in a cell in the complex plane, and factorize the lines of their zeros. The methodology here is analogous to the one that we presented, but the technical details are more complicated. Existing work in the general area of mutually unbiased bases is based on discrete mathematics. The present work links it with the theory of analytic functions. \begin{table} \centering \caption{Correspondence between WMUBs in the Hilbert space $H(21)$, lines in the ${\cal G}(21)$ phase space, and sets of lines of zeros in the lattice ${\mathfrak Z}(21)$.
Both the `factorized notation' and `unfactorized notation' are shown.} \begin{tabular}{|p{4cm}|p{4cm}|p{4cm}|} \hline $H(21)$ & ${\cal G}(21)$ & $ \mathfrak {Z}(21)$ \\ \hline { \begin{align*} B(0,1) &= \mathcal{B}(-1,-1) \\ B(1,0) &= \mathcal{B}(0,0) \\ B(1,1) &= \mathcal{B}(1,3) \\ B(1,2) &= \mathcal{B}(2,6) \\ B(1,3) &= \mathcal{B}(0,2) \\ B(1,4) &= \mathcal{B}(1,5) \\ B(1,5) &= \mathcal{B}(2,1) \\ B(1,6) &= \mathcal{B}(0,4) \\ B(1,7)&= \mathcal{B}(1,0) \\ B(1,8) &= \mathcal{B}(2,3) \\ B(1,9) &= \mathcal{B}(0,6) \\ B(1,10) &= \mathcal{B}(1,2) \\ {B}(1,11) &= \mathcal{B}(2,5) \\ {B}(1,12) &= \mathcal{B}(0,1) \\ B(1,13) &= \mathcal{B}(1,4) \\ B(1,14) &= \mathcal{B}(2,0) \\ B(1,15) &= \mathcal{B}(0,3) \\ B(1,16) &= \mathcal{B}(1,6) \\ B(1,17) &= \mathcal{B}(2,2) \\ B(1,18) &= \mathcal{B}(0,5) \\ B(1,19) &= \mathcal{B}(1,1) \\ B(1,20) &= \mathcal{B}(2,4) \\ B(3,7) &= \mathcal{B}(-1,0) \\ B(3,1) &= \mathcal{B}(-1,1) \\ B(3,16) &= \mathcal{B}(-1,2) \\ B(3,10) &= \mathcal{B}(-1,3) \\ B(3,4) &= \mathcal{B}(-1,4) \\ B(3,19) &= \mathcal{B}(-1,5) \\ B(3,13) &= \mathcal{B}(-1,6) \\ B(7,15) &= \mathcal{B}(0,-1) \\ B(7,1)&= \mathcal{B}(1,-1) \\ B(7,8) &= \mathcal{B}(2,-1) \\ \end{align*}} & { \begin{align*} L(0,1) &= {\cal L}(-1,-1) \\ L(1,0) &= {\cal L}(0,0) \\ L(1,1) &= {\cal L}(1,3) \\ L(1,2) &= {\cal L}(2,6) \\ L(1,3) &= {\cal L}(0,2) \\ L(1,4) &= {\cal L}(1,5) \\ L(1,5) &= {\cal L}(2,1) \\ L(1,6) &= {\cal L}(0,4) \\ L(1,7)&= {\cal L}(1,0) \\ L(1,8) &= {\cal L}(2,3) \\ L(1,9) &= {\cal L}(0,6) \\ L(1,10) &= {\cal L}(1,2) \\ L(1,11) &= {\cal L}(2,5) \\ L(1,12) &= {\cal L}(0,1) \\ L(1,13) &= {\cal L}(1,4) \\ L(1,14) &= {\cal L}(2,0) \\ L(1,15) &= {\cal L}(0,3) \\ L(1,16) &= {\cal L}(1,6) \\ L(1,17) &= {\cal L}(2,2) \\ L(1,18) &= {\cal L}(0,5) \\ L(1,19) &={\cal L}(1,1) \\ L(1,20) &= {\cal L}(2,4) \\ L(3,7) &= {\cal L}(-1,0) \\ L(3,1) &= {\cal L}(-1,1) \\ L(3,16) &= {\cal L}(-1,2) \\L(3,10) &= {\cal L}(-1,3) \\ L(3,4) &= {\cal L}(-1,4) \\ L(3,19) &= {\cal L}(-1,5) \\ L(3,13) 
&= {\cal L}(-1,6) \\ L(7,15) &= {\cal L}(0,-1) \\ L(7,1)&= {\cal L}(1,-1) \\ L(7,8) &= {\cal L}(2,-1) \\ \end{align*}} & { \begin{align*} A(0,1) &= {\cal A}(-1,-1) \\ A(1,0) &= {\cal A}(0,0) \\ A(1,1) &= {\cal A}(1,3) \\ A(1,2) &= {\cal A}(2,6) \\ A(1,3) &= {\cal A}(0,2) \\ A(1,4) &= {\cal A}(1,5) \\ A(1,5) &= {\cal A}(2,1) \\ A(1,6) &= {\cal A}(0,4) \\ A(1,7)&= {\cal A}(1,0) \\ A(1,8) &= {\cal A}(2,3) \\ A(1,9) &= {\cal A}(0,6) \\ A(1,10) &= {\cal A}(1,2) \\ A(1,11) &= {\cal A}(2,5) \\ A(1,12) &= {\cal A}(0,1) \\ A(1,13) &= {\cal A}(1,4) \\ A(1,14) &= {\cal A}(2,0) \\ A(1,15) &= {\cal A}(0,3) \\ A(1,16) &= {\cal A}(1,6) \\ A(1,17) &= {\cal A}(2,2) \\ A(1,18) &= {\cal A}(0,5) \\ A(1,19) &={\cal A}(1,1) \\ A(1,20) &= {\cal A}(2,4) \\ A(3,7) &= {\cal A}(-1,0) \\ A(3,1) &= {\cal A}(-1,1) \\ A(3,16) &= {\cal A}(-1,2) \\A(3,10) &= {\cal A}(-1,3) \\ A(3,4) &= {\cal A}(-1,4) \\ A(3,19) &= {\cal A}(-1,5) \\ A(3,13) &= {\cal A}(-1,6) \\ A(7,15) &= {\cal A}(0,-1) \\ A(7,1)&= {\cal A}(1,-1) \\ A(7,8) &= {\cal A}(2,-1) \\ \end{align*}} \\ \hline \end{tabular}\label{t1} \end{table} \begin{table}[h] \renewcommand{\arraystretch}{2} \centering \caption{Correspondence between the various quantities in the triality of Eq.(\ref{trio}).} \begin{tabular}{|p{0.33\columnwidth}|p{0.33\columnwidth}|p{0.33\columnwidth}|} \hline {\bf WMUBs in $H(d)$}&{\bf Lines in ${\cal G}(d)$} & {\bf Lines of zeros in ${\mathfrak Z}(d)$}\\ \hline $\psi(d)$ WMUB ${\cal B}(\nu_1,\nu_2)$&$\psi(d)$ maximal lines through the origin ${\cal L}(\nu_1,\nu_2)$& $\psi (d)$ sets ${\cal A}(\nu_1,\nu_2)$ of parallel lines of zeros \\\hline $d$ orthogonal vectors in each WMUB ${\cal B}(\nu_1,\nu_2)$& $d$ points in each ${\cal L}(\nu_1,\nu_2)$& $d$ parallel lines of zeros in each set ${\cal A}(\nu_1,\nu_2)$. Each line contains $d$ zeros.
\\\hline {\small $({\cal B}(\nu_1,\nu_2),{\cal B}(\nu_1',\nu_2'))=\frac{r(\nu _1, \nu _2|\nu _1', \nu _2')}{d}$ } (Eq.(\ref{200}))& two lines ${\cal L}(\nu_1,\nu_2)$ and ${\cal L}(\nu_1',\nu_2')$ have in common $r(\nu _1, \nu _2|\nu _1', \nu _2')$ points& for any $m$, the lines ${\cal Z}(m; \nu_1,\nu_2)$ in ${\cal A}(\nu_1,\nu_2)$ and ${\cal Z}(m; \nu_1',\nu_2')$ in ${\cal A}(\nu_1',\nu_2')$ have $r(\nu _1, \nu _2|\nu _1', \nu _2')$ points in common\\\hline \end{tabular}\label{t2} \end{table} \begin{figure} \caption{The lines ${\cal L}(2,3)$ (circles), and ${\cal L}(2,5)$ (crosses), in the ${\cal G} (21)$ finite geometry. The two lines have in common the three points $(0,0)$, $(7,14)$, $(14,7)$.} \label{f1} \end{figure} \begin{figure} \caption{The lines of zeros ${\cal Z}(4;2,3)$ (circles), and ${\cal Z}(4;2,5)$ (crosses), in the cell ${\mathfrak S}$ in the complex plane. The two lines have in common the zeros $3.5+i4.5$, $10.5+i18.5$, $17.5+i11.5$.} \label{f2} \end{figure} \end{document}
A Property of Equiangular Polygons A Mathematical Droodle Created with GeoGebra (Note: the side lines of the depicted polygon, which is initially regular, can be dragged by the thick points. The side lines always remain parallel to their original position. The operation therefore affects the side lengths but preserves the angles of the polygon. The result is an equiangular polygon, i.e., a polygon with equal angles. This is at least true as long as the polygon remains simple, or, which in this case is the same, convex.) The applet suggests a generalization of Viviani's theorem: The sum of distances from a point to the side lines of an equiangular polygon does not depend on the point and is that polygon's invariant. An equilateral triangle is also equiangular (by SSS), and vice versa. The two properties do not necessarily come together if the number of sides (and, therefore, angles) exceeds $3.\,$ A polygon which is both equiangular and equilateral is regular. Viviani's theorem is easily extended to regular and, in fact, equilateral polygons. The proof is simple: connect a point to the vertices of the polygon to obtain a series of triangles with the apex at the given point and the base on a side of the given polygon. Let $h_{i}\,$ be the (signed) distance from the given point to side $\text{#}i.\,$ If the common side length is denoted $a,\,$ the area $S\,$ of the polygon will be given as $\displaystyle S = \sum \frac{ah_{i}}{2} = \frac{a}{2}\cdot\sum h_{i}.$ (Here the area $\displaystyle \frac{ah_{i}}{2}\,$ is positive if the orientations of the triangle $\text{#}i\,$ and the polygon coincide and is negative otherwise.) Since neither $S\,$ nor $\displaystyle \frac{a}{2}\,$ depends on the choice of point, $\sum h_{i}\,$ is also independent of the point. It's less obvious that the property claimed by Viviani's theorem is also carried over to the equiangular polygons. The proof is simple and elegant.
Quite obviously, an equiangular polygon can be embedded into a regular polygon with parallel sides. This can be done in a continuum of ways. But fix a suitable regular polygon, so that the distances between the corresponding sides of the two polygons remain constant. Then clearly if (1) holds for one of them, it also holds for the other. We have just seen that it does hold for regular polygons, so it holds for the equiangular ones. A converse is true for triangles: if the sum of distances from a point to the sides of a triangle is independent of the point, the triangle is equilateral. To see this, place the point at the vertices of the triangle. We then have that in the triangle all three altitudes are equal. From $\displaystyle S = \frac{ah}{2}\,$ (for a triangle) we conclude that the sides are also equal. The converse does not hold for other polygons. Even for a quadrilateral, a parallelogram serves as a counterexample. Michel Cabart offered a different approach. (This is an extension of a proof without words of Viviani's theorem by Hans Samelson.) Let $A\,$ be a point inside the polygon, $\mathbf{n}_{i}\,$ unit vectors perpendicular to the $i^{th}\,$ side of the polygon, and $H_{i}\,$ the feet of the perpendiculars from $A\,$ to the $i^{th}\,$ side. Since the polygon is equiangular, the angles between successive vectors $\mathbf{n}_{i}\,$ are equal, so that $\sum \mathbf{n}_{i} = 0.\,$ The scalar product $(AX, \mathbf{n}_{i}),\,$ with $X\,$ on the $i^{th}\,$ side, does not depend on the position of $X.$ The sum of distances from $A\,$ to the sides of the polygon is $S_{A} = \sum AH_{i} = \sum (AH_{i}, \mathbf{n}_{i}).$ For another point $B\,$ with the pedal points $G_{i},\,$ the distance sum is $S_{B} = \sum BG_{i} = \sum (BG_{i}, \mathbf{n}_{i}) = \sum (BH_{i}, \mathbf{n}_{i}).$ Hence $S_{A} - S_{B} = (AB, \sum \mathbf{n}_{i}) = 0.$
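The invariance discussed above is easy to check numerically. The sketch below (a hypothetical construction, not taken from the original page) builds an equiangular hexagon with unequal sides and compares the sum of distances to the side lines from two interior points:

```python
import math

# Equiangular hexagon: the side directions are spaced by 60 degrees.
# The side lengths may differ, but must satisfy the closure condition
# sum(l_k * e_k) = 0; pairing opposite sides (l_0 = l_3, etc.) achieves this.
dirs = [(math.cos(k * math.pi / 3), math.sin(k * math.pi / 3)) for k in range(6)]
lens = [2.0, 1.0, 3.0, 2.0, 1.0, 3.0]

verts = [(0.0, 0.0)]
for (ex, ey), l in zip(dirs, lens):
    x, y = verts[-1]
    verts.append((x + l * ex, y + l * ey))
assert abs(verts[-1][0]) < 1e-12 and abs(verts[-1][1]) < 1e-12  # polygon closes

def distance_sum(p):
    """Sum of distances from p to the six side lines."""
    s = 0.0
    for k in range(6):
        ex, ey = dirs[k]
        nx, ny = ey, -ex          # unit normal to side k
        vx, vy = verts[k]
        s += abs(nx * (p[0] - vx) + ny * (p[1] - vy))
    return s

# The sum is the same for different interior points (Viviani-type invariance).
print(distance_sum((0.5, 1.5)), distance_sum((1.0, 2.0)))
```

Since the six unit normals sum to zero, the signed sum is independent of the point, exactly as in the vector proof above.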
Integer factorization

In number theory, integer factorization is the decomposition, when possible, of a positive integer into a product of smaller integers. If the factors are further restricted to be prime numbers, the process is called prime factorization, and includes testing whether the given integer is prime (in this case, one has a "product" of a single factor). It is an unsolved problem in computer science whether integer factorization can be solved in polynomial time on a classical computer. When the numbers are sufficiently large, no efficient non-quantum integer factorization algorithm is known. However, it has not been proven that such an algorithm does not exist. The presumed difficulty of this problem is important for the algorithms used in cryptography such as RSA public-key encryption and the RSA digital signature.[1] Many areas of mathematics and computer science have been brought to bear on the problem, including elliptic curves, algebraic number theory, and quantum computing. In 2019, Fabrice Boudot, Pierrick Gaudry, Aurore Guillevic, Nadia Heninger, Emmanuel Thomé and Paul Zimmermann factored a 240-digit (795-bit) number (RSA-240) utilizing approximately 900 core-years of computing power.[2] The researchers estimated that a 1024-bit RSA modulus would take about 500 times as long.[3] Not all numbers of a given length are equally hard to factor. The hardest instances of these problems (for currently known techniques) are semiprimes, the product of two prime numbers.
When they are both large, for instance more than two thousand bits long, randomly chosen, and about the same size (but not too close, for example, to avoid efficient factorization by Fermat's factorization method), even the fastest prime factorization algorithms on the fastest computers can take enough time to make the search impractical; that is, as the number of digits of the integer being factored increases, the number of operations required to perform the factorization on any computer increases drastically. Many cryptographic protocols are based on the difficulty of factoring large composite integers or a related problem—for example, the RSA problem. An algorithm that efficiently factors an arbitrary integer would render RSA-based public-key cryptography insecure. Prime decomposition By the fundamental theorem of arithmetic, every positive integer has a unique prime factorization. (By convention, 1 is the empty product.) Testing whether the integer is prime can be done in polynomial time, for example, by the AKS primality test. If composite, however, the polynomial time tests give no insight into how to obtain the factors. Given a general algorithm for integer factorization, any integer can be factored into its constituent prime factors by repeated application of this algorithm. The situation is more complicated with special-purpose factorization algorithms, whose benefits may not be realized as well or even at all with the factors produced during decomposition. For example, if n = 171 × p × q where p < q are very large primes, trial division will quickly produce the factors 3 and 19 but will take p divisions to find the next factor. 
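The small-factor behaviour described above is easy to see in code. This is a minimal trial-division sketch (a Category 1 method: its cost is governed by the smallest prime factor), not an implementation from any particular library:

```python
def trial_division(n):
    """Return the prime factorization of n as a list, smallest factors first."""
    factors = []
    d = 2
    while d * d <= n:
        while n % d == 0:   # strip every copy of the factor d
            factors.append(d)
            n //= d
        d += 1
    if n > 1:               # whatever remains is prime
        factors.append(n)
    return factors

print(trial_division(171))  # [3, 3, 19] -- the small factors in the n = 171*p*q example
```

For n = 171 × p × q with large primes p < q, the same loop would need on the order of p divisions after finding 3 and 19, which is exactly the weakness the text describes.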
As a contrasting example, if n is the product of the primes 13729, 1372933, and 18848997161, where 13729 × 1372933 = 18848997157, Fermat's factorization method will begin with $\left\lceil {\sqrt {n}}\right\rceil =18848997159$ which immediately yields $ b={\sqrt {a^{2}-n}}={\sqrt {4}}=2$ and hence the factors a − b = 18848997157 and a + b = 18848997161. While these are easily recognized as composite and prime respectively, Fermat's method will take much longer to factor the composite number because the starting value of $ \left\lceil {\sqrt {18848997157}}\,\right\rceil =137292$ for a is nowhere near 1372933. Current state of the art See also: Integer factorization records Among the b-bit numbers, the most difficult to factor in practice using existing algorithms are those that are products of two primes of similar size. For this reason, these are the integers used in cryptographic applications. The largest such semiprime yet factored was RSA-250, an 829-bit number with 250 decimal digits, in February 2020. The total computation time was roughly 2700 core-years of computing using Intel Xeon Gold 6130 at 2.1 GHz. Like all recent factorization records, this factorization was completed with a highly optimized implementation of the general number field sieve run on hundreds of machines. Difficulty and complexity No algorithm has been published that can factor all integers in polynomial time, that is, that can factor a b-bit number n in time O(bk) for some constant k. Neither the existence nor non-existence of such algorithms has been proved, but it is generally suspected that they do not exist and hence that the problem is not in class P.[4][5] The problem is clearly in class NP, but it is generally suspected that it is not NP-complete, though this has not been proven.[6] There are published algorithms that are faster than O((1 + ε)b) for all positive ε, that is, sub-exponential. 
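Fermat's method from the contrasting example above fits in a few lines (a sketch using Python's `math.isqrt`); with the stated n it succeeds on the very first candidate a = ⌈√n⌉:

```python
import math

def fermat_factor(n):
    """Fermat's factorization: find a, b with a*a - n = b*b, so n = (a-b)*(a+b)."""
    a = math.isqrt(n)
    if a * a < n:
        a += 1                      # start at ceil(sqrt(n))
    while True:
        b2 = a * a - n
        b = math.isqrt(b2)
        if b * b == b2:             # a^2 - n is a perfect square
            return a - b, a + b
        a += 1

n = 13729 * 1372933 * 18848997161   # the example from the text
print(fermat_factor(n))             # (18848997157, 18848997161)
```

Called on the composite cofactor 13729 × 1372933 alone, the same loop would have to advance a from 1372934 toward much larger values, illustrating why the method is fast only when two factors are close together.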
As of 2022, the algorithm with best theoretical asymptotic running time is the general number field sieve (GNFS), first published in 1993,[7] running on a b-bit number n in time: $\exp \left(\left({\sqrt[{3}]{\frac {64}{9}}}+o(1)\right)(\ln n)^{\frac {1}{3}}(\ln \ln n)^{\frac {2}{3}}\right).$ For current computers, GNFS is the best published algorithm for large n (more than about 400 bits). For a quantum computer, however, Peter Shor discovered an algorithm in 1994 that solves it in polynomial time. This will have significant implications for cryptography if quantum computation becomes scalable. Shor's algorithm takes only O(b3) time and O(b) space on b-bit number inputs. In 2001, Shor's algorithm was implemented for the first time, by using NMR techniques on molecules that provide 7 qubits.[8] It is not known exactly which complexity classes contain the decision version of the integer factorization problem (that is: does n have a factor smaller than k?). It is known to be in both NP and co-NP, meaning that both "yes" and "no" answers can be verified in polynomial time. An answer of "yes" can be certified by exhibiting a factorization n = d(n/d) with d ≤ k. An answer of "no" can be certified by exhibiting the factorization of n into distinct primes, all larger than k; one can verify their primality using the AKS primality test, and then multiply them to obtain n. The fundamental theorem of arithmetic guarantees that there is only one possible string of increasing primes that will be accepted, which shows that the problem is in both UP and co-UP.[9] It is known to be in BQP because of Shor's algorithm. The problem is suspected to be outside all three of the complexity classes P, NP-complete, and co-NP-complete. It is therefore a candidate for the NP-intermediate complexity class. 
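The "no" certificate described above (distinct primes, all larger than k, multiplying to n) can be checked mechanically. The sketch below uses a Miller–Rabin test with a fixed base set (known to be deterministic for moderate sizes); it is an illustration of the certificate check, not a production primality test:

```python
def is_probable_prime(n):
    """Miller-Rabin with the first 12 prime bases; deterministic for n < 3.3*10**24."""
    small = (2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37)
    if n < 2:
        return False
    for p in small:
        if n % p == 0:
            return n == p
    d, s = n - 1, 0
    while d % 2 == 0:
        d //= 2
        s += 1
    for a in small:
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(s - 1):
            x = x * x % n
            if x == n - 1:
                break
        else:
            return False        # base a witnesses compositeness
    return True

def verify_no_certificate(n, k, primes):
    """Check a claimed witness that n has no factor smaller than k."""
    prod = 1
    for p in primes:
        prod *= p
    return (prod == n
            and len(set(primes)) == len(primes)   # distinct
            and all(p > k for p in primes)
            and all(is_probable_prime(p) for p in primes))

print(verify_no_certificate(15, 2, [3, 5]))   # True
print(verify_no_certificate(15, 3, [3, 5]))   # False: 3 is not larger than k
```

Both the primality checks and the final multiplication run in polynomial time in the bit length, which is the point of the co-NP (and co-UP) membership argument.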
If it could be proved to be either NP-complete or co-NP-complete, this would imply NP = co-NP, a very surprising result, and therefore integer factorization is widely suspected to be outside both these classes. In contrast, the decision problem "Is n a composite number?" (or equivalently: "Is n a prime number?") appears to be much easier than the problem of specifying factors of n. The composite/prime problem can be solved in polynomial time (in the number b of digits of n) with the AKS primality test. In addition, there are several probabilistic algorithms that can test primality very quickly in practice if one is willing to accept a vanishingly small possibility of error. The ease of primality testing is a crucial part of the RSA algorithm, as it is necessary to find large prime numbers to start with. Factoring algorithms Special-purpose A special-purpose factoring algorithm's running time depends on the properties of the number to be factored or on one of its unknown factors: size, special form, etc. The parameters which determine the running time vary among algorithms. An important subclass of special-purpose factoring algorithms is the Category 1 or First Category algorithms, whose running time depends on the size of smallest prime factor. Given an integer of unknown form, these methods are usually applied before general-purpose methods to remove small factors.[10] For example, naive trial division is a Category 1 algorithm. • Trial division • Wheel factorization • Pollard's rho algorithm, which has two common flavors to identify group cycles: one by Floyd and one by Brent. 
• Algebraic-group factorization algorithms, among which are Pollard's p − 1 algorithm, Williams' p + 1 algorithm, and Lenstra elliptic curve factorization • Fermat's factorization method • Euler's factorization method • Special number field sieve General-purpose A general-purpose factoring algorithm, also known as a Category 2, Second Category, or Kraitchik family algorithm,[10] has a running time which depends solely on the size of the integer to be factored. This is the type of algorithm used to factor RSA numbers. Most general-purpose factoring algorithms are based on the congruence of squares method. • Dixon's algorithm • Continued fraction factorization (CFRAC) • Quadratic sieve • Rational sieve • General number field sieve • Shanks's square forms factorization (SQUFOF) Other notable algorithms • Shor's algorithm, for quantum computers Heuristic running time In number theory, there are many integer factoring algorithms that heuristically have expected running time $L_{n}\left[{\tfrac {1}{2}},1+o(1)\right]=e^{(1+o(1)){\sqrt {(\log n)(\log \log n)}}}$ in little-o and L-notation. Some examples of those algorithms are the elliptic curve method and the quadratic sieve. Another such algorithm is the class group relations method proposed by Schnorr,[11] Seysen,[12] and Lenstra,[13] which they proved only assuming the unproved Generalized Riemann Hypothesis (GRH). Rigorous running time The Schnorr–Seysen–Lenstra probabilistic algorithm has been rigorously proven by Lenstra and Pomerance[14] to have expected running time $L_{n}\left[{\tfrac {1}{2}},1+o(1)\right]$ by replacing the GRH assumption with the use of multipliers. The algorithm uses the class group of positive binary quadratic forms of discriminant Δ denoted by GΔ. GΔ is the set of triples of integers (a, b, c) in which those integers are relative prime. Schnorr–Seysen–Lenstra Algorithm Given an integer n that will be factored, where n is an odd positive integer greater than a certain constant. 
In this factoring algorithm the discriminant Δ is chosen as a multiple of n, Δ = −dn, where d is some positive multiplier. The algorithm expects that for one d there exist enough smooth forms in GΔ. Lenstra and Pomerance show that the choice of d can be restricted to a small set to guarantee the smoothness result. Denote by PΔ the set of all primes q with Kronecker symbol $\left({\tfrac {\Delta }{q}}\right)=1$. By constructing a set of generators of GΔ and prime forms fq of GΔ with q in PΔ a sequence of relations between the set of generators and fq are produced. The size of q can be bounded by $c_{0}(\log |\Delta |)^{2}$ for some constant $c_{0}$. The relation that will be used is a relation between the product of powers that is equal to the neutral element of GΔ. These relations will be used to construct a so-called ambiguous form of GΔ, which is an element of GΔ of order dividing 2. By calculating the corresponding factorization of Δ and by taking a gcd, this ambiguous form provides the complete prime factorization of n. This algorithm has these main steps: Let n be the number to be factored. 1. Let Δ be a negative integer with Δ = −dn, where d is a multiplier and Δ is the negative discriminant of some quadratic form. 2. Take the t first primes $p_{1}=2,p_{2}=3,p_{3}=5,\ldots ,p_{t}$, for some $t\in {\mathbb {N} }$. 3. Let $f_{q}$ be a random prime form of GΔ with $ \left({\frac {\Delta }{q}}\right)=1$. 4. Find a generating set X of GΔ 5. Collect a sequence of relations between set X and {fq : q ∈ PΔ} satisfying: $ \left(\prod _{x\in X_{}}x^{r(x)}\right).\left(\prod _{q\in P_{\Delta }}f_{q}^{t(q)}\right)=1$ 6. Construct an ambiguous form $(a,b,c)$ that is an element f ∈ GΔ of order dividing 2 to obtain a coprime factorization of the largest odd divisor of Δ in which $\Delta =-4ac{\text{ or }}a(a-4c){\text{ or }}(b-2a)(b+2a)$ 7. 
If the ambiguous form provides a factorization of n then stop, otherwise find another ambiguous form until the factorization of n is found. In order to prevent useless ambiguous forms from being generated, build up the 2-Sylow group Sll2(Δ) of G(Δ). To obtain an algorithm for factoring any positive integer, it is necessary to add a few steps to this algorithm such as trial division and the Jacobi sum test. Expected running time The algorithm as stated is a probabilistic algorithm as it makes random choices. Its expected running time is at most $L_{n}\left[{\tfrac {1}{2}},1+o(1)\right]$.[14] See also • Aurifeuillean factorization • Bach's algorithm for generating random numbers with their factorizations • Canonical representation of a positive integer • Factorization • Multiplicative partition • $p$-adic valuation • Partition (number theory) – a way of writing a number as a sum of positive integers. Notes 1. Lenstra, Arjen K. (2011), "Integer Factoring", in van Tilborg, Henk C. A.; Jajodia, Sushil (eds.), Encyclopedia of Cryptography and Security, Boston, MA: Springer US, pp. 611–618, doi:10.1007/978-1-4419-5906-5_455, ISBN 978-1-4419-5905-8, retrieved 2022-06-22 2. "[Cado-nfs-discuss] 795-bit factoring and discrete logarithms". Archived from the original on 2019-12-02. 3. Kleinjung; et al. (2010-02-18). "Factorization of a 768-bit RSA modulus" (PDF). International Association for Cryptologic Research. Retrieved 2010-08-09. 4. Krantz, Steven G. (2011), The Proof is in the Pudding: The Changing Nature of Mathematical Proof, New York: Springer, p. 203, doi:10.1007/978-0-387-48744-1, ISBN 978-0-387-48908-7, MR 2789493 5. Arora, Sanjeev; Barak, Boaz (2009), Computational complexity, Cambridge: Cambridge University Press, p. 230, doi:10.1017/CBO9780511804090, ISBN 978-0-521-42426-4, MR 2500087 6. 
Goldreich, Oded; Wigderson, Avi (2008), "IV.20 Computational Complexity", in Gowers, Timothy; Barrow-Green, June; Leader, Imre (eds.), The Princeton Companion to Mathematics, Princeton, New Jersey: Princeton University Press, pp. 575–604, ISBN 978-0-691-11880-2, MR 2467561. See in particular p. 583. 7. Buhler, J. P.; Lenstra, H. W. Jr.; Pomerance, Carl (1993). Factoring integers with the number field sieve (Lecture Notes in Mathematics, vol 1554 ed.). Springer. pp. 50–94. doi:10.1007/BFb0091539. hdl:1887/2149. ISBN 978-3-540-57013-4. Retrieved 12 March 2021. 8. Vandersypen, Lieven M. K.; et al. (2001). "Experimental realization of Shor's quantum factoring algorithm using nuclear magnetic resonance". Nature. 414 (6866): 883–887. arXiv:quant-ph/0112176. Bibcode:2001Natur.414..883V. doi:10.1038/414883a. PMID 11780055. S2CID 4400832. 9. Lance Fortnow (2002-09-13). "Computational Complexity Blog: Complexity Class of the Week: Factoring". 10. David Bressoud and Stan Wagon (2000). A Course in Computational Number Theory. Key College Publishing/Springer. pp. 168–69. ISBN 978-1-930190-10-8. 11. Schnorr, Claus P. (1982). "Refined analysis and improvements on some factoring algorithms". Journal of Algorithms. 3 (2): 101–127. doi:10.1016/0196-6774(82)90012-8. MR 0657269. Archived from the original on September 24, 2017. 12. Seysen, Martin (1987). "A probabilistic factorization algorithm with quadratic forms of negative discriminant". Mathematics of Computation. 48 (178): 757–780. doi:10.1090/S0025-5718-1987-0878705-X. MR 0878705. 13. Lenstra, Arjen K (1988). "Fast and rigorous factorization under the generalized Riemann hypothesis" (PDF). Indagationes Mathematicae. 50 (4): 443–454. doi:10.1016/S1385-7258(88)80022-2. 14. Lenstra, H. W.; Pomerance, Carl (July 1992). "A Rigorous Time Bound for Factoring Integers" (PDF). Journal of the American Mathematical Society. 5 (3): 483–516. doi:10.1090/S0894-0347-1992-1137100-0. MR 1137100. 
References • Richard Crandall and Carl Pomerance (2001). Prime Numbers: A Computational Perspective. Springer. ISBN 0-387-94777-9. Chapter 5: Exponential Factoring Algorithms, pp. 191–226. Chapter 6: Subexponential Factoring Algorithms, pp. 227–284. Section 7.4: Elliptic curve method, pp. 301–313. • Donald Knuth. The Art of Computer Programming, Volume 2: Seminumerical Algorithms, Third Edition. Addison-Wesley, 1997. ISBN 0-201-89684-2. Section 4.5.4: Factoring into Primes, pp. 379–417. • Samuel S. Wagstaff Jr. (2013). The Joy of Factoring. Providence, RI: American Mathematical Society. ISBN 978-1-4704-1048-3. • Warren, Henry S. Jr. (2013). Hacker's Delight (2 ed.). Addison Wesley - Pearson Education, Inc. ISBN 978-0-321-84268-8. External links • msieve - SIQS and NFS - has helped complete some of the largest public factorizations known • Richard P. Brent, "Recent Progress and Prospects for Integer Factorisation Algorithms", Computing and Combinatorics, 2000, pp. 3–22. • Manindra Agrawal, Neeraj Kayal, Nitin Saxena, "PRIMES is in P." Annals of Mathematics 160(2): 781–793 (2004). • Eric W. 
Weisstein, “RSA-640 Factored” MathWorld Headline News, November 8, 2005 • Dario Alpern's Integer factorization calculator - A web app for factoring large integers
# Representation of Markov decision processes

A Markov decision process is defined by the following components:

- States: A finite set of states $S = \{s_1, s_2, ..., s_N\}$ representing all possible states in the system.
- Actions: A finite set of actions $A = \{a_1, a_2, ..., a_M\}$ representing all possible actions that can be taken at each state.
- Transition model: A transition probability matrix $P_{ss'}^{a}$ that describes the probability of transitioning from state $s$ to state $s'$ after taking action $a$.
- Reward function: A reward function $R_s^a$ that assigns a numerical value to each state-action pair.

Consider a simple MDP with three states $S = \{s_1, s_2, s_3\}$ and two actions $A = \{a_1, a_2\}$. The transition probability matrix $P$ and the reward function $R$ are defined as follows:

$$
P = \begin{bmatrix} 0.8 & 0.2 \\ 0.5 & 0.5 \\ 0.1 & 0.9 \end{bmatrix}
$$

$$
R = \begin{bmatrix} -1 & -1 \\ 0 & 10 \\ -2 & -2 \end{bmatrix}
$$

In both displays, rows are indexed by states and columns by actions; note that the full transition model $P_{ss'}^{a}$ also carries a destination-state index $s'$, which this compact display suppresses.

## Exercise

Define a Markov decision process with four states and three actions. Create a transition probability matrix and a reward function for this MDP.

# Policy iteration algorithm for solving Markov decision processes

The policy iteration algorithm is a method for solving Markov decision processes (MDPs). It involves iteratively improving a policy until it is optimal. The policy iteration algorithm consists of two steps: policy evaluation and policy improvement.

- Policy evaluation: In this step, the value function of the current policy is computed using dynamic programming. The value function $V(s)$ is defined as the expected cumulative reward starting from state $s$ and following the current policy.
- Policy improvement: In this step, the policy is updated to be greedy with respect to the value function. This means that for each state, the action that maximizes the value function is chosen as the new action for that state.
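The two steps above can be sketched directly. As an assumption for concreteness (the text's compact matrix display omits the destination-state index), the transition model is stored as a `(S, A, S)` array `P[s, a, s']`:

```python
import numpy as np

def evaluate_policy(P, R, policy, gamma, tol=1e-9):
    # Policy evaluation: iterate the Bellman expectation backup
    # V(s) = R[s, pi(s)] + gamma * sum_s' P[s, pi(s), s'] * V(s')
    S = R.shape[0]
    V = np.zeros(S)
    while True:
        V_new = np.array([R[s, policy[s]] + gamma * P[s, policy[s]] @ V
                          for s in range(S)])
        if np.max(np.abs(V - V_new)) < tol:
            return V_new
        V = V_new

def improve_policy(P, R, V, gamma):
    # Policy improvement: in each state, pick the action maximizing the
    # one-step lookahead value.
    return np.argmax(R + gamma * np.einsum('sat,t->sa', P, V), axis=1)
```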
The policy iteration algorithm continues until a convergence criterion is met, such as a maximum number of iterations or a change in the value function smaller than a predefined threshold. Consider the MDP defined in the previous section. Let's solve it using the policy iteration algorithm. First, we initialize a random policy and evaluate its value function using dynamic programming. Then, we update the policy to be greedy with respect to the value function and repeat the process until convergence. ## Exercise Solve the MDP defined in the previous exercise using the policy iteration algorithm. # Value iteration algorithm for solving Markov decision processes The value iteration algorithm is another method for solving Markov decision processes (MDPs). It is an alternative to the policy iteration algorithm and can be more efficient in certain cases. The value iteration algorithm consists of two steps: value evaluation and policy extraction. - Value evaluation: In this step, the value function of all states is computed using dynamic programming. The value function is updated iteratively using the Bellman equation: $$ V(s) = \max_{a \in A} \sum_{s' \in S} P_{ss'}^{a} [R_s^a + \gamma V(s')] $$ where $\gamma$ is the discount factor. - Policy extraction: Once the value function has converged, the policy is extracted by choosing the action that maximizes the value function for each state. Consider the MDP defined in the previous section. Let's solve it using the value iteration algorithm. First, we initialize the value function to be zero for all states. Then, we update the value function iteratively using the Bellman equation and repeat the process until convergence. Finally, we extract the optimal policy. ## Exercise Solve the MDP defined in the previous exercise using the value iteration algorithm. # Implementing dynamic programming solutions in Python To implement dynamic programming in Python, we can use the following steps: 1. 
Initialize the value function and policy.
2. Iterate over the states and actions, updating the value function and policy using the Bellman equation and policy extraction, respectively.
3. Repeat the process until convergence or a maximum number of iterations is reached.

Here is a simple Python implementation of the policy iteration algorithm for solving MDPs. Since a transition model needs a destination-state index, the transition model is taken here as a `(S, A, S)` array `P[s, a, s']`:

```python
import numpy as np

def policy_iteration(P, R, gamma, tol=1e-9):
    # P: (S, A, S) transition tensor, R: (S, A) reward matrix.
    S, A = R.shape
    policy = np.zeros(S, dtype=int)
    while True:
        # Policy evaluation: iterate the Bellman expectation backup
        # V(s) = R[s, pi(s)] + gamma * sum_s' P[s, pi(s), s'] * V(s').
        V = np.zeros(S)
        while True:
            V_new = np.array([R[s, policy[s]] + gamma * P[s, policy[s]] @ V
                              for s in range(S)])
            if np.max(np.abs(V - V_new)) < tol:
                V = V_new
                break
            V = V_new
        # Policy improvement: act greedily with respect to V.
        new_policy = np.argmax(R + gamma * np.einsum('sat,t->sa', P, V), axis=1)
        if np.array_equal(new_policy, policy):
            return policy, V
        policy = new_policy
```

## Exercise

Implement the value iteration algorithm for solving MDPs in Python.

# Solving Markov decision processes with Python code examples

Consider the MDP defined in the previous section. Here is a Python implementation of the policy iteration algorithm for solving this MDP. The 3×2 matrix given earlier omits the destination state, so an illustrative `(3, 2, 3)` transition tensor is used with the same reward matrix:

```python
# Each P[s, a] below is a probability distribution over the next state.
P = np.array([[[0.8, 0.2, 0.0], [0.2, 0.8, 0.0]],
              [[0.5, 0.0, 0.5], [0.0, 0.5, 0.5]],
              [[0.1, 0.0, 0.9], [0.0, 0.9, 0.1]]])
R = np.array([[-1, -1], [0, 10], [-2, -2]])
gamma = 0.9

policy, V = policy_iteration(P, R, gamma)
print("Policy:", policy)
print("Value function:", V)
```

## Exercise

Solve the MDP defined in the previous exercise using the value iteration algorithm in Python.

# Analyzing the convergence and optimality of dynamic programming algorithms

- Convergence: Dynamic programming algorithms, such as the policy iteration and value iteration algorithms, are guaranteed to converge to an optimal solution under certain conditions. These conditions typically include the existence of a unique optimal solution and the convergence of the value function to this solution.
- Optimality: The optimal policy is a policy that maximizes the expected cumulative reward for all possible state-action sequences. 
Dynamic programming algorithms, such as the policy iteration and value iteration algorithms, are designed to find the optimal policy. Consider the MDP defined in the previous section. Let's analyze the convergence and optimality of the policy iteration and value iteration algorithms for this MDP. ## Exercise Analyze the convergence and optimality of the dynamic programming algorithms for the MDP defined in the previous exercise. # Extensions and variations of dynamic programming for Markov decision processes - Model-free methods: These methods do not require a full model of the environment. Instead, they rely on trial and error to learn the optimal policy. Examples include Monte Carlo methods and temporal difference learning. - Partial observation: In some cases, the agent has incomplete information about the environment. Dynamic programming can be extended to handle partial observation by using belief states and belief updates. - Infinite horizon: In some problems, the agent's goal is to maximize the expected cumulative reward over an infinite horizon. Dynamic programming can be extended to handle infinite horizons by using discounted infinite horizon value functions. Consider the MDP defined in the previous section. Let's discuss extensions and variations of dynamic programming for this MDP. ## Exercise Discuss extensions and variations of dynamic programming for the MDP defined in the previous exercise. # Applications of dynamic programming in Markov decision processes - Robotics: Dynamic programming can be used to plan and control the motion of robots in complex environments. - Finance: Dynamic programming can be used to solve optimal control problems in finance, such as portfolio optimization and option pricing. - Reinforcement learning: Dynamic programming is the foundation for reinforcement learning, a machine learning approach to learning optimal decision-making policies. Consider the MDP defined in the previous section. 
Let's discuss applications of dynamic programming in this MDP. ## Exercise Discuss applications of dynamic programming for the MDP defined in the previous exercise. # Evaluating the performance of dynamic programming algorithms - Computational complexity: The time and space complexity of dynamic programming algorithms can be analyzed to determine their efficiency for solving MDPs. - Convergence rate: The rate of convergence of dynamic programming algorithms can be analyzed to determine their efficiency for solving MDPs. - Optimality: The optimal policy can be used to evaluate the performance of dynamic programming algorithms for solving MDPs. Consider the MDP defined in the previous section. Let's evaluate the performance of the policy iteration and value iteration algorithms for this MDP. ## Exercise Evaluate the performance of the dynamic programming algorithms for the MDP defined in the previous exercise. # Comparing policy iteration and value iteration algorithms - Convergence: The policy iteration algorithm converges to a fixed point of the value function, while the value iteration algorithm converges to the optimal value function. - Optimality: The policy iteration algorithm is guaranteed to find the optimal policy, while the value iteration algorithm is not. However, the value iteration algorithm can be used to extract the optimal policy. - Computational complexity: The policy iteration algorithm has a higher computational complexity than the value iteration algorithm, as it requires iterating over the states and actions multiple times. Consider the MDP defined in the previous section. Let's compare the policy iteration and value iteration algorithms for this MDP. ## Exercise Compare the performance of the policy iteration and value iteration algorithms for the MDP defined in the previous exercise.
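For the value-iteration exercises above, here is a minimal sketch. The shapes are this sketch's assumption, not fixed by the text: `P` is a `(S, A, S)` transition tensor and `R` an `(S, A)` reward matrix:

```python
import numpy as np

def value_iteration(P, R, gamma, tol=1e-9):
    # Iterate the Bellman optimality backup
    # V(s) = max_a [ R[s, a] + gamma * sum_s' P[s, a, s'] * V(s') ],
    # then extract the greedy policy once V has converged.
    S, A = R.shape
    V = np.zeros(S)
    while True:
        Q = R + gamma * np.einsum('sat,t->sa', P, V)
        V_new = Q.max(axis=1)
        if np.max(np.abs(V - V_new)) < tol:
            return Q.argmax(axis=1), V_new
        V = V_new

# Tiny deterministic MDP: from state 0, action 1 moves to absorbing state 1
# with reward 1; every other transition gives reward 0.
P = np.zeros((2, 2, 2))
P[0, 0, 0] = P[0, 1, 1] = P[1, 0, 1] = P[1, 1, 1] = 1.0
R = np.array([[0.0, 1.0], [0.0, 0.0]])
policy, V = value_iteration(P, R, gamma=0.5)
```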
Isothermal coordinates In mathematics, specifically in differential geometry, isothermal coordinates on a Riemannian manifold are local coordinates where the metric is conformal to the Euclidean metric. This means that in isothermal coordinates, the Riemannian metric locally has the form $g=\varphi (dx_{1}^{2}+\cdots +dx_{n}^{2}),$ where $\varphi $ is a positive smooth function. (If the Riemannian manifold is oriented, some authors insist that a coordinate system must agree with that orientation to be isothermal.) Isothermal coordinates on surfaces were first introduced by Gauss. Korn and Lichtenstein proved that isothermal coordinates exist around any point on a two dimensional Riemannian manifold. By contrast, most higher-dimensional manifolds do not admit isothermal coordinates anywhere; that is, they are not usually locally conformally flat. In dimension 3, a Riemannian metric is locally conformally flat if and only if its Cotton tensor vanishes. In dimensions > 3, a metric is locally conformally flat if and only if its Weyl tensor vanishes. Isothermal coordinates on surfaces In 1822, Carl Friedrich Gauss proved the existence of isothermal coordinates on an arbitrary surface with a real-analytic Riemannian metric, following earlier results of Joseph Lagrange in the special case of surfaces of revolution.[1] The construction used by Gauss made use of the Cauchy–Kowalevski theorem, so that his method is fundamentally restricted to the real-analytic context.[2] Following innovations in the theory of two-dimensional partial differential equations by Arthur Korn, Leon Lichtenstein found in 1916 the general existence of isothermal coordinates for Riemannian metrics of lower regularity, including smooth metrics and even Hölder continuous metrics.[3] Given a Riemannian metric on a two-dimensional manifold, the transition function between isothermal coordinate charts, which is a map between open subsets of R2, is necessarily angle-preserving. 
The angle-preserving property together with orientation-preservation is one characterization (among many) of holomorphic functions, and so an oriented coordinate atlas consisting of isothermal coordinate charts may be viewed as a holomorphic coordinate atlas. This demonstrates that a Riemannian metric and an orientation on a two-dimensional manifold combine to induce the structure of a Riemann surface (i.e. a one-dimensional complex manifold). Furthermore, given an oriented surface, two Riemannian metrics induce the same holomorphic atlas if and only if they are conformal to one another. For this reason, the study of Riemann surfaces is identical to the study of conformal classes of Riemannian metrics on oriented surfaces. By the 1950s, expositions of the ideas of Korn and Lichtenstein were put into the language of complex derivatives and the Beltrami equation by Lipman Bers and Shiing-shen Chern, among others.[4] In this context, it is natural to investigate the existence of generalized solutions, which satisfy the relevant partial differential equations but are no longer interpretable as coordinate charts in the usual way. This was initiated by Charles Morrey in his seminal 1938 article on the theory of elliptic partial differential equations on two-dimensional domains, leading later to the measurable Riemann mapping theorem of Lars Ahlfors and Bers.[5] Beltrami equation The existence of isothermal coordinates can be proved[6] by applying known existence theorems for the Beltrami equation, which rely on Lp estimates for singular integral operators of Calderón and Zygmund.[7][8] A simpler approach to the Beltrami equation has been given more recently by Adrien Douady.[9] If the Riemannian metric is given locally as $ds^{2}=E\,dx^{2}+2F\,dx\,dy+G\,dy^{2},$ then in the complex coordinate $z=x+iy$, it takes the form $ds^{2}=\lambda |\,dz+\mu \,d{\overline {z}}|^{2},$ where $\lambda $ and $\mu $ are smooth with $\lambda >0$ and $\left\vert \mu \right\vert <1$. 
In fact $\lambda ={1 \over 4}(E+G+2{\sqrt {EG-F^{2}}}),\,\,\,\mu ={(E-G+2iF) \over 4\lambda }.$ In isothermal coordinates $(u,v)$ the metric should take the form $ds^{2}=e^{\rho }(du^{2}+dv^{2})$ with ρ smooth. The complex coordinate $w=u+iv$ satisfies $e^{\rho }\,|dw|^{2}=e^{\rho }|w_{z}|^{2}|\,dz+{w_{\overline {z}} \over w_{z}}\,d{\overline {z}}|^{2},$ so that the coordinates (u, v) will be isothermal if the Beltrami equation ${\partial w \over \partial {\overline {z}}}=\mu {\partial w \over \partial z}$ has a diffeomorphic solution. Such a solution has been proved to exist in any neighbourhood where $\lVert \mu \rVert _{\infty }<1$. Existence via local solvability for elliptic partial differential equations The existence of isothermal coordinates on a smooth two-dimensional Riemannian manifold is a corollary of the standard local solvability result in the analysis of elliptic partial differential equations. In the present context, the relevant elliptic equation is the condition for a function to be harmonic relative to the Riemannian metric. The local solvability then states that any point p has a neighborhood U on which there is a harmonic function u with nowhere-vanishing derivative.[10] Isothermal coordinates are constructed from such a function in the following way.[11] Harmonicity of u is identical to the closedness of the differential 1-form $\star du,$ defined using the Hodge star operator $\star $ associated to the Riemannian metric. The Poincaré lemma thus implies the existence of a function v on U with $dv=\star du.$ By definition of the Hodge star, $du$ and $dv$ are orthogonal to one another and hence linearly independent, and it then follows from the inverse function theorem that u and v form a coordinate system on some neighborhood of p. 
This coordinate system is automatically isothermal, since the orthogonality of $du$ and $dv$ implies the diagonality of the metric, and the norm-preserving property of the Hodge star implies the equality of the two diagonal components.

Gaussian curvature

In the isothermal coordinates $(u,v)$, the Gaussian curvature takes the simpler form
$K=-{\frac {1}{2}}e^{-\rho }\left({\frac {\partial ^{2}\rho }{\partial u^{2}}}+{\frac {\partial ^{2}\rho }{\partial v^{2}}}\right).$

See also

• Conformal map
• Liouville's equation
• Quasiconformal map

Notes

1. Gauss 1825; Lagrange 1779.
2. Spivak 1999, Theorem 9.18.
3. Korn 1914; Lichtenstein 1916; Spivak 1999, Addendum 1 to Chapter 9; Taylor 2000, Proposition 3.9.3.
4. Bers 1958; Chern 1955; Ahlfors 2006, p. 90.
5. Morrey 1938.
6. Imayoshi & Taniguchi 1992, pp. 20–21.
7. Ahlfors 2006, pp. 85–115.
8. Imayoshi & Taniguchi 1992, pp. 92–104.
9. Douady & Buff 2000.
10. Taylor 2011, pp. 440–441; Bers, John & Schechter 1979, pp. 228–230.
11. DeTurck & Kazdan 1981.

References

• Ahlfors, Lars V. (1952). "Conformality with respect to Riemannian metrics". Ann. Acad. Sci. Fenn. Ser. A I. 206: 1–22.
• Ahlfors, Lars V. (2006). Lectures on quasiconformal mappings. University Lecture Series. Vol. 38. With supplemental chapters by C. J. Earle, I. Kra, M. Shishikura and J. H. Hubbard (Second edition of 1966 original ed.). Providence, RI: American Mathematical Society. doi:10.1090/ulect/038. ISBN 0-8218-3644-7. MR 2241787.
• Bers, Lipman (1958). Riemann surfaces. Notes taken by Rodlitz, Esther and Pollack, Richard. Courant Institute of Mathematical Sciences at New York University. pp. 15–35.
• Bers, Lipman; John, Fritz; Schechter, Martin (1979). Partial differential equations. Lectures in Applied Mathematics. Vol. 3A. American Mathematical Society. ISBN 0-8218-0049-3.
• Chern, Shiing-shen (1955). "An elementary proof of the existence of isothermal parameters on a surface". Proceedings of the American Mathematical Society. 6 (5): 771–782. doi:10.2307/2032933. JSTOR 2032933.
• DeTurck, Dennis M.; Kazdan, Jerry L. (1981). "Some regularity theorems in Riemannian geometry". Annales Scientifiques de l'École Normale Supérieure. Série 4. 14 (3): 249–260. doi:10.24033/asens.1405. ISSN 0012-9593. MR 0644518.
• do Carmo, Manfredo P. (2016). Differential geometry of curves & surfaces (Revised and updated second edition of 1976 original ed.). Mineola, NY: Dover Publications, Inc. ISBN 978-0-486-80699-0. MR 3837152. Zbl 1352.53002.
• Douady, Adrien; Buff, X. (2000). Le théorème d'intégrabilité des structures presque complexes [Integrability theorem for almost complex structures]. London Mathematical Society Lecture Note Series. Vol. 274. Cambridge University Press. pp. 307–324.
• Gauss, C. F. (1825). "Allgemeine Auflösung der Aufgabe die Theile einer gegebenen Flache auf einer andern gegebnen Fläche so abzubilden, dass die Abbildung dem Abgebildeten in den kleinsten Theilen ähnlich wird" [General solution of the problem of mapping the parts of a given surface on another given surface in such a way that the mapping resembles what is depicted in the smallest parts]. In Schumacher, H. C. (ed.). Astronomische Abhandlungen, Drittes Heft. Altona: Hammerich und Heineking. pp. 1–30. Reprinted in: Gauss, Carl Friedrich (2011) [1873]. Werke: Volume 4. Cambridge Library Collection (in German). New York: Cambridge University Press. doi:10.1017/CBO9781139058254.005. ISBN 978-1-108-03226-1. Translated to English in: Gauss (1929). "On conformal representation". In Smith, David Eugene (ed.). A source book in mathematics. Source Books in the History of the Sciences. Translated by Evans, Herbert P. New York: McGraw-Hill Book Co. pp. 463–475. JFM 55.0583.01.
• Imayoshi, Y.; Taniguchi, M. (1992). An introduction to Teichmüller spaces. Tokyo: Springer-Verlag. doi:10.1007/978-4-431-68174-8. ISBN 0-387-70088-9. MR 1215481. Zbl 0754.30001.
• Korn, A. (1914). "Zwei Anwendungen der Methode der sukzessiven Annäherungen". In Carathéodory, C.; Hessenberg, G.; Landau, E.; Lichtenstein, L. (eds.). Mathematische Abhandlungen Hermann Amandus Schwarz. Berlin, Heidelberg: Springer. pp. 215–229. doi:10.1007/978-3-642-50735-9_16. ISBN 978-3-642-50426-6.
• Lagrange, J. (1779). "Sur la construction des cartes géographiques". Nouveaux mémoires de l'Académie royale des sciences et belles-lettres de Berlin: 161–210. Reprinted in: Serret, J.-A., ed. (1867). Œuvres de Lagrange: tome 4 (in French). Paris: Gauthier-Villars.
• Lichtenstein, Léon (1916). "Zur Theorie der konformen Abbildung. Konforme Abbildung nichtanalytischer, singularitätenfreier Flächenstücke auf ebene Gebiete". Bulletin International de l'Académie des Sciences de Cracovie: Classe des Sciences Mathématiques et Naturelles. Série A: Sciences Mathématiques: 192–217. JFM 46.0547.01.
• Morrey, Charles B. (1938). "On the solutions of quasi-linear elliptic partial differential equations". Transactions of the American Mathematical Society. 43 (1): 126–166. doi:10.2307/1989904. JSTOR 1989904.
• Spivak, Michael (1999). A comprehensive introduction to differential geometry. Volume four (Third edition of 1975 original ed.). Publish or Perish, Inc. ISBN 0-914098-73-X. MR 0532833. Zbl 1213.53001.
• Taylor, Michael E. (2000). Tools for PDE. Pseudodifferential operators, paradifferential operators, and layer potentials. Mathematical Surveys and Monographs. Vol. 81. Providence, RI: American Mathematical Society. doi:10.1090/surv/081. ISBN 0-8218-2633-6. MR 1766415. Zbl 0963.35211.
• Taylor, Michael E. (2011). Partial differential equations I. Basic theory. Applied Mathematical Sciences. Vol. 115 (Second edition of 1996 original ed.). New York: Springer. doi:10.1007/978-1-4419-7055-8. ISBN 978-1-4419-7054-1. MR 2744150. Zbl 1206.35002.

External links

• "Isothermal coordinates", Encyclopedia of Mathematics, EMS Press, 2001 [1994]
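The curvature formula quoted above, $K=-\tfrac{1}{2}e^{-\rho}(\rho_{uu}+\rho_{vv})$, can be sanity-checked numerically. The sketch below is illustrative only: the two test metrics (the Poincaré upper half-plane with $e^{\rho}=1/v^2$ and the round sphere in stereographic coordinates with $e^{\rho}=4/(1+u^2+v^2)^2$) are standard examples chosen here, not taken from the article, and the Laplacian of $\rho$ is approximated by central differences.

```python
import math

def gaussian_curvature(rho, u, v, h=1e-4):
    """K = -(1/2) * exp(-rho) * (rho_uu + rho_vv), with the second
    derivatives of rho approximated by central differences."""
    rho_uu = (rho(u + h, v) - 2.0 * rho(u, v) + rho(u - h, v)) / h**2
    rho_vv = (rho(u, v + h) - 2.0 * rho(u, v) + rho(u, v - h)) / h**2
    return -0.5 * math.exp(-rho(u, v)) * (rho_uu + rho_vv)

# Poincare upper half-plane: ds^2 = (du^2 + dv^2)/v^2, i.e. rho = -2 log v
def rho_hyperbolic(u, v):
    return -2.0 * math.log(v)

# Round sphere via stereographic projection: e^rho = 4/(1 + u^2 + v^2)^2
def rho_sphere(u, v):
    return math.log(4.0) - 2.0 * math.log(1.0 + u * u + v * v)

print(gaussian_curvature(rho_hyperbolic, 0.3, 0.7))  # close to -1
print(gaussian_curvature(rho_sphere, 0.5, -0.2))     # close to +1
```

Both metrics are isothermal by construction, so the formula recovers the constant curvatures $-1$ and $+1$ up to discretization error.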
\begin{document}
\title{$F$-rationality of the ring of modular invariants}
\footnote[0] {2010 \textit{Mathematics Subject Classification}. Primary 13A50, 13A35. Key Words and Phrases. $F$-rational, $F$-regular, dual $F$-signature, Frobenius limit. }
\begin{abstract}
Using the description of the Frobenius limit of modules over the ring of invariants under an action of a finite group on a polynomial ring over a field of characteristic $p>0$ developed by Symonds and the author, we give a characterization of the ring of invariants with a positive dual $F$-signature. Combining this result and Kemper's result on depths of the ring of invariants under an action of a permutation group, we give an example of an $F$-rational, but non-$F$-regular ring of invariants under the action of a finite group.
\end{abstract}
\section{Introduction}
Let $k$ be an algebraically closed field of characteristic $p>0$. Let $V=k^d$, and $G$ a finite subgroup of $\text{\sl{GL}}(V)$ without pseudo-reflections. Let $B=\Sym V$, the symmetric algebra of $V$, and $A=B^G$. If the order $\abs{G}$ of $G$ is not divisible by $p$, then $A$ is a direct summand subring of $B$, and is strongly $F$-regular. Suppose now that $p$ divides $\abs{G}$. Broer \cite{Broer} proved that $A$ is not a direct summand subring of $B$; hence $A$ is not weakly $F$-regular (as $A$ is not a splinter). In this paper, we study when $A$ is $F$-rational. In \cite{Glassbrenner}, Glassbrenner showed that the invariant subring $k[x_1,\ldots,x_n]^{A_n}$, where $A_n$ is the alternating group which acts on the polynomial ring by permutation of variables, is Gorenstein $F$-pure but not $F$-rational when $p=\charac(k)\geq 3$ and $n\equiv 0,1 \mod p$. Here we give an example of $A$ which is $F$-rational (but not $F$-regular). Sannai \cite{Sannai} defined the dual $F$-signature $s(M)$ of a finite module $M$ over an $F$-finite local ring $R$ of characteristic $p$.
He proved that $R$ is $F$-rational if and only if $R$ is Cohen--Macaulay and the dual $F$-signature $s(\omega_R)$ of the canonical module $\omega_R$ of $R$ is positive. Utilizing the description of the Frobenius limit of modules over $\hat A$ (the completion of $A$) by Symonds and the author, we give a characterization of $V$ such that $s(\omega_{\hat A})>0$; see Theorem~\ref{main.thm}. The characterization is purely representation theoretic, in the sense that it depends only on the structure of $B$ as a $G$-module, rather than as a $G$-algebra. Using the characterization and Kemper's result on the depth of the ring of invariants under the action of certain groups of permutations \cite[(3.3)]{Kemper}, we give an example of $F$-rational $A$ for $p\geq 5$. We also get an example of $A$ such that the dual $F$-signature $s(\omega_{\hat A})$ of the canonical module of the completion $\hat A$ is positive, but $A$ (or equivalently, $\hat A$) is not Cohen--Macaulay. See Theorem~\ref{main-example.thm}.
In section~\ref{asn.sec}, we introduce the {\em asymptotic surjective number} $\asn_N(M)$ for two finitely generated modules $M$ and $N$ ($N\neq 0$) over a Noetherian ring $R$; see Lemma~\ref{asn.lem}. In section~\ref{dual-F-signature.sec}, using the definition and some basic results developed in section~\ref{asn.sec}, we prove the formula $s(M)=\asn_M(\FL([M]))$, where $\FL$ denotes the Frobenius limit defined in \cite{SH}. Thus $s(M)$ depends only on $\FL([M])$. Using this, we give a characterization of the modules $M$ with $s(M)>0$ in terms of $\FL([M])$ (Corollary~\ref{s-positive.cor}). Using this result and the description of the Frobenius limits of certain modules over $\hat A$ proved in \cite{SH}, we give a characterization of $V$ such that $s(\omega_{\hat A})>0$ in section~\ref{main.sec}. In section~\ref{main-example.sec}, we give the examples.
Acknowledgments.
The author is grateful to Professor Anurag Singh and Professor Kei-ichi Watanabe for valuable discussions.
\section{Asymptotic surjective number}\label{asn.sec}
\paragraph This paper depends heavily on \cite{SH}.
\paragraph Let $R$ be a Noetherian commutative ring. Let $\md R$ denote the category of finite $R$-modules.
\paragraph For $M,N\in\md R$, we set
\begin{multline*}
\surj_N^R(M)= \surj_N(M):= \\
\sup\{n\in\Bbb Z_{\geq 0}\mid \text{There is a surjective $R$-linear map $M\rightarrow N^{\oplus n}$}\},
\end{multline*}
and call $\surj_N(M)$ the {\em surjective number} of $M$ with respect to $N$. If $N=0$, this is understood to be $\infty$.
\begin{lemma}\label{surj.lem}
Let $M,M',N\in\md R$. Then we have the following.
\begin{enumerate}
\item[\bf 1] If $R'$ is any Noetherian $R$-algebra, then
\[
\surj_N^R(M)\leq \surj_{R'\otimes_R N}^{R'}(R'\otimes_R M).
\]
\item[\bf 2] If $(R,\mathfrak{m})$ is local and $N\neq 0$, then $\surj^R_N(M)\leq \mu_R(M)/\mu_R(N)$, where $\mu_R=\ell_R(R/\mathfrak{m}\otimes_R ?)$ denotes the number of generators.
\item[\bf 3] If $N\neq 0$, then $\surj_N(M)<\infty$, and is a non-negative integer.
\item[\bf 4] If $N\neq 0$, then $\surj_N(M)+\surj_N(M')\leq \surj_N(M\oplus M')$.
\item[\bf 5] If $N\neq 0$ and $r\geq 0$, then $r\surj_N(M)\leq \surj_N(M^{\oplus r})$.
\end{enumerate}
\end{lemma}
\begin{proof}
{\bf 1} If there is a surjective $R$-linear map $M\rightarrow N^{\oplus n}$, then there is a surjective $R'$-linear map $R'\otimes_R M \rightarrow (R'\otimes_R N)^{\oplus n}$, and hence $n\leq \surj_{R'\otimes_R N}^{R'}(R'\otimes_R M)$.
{\bf 2} By {\bf 1}, $\surj^R_N(M)\leq \surj^{R/\mathfrak{m}}_{N/\mathfrak{m} N}(M/\mathfrak{m} M)\leq \mu_R(M)/\mu_R(N)$ by dimension counting.
{\bf 3} Take $\mathfrak{m}\in \supp_R N$. Then
\[
\surj^R_N(M)\leq \surj^{R_\mathfrak{m}}_{N_\mathfrak{m}}(M_\mathfrak{m})\leq \mu_{R_\mathfrak{m}}(M_\mathfrak{m})/\mu_{R_\mathfrak{m}}(N_\mathfrak{m}) <\infty.
\]
{\bf 4} Let $n=\surj_N(M)$ and $n'=\surj_N(M')$.
Then there are surjective $R$-linear maps $M\rightarrow N^{\oplus n}$ and $M'\rightarrow N^{\oplus n'}$. Summing them, we get a surjective map $M\oplus M'\rightarrow N^{\oplus(n+n')}$.
{\bf 5} follows from {\bf 4}.
\end{proof}
\paragraph Let $N,M\in\md R$. Assume that $N$ is nonzero. We define
\[
\nsurj_N(M;r):=\frac{1}{r}\surj_N(M^{\oplus r})
\]
for $r\geq 1$.
\begin{lemma}\label{nsurj.lem}
Let $r\geq 1$, and $M,M',N\in\md R$. Assume that $N\neq 0$. Then
\begin{enumerate}
\item[\bf 1] $\nsurj_N(M;1)=\surj_N(M)$.
\item[\bf 2] $\nsurj_N(M;kr)\geq \nsurj_N(M;r)$ for $k\geq 1$.
\item[\bf 3] $\nsurj_N(M;r)\geq \surj_N(M)\geq 0$.
\item[\bf 4] $\nsurj_N(M;r)+\nsurj_N(M';r)\leq \nsurj_N(M\oplus M';r)$.
\item[\bf 5] If $R\rightarrow R'$ is a homomorphism of Noetherian rings, then $\nsurj_N(M;r)\leq \nsurj_{R'\otimes_R N}(R'\otimes_R M;r)$.
\item[\bf 6] If $(R,\mathfrak{m})$ is local, $\nsurj_N(M;r)\leq \mu_R(M)/\mu_R(N)$. In general, $\nsurj_N(M;r)$ is bounded.
\end{enumerate}
\end{lemma}
\begin{proof}
{\bf 1} is by definition.
{\bf 2}. $kr\nsurj_N(M;kr)=\surj_N(M^{\oplus kr})\geq k\surj_N(M^{\oplus r})$ by Lemma~\ref{surj.lem}, {\bf 5}. Dividing by $kr$, we get the desired inequality.
{\bf 3}. This is immediate by {\bf 1} and {\bf 2}.
{\bf 4} follows from Lemma~\ref{surj.lem}, {\bf 4}.
{\bf 5} follows from Lemma~\ref{surj.lem}, {\bf 1}.
{\bf 6} The first assertion is by Lemma~\ref{surj.lem}, {\bf 2}. The second assertion follows from the first assertion and {\bf 5} applied to $R\rightarrow R'=R_\mathfrak{m}$, where $\mathfrak{m}$ is any element of $\supp_R N$.
\end{proof}
\begin{lemma}\label{asn.lem}
Let $M,N\in\md R$. Assume that $N\neq 0$. Then the limit
\[
\lim_{r\rightarrow\infty}\nsurj_N(M;r) = \lim_{r\rightarrow\infty}\frac{1}{r}\surj_N(M^{\oplus r})
\]
exists.
\end{lemma}
We call the limit the {\em asymptotic surjective number of $M$ with respect to $N$}, and denote it by $\asn_N(M)$.
\begin{proof}
As $\nsurj_N(M;r)$ is bounded, $S=\limsup_{r\rightarrow\infty}\nsurj_N(M;r)$ and $I=\liminf_{r\rightarrow\infty}\nsurj_N(M;r)$ exist. Assume for contradiction that the limit does not exist. Then $S>I$. Set $\varepsilon=(S-I)/2>0$. There exists some $r_0\geq 1$ such that $\nsurj_N(M;r_0)>S-\varepsilon/2$. Take $n_0\geq 1$ sufficiently large so that $\nsurj_N(M;r_0)/n_0<\varepsilon/2$. Let $r\geq r_0n_0$, and set $n:=\floor{r/r_0}$. Note that $nr_0\leq r<(n+1)r_0$ and $n\geq n_0$. Then
\begin{multline*}
\nsurj_N(M;r)\geq \frac{1}{(n+1)r_0}\surj_N(M^{\oplus nr_0}) \geq \frac{n}{(n+1)r_0}\surj_N(M^{\oplus r_0})\\
=(1-\frac{1}{n+1})\nsurj_N(M;r_0)\geq \nsurj_N(M;r_0)-\varepsilon/2 >S-\varepsilon.
\end{multline*}
Hence
\[
I\geq \inf_{r\geq r_0n_0}\nsurj_N(M;r)\geq S-\varepsilon>S-2\varepsilon=I,
\]
and this is a contradiction.
\end{proof}
\begin{lemma}
Let $M,M',N\in\md R$, and $N\neq 0$. Then
\begin{enumerate}
\item[\bf 1] $\asn_N(M^{\oplus r})=r\asn_N(M)$.
\item[\bf 2] $0\leq \surj_N(M)\leq\nsurj_N(M;r)\leq \asn_N(M)$ for any $r\geq 1$.
\item[\bf 3] $\asn_N(M)+\asn_N(M')\leq \asn_N(M\oplus M')$.
\end{enumerate}
\end{lemma}
\begin{proof}
{\bf 1}.
\[
r^{-1}\asn_N(M^{\oplus r})=\lim_{r'\rightarrow \infty}\frac{1}{rr'}\surj_N(M^{\oplus rr'}) =\asn_N(M).
\]
{\bf 2}. $0\leq \surj_N(M)\leq \nsurj_N(M;r)$ is Lemma~\ref{nsurj.lem}, {\bf 3}. Taking the limit, $\surj_N(M)\leq \asn_N(M)$. Hence $\surj_N(M^{\oplus r})\leq \asn_N(M^{\oplus r})=r\asn_N(M)$. Dividing by $r$, $\nsurj_N(M;r)\leq \asn_N(M)$.
{\bf 3} follows from Lemma~\ref{nsurj.lem}, {\bf 4}, by passing to the limit $r\rightarrow\infty$.
\end{proof}
\begin{lemma}\label{easy-vector-space.lem}
Let $k$ be a field, $V$ a $k$-vector space, and $n\geq 0$. Assume that $\dim_k V\leq n$. Let $\Gamma$ be a set of subspaces of $V$ such that $\sum_{U\in\Gamma}U=V$. Then there exist some $U_1,\ldots,U_{n'}\in\Gamma$ with $n'\leq n$ such that $U_1+\cdots+U_{n'}=V$.
\end{lemma}
\begin{proof}
Trivial.
\end{proof}
\begin{lemma}\label{easy-vector-space2.lem}
Let $k$ be a field, $V$ a $k$-vector space, and $\Gamma$ a set of subspaces of $V$. Let $W$ and $W'$ be subspaces of $V$ such that $W+W'=V$. Assume that $W'\subset \sum_{U\in\Gamma}U$. If $\dim_k W'\leq n$, then there exist some $U_1,\ldots,U_{n'}\in\Gamma$ with $n'\leq n$ such that $W+U_1+\cdots+U_{n'}=V$.
\end{lemma}
\begin{proof}
Apply Lemma~\ref{easy-vector-space.lem} to the vector space $V/W$.
\end{proof}
\begin{lemma}\label{surj-mu.lem}
Let $(R,\mathfrak{m})$ be a Noetherian local ring. Let $M,M',N\in\md R$ with $N\neq 0$. Then
\[
\surj_N(M')\leq \surj_N(M\oplus M')-\surj_N(M)\leq \mu_R(M').
\]
\end{lemma}
\begin{proof}
The first inequality is Lemma~\ref{surj.lem}, {\bf 4}. We prove the second inequality. Let $m=\surj_N(M\oplus M')$ and $n=\mu_R(M')$. There is a surjective map $\varphi:M\oplus M'\rightarrow N^{\oplus m}$. Let $N_i=N$ be the $i$th summand of $N^{\oplus m}$. Let $\bar ?$ denote the functor $R/\mathfrak{m}\otimes_R ?$. Set $V=\bar N^{\oplus m}$, $W=\bar\varphi(\bar M)$, and $W'=\bar\varphi(\bar M')$. Then by Lemma~\ref{easy-vector-space2.lem}, there exists some index set $I\subset \{1,2,\ldots,m\}$ such that $\#I\leq n$ and $W+\sum_{i\in I}\bar N_i=V$. By Nakayama's lemma, $\varphi(M)+\sum_{i\in I}N_i=N^{\oplus m}$. This shows that
\[
M\hookrightarrow M\oplus M'\xrightarrow{\varphi} N^{\oplus m}\rightarrow N^{\oplus m}/\sum_{i\in I}N_i\cong N^{\oplus (m-\#I)}
\]
is surjective. Hence $\surj_N(M)\geq m-\#I \geq m-n$, and the result follows.
\end{proof}
\paragraph Let $(R,\mathfrak{m})$ be a Henselian local ring. Let $\Cal C:=\md R$. As in \cite{SH}, we define
\[
[\Cal C]:=(\bigoplus_{M\in\Cal C}{\mathbb Z}\cdot M) /(M-M_1-M_2\mid M\cong M_1\oplus M_2),
\]
and $[\Cal C]_{\mathbb R}:={\mathbb R}\otimes_{\mathbb Z}[\Cal C]$. In \cite{SH}, $[\Cal C]_{\mathbb R}$ is also written as $\Theta^{\wedge}(R)$ or $\Theta(R)$ (considering that $R$ is trivially graded). In this paper, we write it as $\Theta(R)$.
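As a toy illustration of the definitions above (my own example, for the degenerate case where $R=k$ is a field rather than the local rings considered in this paper): every finite module is then determined by its dimension, a surjection $M\rightarrow N^{\oplus n}$ exists exactly when $\dim M\geq n\dim N$, and so $\surj_N(M)=\floor{\dim M/\dim N}$, while $\asn_N(M)=\dim M/\dim N$. The sketch below shows how $\nsurj_N(M;r)$ can strictly exceed $\surj_N(M)$ and converges to the limit of Lemma~\ref{asn.lem}.

```python
from fractions import Fraction

# Toy model over a field k: a finite module is just its dimension, and a
# surjection M -> N^n exists iff dim M >= n * dim N.

def surj(dim_M, dim_N):
    # surj_N(M): the largest n admitting a surjection M -> N^n
    return dim_M // dim_N

def nsurj(dim_M, dim_N, r):
    # nsurj_N(M; r) = surj_N(M^r) / r, as an exact rational number
    return Fraction(surj(r * dim_M, dim_N), r)

def asn(dim_M, dim_N):
    # the limit of nsurj_N(M; r) as r -> infinity: here dim M / dim N
    return Fraction(dim_M, dim_N)

# M = k^3, N = k^2: surj_N(M) = 1, but nsurj_N(M; 2) = 3/2 = asn_N(M).
print(surj(3, 2), nsurj(3, 2, 2), asn(3, 2))
```

Already in this trivial case the surjective number fails to be additive ($\surj_N(M^{\oplus 2})=3>2\surj_N(M)$), which is precisely the superadditivity that forces the limit to exist.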
For $M\in\Cal C$, we denote by $[M]$ the class of $M$ in $\Theta(R)$. For an isomorphism class $N$ of modules, $[N]$ is a well-defined element of $\Theta(R)$. Let $\Ind(R)$ denote the set of isomorphism classes of indecomposable modules in $\Cal C$. The set $[\Ind(R)]:=\{[M]\mid M\in\Ind(R)\}$ is an ${\mathbb R}$-basis of $\Theta(R)=[\Cal C]_{\mathbb R}$. So $\alpha\in\Theta(R)$ can be written uniquely as $\alpha=\sum_{M\in\Ind(R)}c_M[M]$ with $c_M\in{\mathbb R}$. We say that $\alpha\geq 0$ if $c_M\geq 0$ for any $M\in\Ind(R)$. For $\alpha,\beta\in\Theta(R)$, we define $\alpha\geq \beta$ if $\alpha-\beta\geq 0$. This gives an ordering on $\Theta(R)$.
\paragraph For $\alpha=\sum_{M\in\Ind(R)}c_M[M]\in\Theta(R)$, we define
\[
\langle \alpha\rangle:=\sum_{M\in\Ind(R)}\max(0,\floor{c_M})[M].
\]
So there exists some $M_\alpha\in\Cal C$, unique up to isomorphism, such that $\langle\alpha\rangle=[M_\alpha]$. For $N\in\md R$ with $N\neq 0$, we define $\surj_N\alpha$ to be $\surj_N M_\alpha$.
\paragraph For $\alpha=\sum_{M\in\Ind(R)}c_M[M]\in\Theta(R)$, we define $\supp \alpha=\{M\in\Ind(R)\mid c_M>0\}$. We define $Y(\alpha)=\bigoplus_{W\in\supp\alpha}W$ and $\nu(\alpha):=\mu_R(Y(\alpha))$.
\begin{lemma}\label{alpha-surj.lem}
Let $N\in\md R$, $N\neq 0$, and $\alpha,\beta \in \Theta(R)$.
\begin{enumerate}
\item[\bf 1] If $\alpha,\beta\geq 0$, then $0\leq \surj_N\alpha\leq \surj_N(\alpha+\beta)-\surj_N\beta$.
\item[\bf 2] $\abs{\surj_N\alpha-\surj_N\beta}\leq \norm{\alpha-\beta}+\nu(\inf\{\alpha,\beta\})$.
\end{enumerate}
\end{lemma}
\begin{proof}
{\bf 1}. As $\alpha,\beta\geq 0$, we have that $\langle \alpha\rangle+\langle \beta\rangle\leq\langle\alpha+\beta\rangle$. So by Lemma~\ref{surj.lem}, {\bf 4}, $\surj_N\alpha+\surj_N\beta \leq \surj_N(\langle\alpha+\beta\rangle)\leq \surj_N(\alpha+\beta)$.
{\bf 2}. Replacing $\alpha$ by $\sup\{\alpha,0\}$ and $\beta$ by $\sup\{\beta,0\}$, we may assume that $\alpha,\beta\geq 0$.
Moreover, replacing $\alpha$ by $\sup\{\alpha,\beta\}$ and $\beta$ by $\inf\{\alpha,\beta\}$, we may assume that $\alpha\geq \beta$. As we have $\langle\alpha\rangle-\langle\beta\rangle\leq \alpha-\beta+[Y(\beta)]$, by Lemma~\ref{surj-mu.lem} we have that
\begin{multline*}
\surj_N\alpha-\surj_N\beta\leq\norm{\langle\alpha\rangle-\langle\beta\rangle} \leq \norm{\alpha-\beta+[Y(\beta)]}\\
\leq\norm{\alpha-\beta}+\norm{[Y(\beta)]} =\norm{\alpha-\beta}+\nu(\beta).
\end{multline*}
This is what we wanted to prove.
\end{proof}
\begin{lemma}
The limit
\[
\lim_{t\rightarrow \infty}\frac{1}{t}\surj_N(t\alpha)
\]
exists for $N\in\md R$, $N\neq 0$ and $\alpha\in\Theta(R)$.
\end{lemma}
We denote the limit by $\asn_N(\alpha)$.
\begin{proof}
Replacing $\alpha$ by $\sup\{0,\alpha\}$, we may assume that $\alpha\geq 0$. Let $\varepsilon>0$. We can take $W\in\md R$ and an integer $n>0$ such that $\alpha-n^{-1}[W]\geq 0$ and $\norm{\alpha-n^{-1}[W]}<\varepsilon/8$. As $\asn_N W$ exists, there exists some $r_0\geq 1$ such that for any $r\geq r_0$, $\abs{\nsurj_N(W;r)-\asn_N W}<n\varepsilon/8$. Set $t_0:=\max\{r_0n,16\mu_R(W)/\varepsilon,8n\norm{\alpha}/\varepsilon\}$. Let $t>t_0$. Let $r:=\floor{t/n}$. Then $0\leq t-rn < n$ and $r\geq r_0$. We have
\begin{multline*}
\abs{t^{-1}\surj_N(t\alpha)- n^{-1}\asn_N W} \leq t^{-1}\abs{\surj_N(t\alpha)-\surj_N(W^{\oplus r})}\\
+((rn)^{-1}-t^{-1})\surj_N(W^{\oplus r}) +\abs{(rn)^{-1}\surj_N(W^{\oplus r})-n^{-1}\asn_N W} \\
< t^{-1}\norm{t\alpha-r[W]} + t^{-1}\mu_R(W) + (rt)^{-1}\mu_R(W^{\oplus r}) + \varepsilon/8 \\
\leq (n/t)\norm{\alpha}+(nr/t)\norm{\alpha-n^{-1}[W]}+\varepsilon/16+\varepsilon/16+\varepsilon/8 \\
<\varepsilon/8+\varepsilon/8+\varepsilon/16+\varepsilon/16+\varepsilon/8=\varepsilon/2.
\end{multline*}
So for $t_1,t_2>t_0$,
\[
\abs{t_1^{-1}\surj_N(t_1\alpha)-t_2^{-1}\surj_N(t_2\alpha)}<\varepsilon,
\]
and $\lim_{t\rightarrow\infty}t^{-1}\surj_N(t\alpha)$ exists, as desired.
\end{proof}
\begin{lemma}
Let $\alpha,\beta\in\Theta(R)$ and $N\in\md R$ with $N\neq 0$.
\begin{enumerate}
\item[\bf 1] For $k\geq 0$, we have $\asn_N(k\alpha)=k\asn_N(\alpha)$.
\item[\bf 2] For $k\geq 0$, $0\leq \surj_N(k\alpha)\leq k\asn_N(\alpha)\leq k\norm{\alpha}/\mu_R(N)$.
\item[\bf 3] If $\alpha,\beta\geq 0$, then $\asn_N(\alpha+\beta)\geq \asn_N(\alpha)+\asn_N(\beta)$.
\item[\bf 4] $\abs{\asn_N(\alpha)-\asn_N(\beta)}\leq \norm{\alpha-\beta}$.
\item[\bf 5] $\asn_N$ is continuous.
\end{enumerate}
\end{lemma}
\begin{proof}
{\bf 1}. If $k=0$, then both sides are zero, and the assertion is clear. So we may assume that $k>0$. Then
\[
\asn_N(k\alpha)=\lim_{t\rightarrow\infty}\frac{1}{t}\surj_N(tk\alpha) =k\lim_{t\rightarrow\infty}\frac{1}{tk}\surj_N(tk\alpha)=k\asn_N(\alpha).
\]
{\bf 2}. We may assume that $k>0$. By {\bf 1}, replacing $k\alpha$ by $\alpha$, we may assume that $k=1$. Replacing $\alpha$ by $\sup\{0,\alpha\}$, we may assume that $\alpha\geq 0$. For $n\geq 0$, $n\langle \alpha\rangle \leq \langle n\alpha\rangle$. Hence, $n\surj_N(\alpha)\leq \surj_N(n\langle\alpha\rangle) \leq\surj_N(n\alpha)$. So $\surj_N(\alpha)\leq n^{-1}\surj_N(n\alpha)$. Passing to the limit, $\surj_N(\alpha)\leq \asn_N(\alpha)$. Similarly,
\[
\frac{1}{n}\surj_N(n\alpha)\leq \frac{\norm{\langle n\alpha\rangle}}{n\mu_R(N)} \leq \frac{\norm{n\alpha}}{n\mu_R(N)}=\frac{\norm{\alpha}}{\mu_R(N)}.
\]
Passing to the limit, $\asn_N(\alpha)\leq \frac{\norm{\alpha}}{\mu_R(N)}$, as desired.
{\bf 3}. By Lemma~\ref{alpha-surj.lem}, {\bf 1}, for $t>0$,
\[
\frac{1}{t}\surj_N(t\alpha)+\frac{1}{t}\surj_N(t\beta) \leq \frac{1}{t}\surj_N(t(\alpha+\beta)).
\]
Passing to the limit, $\asn_N(\alpha)+\asn_N(\beta)\leq \asn_N(\alpha+\beta)$.
{\bf 4}. By Lemma~\ref{alpha-surj.lem}, {\bf 2},
\begin{multline*}
\abs{\frac{1}{t}\surj_N(t\alpha)-\frac{1}{t}\surj_N(t\beta)} \leq \frac{1}{t}(\norm{t(\alpha-\beta)}+\nu(\inf\{t\alpha,t\beta\}))\\
= \norm{\alpha-\beta}+\nu(\inf\{\alpha,\beta\})/t.
\end{multline*}
Passing to the limit, $\abs{\asn_N(\alpha)-\asn_N(\beta)}\leq \norm{\alpha-\beta}$, as desired.
{\bf 5} is an immediate consequence of {\bf 4}.
\end{proof}
\section{Sannai's dual $F$-signature}\label{dual-F-signature.sec}
\paragraph In this section, let $p$ be a prime number, and $(R,\mathfrak{m},k)$ be an $F$-finite local ring of characteristic $p$ of dimension $d$. Let $\mathfrak{d}=\log_p[k:k^p]$, and $\delta=d+\mathfrak{d}$.
\paragraph In \cite{Sannai}, for $M\in\md R$, Sannai defined the dual $F$-signature of $M$ by
\[
s_R(M)=s(M):=\limsup_{e\rightarrow\infty}\frac{\surj_M({}^eM)}{p^{\delta e}}.
\]
$s(R)$ is the (usual) $F$-signature \cite{HL}, which is closely related to the strong $F$-regularity of $R$ \cite{AL}, while $s(\omega_R)$ measures the $F$-rationality of $R$, provided that $R$ is Cohen--Macaulay.
\begin{theorem}[{\cite[(3.16)]{Sannai}}]\label{Sannai.thm}
$R$ is $F$-rational if and only if $R$ is Cohen--Macaulay and $s(\omega_R)>0$.
\end{theorem}
Now we connect the Frobenius limit defined in \cite{SH} with the dual $F$-signature.
\begin{theorem}
Let $R$ be Henselian, and $M\in\md R$. Assume that the Frobenius limit
\[
\FL([M])=\lim_{e\rightarrow \infty}\frac{1}{p^{\delta e}}[{}^eM]\in\Theta(R)
\]
exists. Then
\[
s_R(M)=\lim_{e\rightarrow\infty}\frac{\surj_M({}^eM)}{p^{\delta e}}=\asn_M(\FL([M])).
\]
\end{theorem}
\begin{proof}
By Lemma~\ref{alpha-surj.lem},
\begin{multline*}
p^{-\delta e} \abs{\surj_M(p^{\delta e}\FL([M]))-\surj_M([{}^eM])} \\
\leq \norm{\FL([M])-p^{-\delta e}[{}^eM]}+p^{-\delta e}\nu(\FL([M])).
\end{multline*}
Taking the limit $e\rightarrow\infty$, we get the desired result.
\end{proof}
\begin{corollary}\label{s-positive.cor}
Let the assumptions be as in the theorem. Then the following are equivalent.
\begin{enumerate}
\item[\bf 1] $s(M)>0$.
\item[\bf 2] For any $N\in\md R$ such that $\supp([N])=\supp(\FL([M]))$, there exists some $r\geq 1$ and a surjective $R$-linear map $N^{\oplus r}\rightarrow M$.
\item[\bf 3] There exists some $N\in\md R$ such that $\supp([N])\subset\supp(\FL([M]))$ and a surjective $R$-linear map $N\rightarrow M$.
\end{enumerate}
\end{corollary}
\begin{proof}
{\bf 1$\Rightarrow$2}. As $\asn_M(\FL([M]))>0$, there exists some $t>0$ such that $\surj_M(t\FL([M]))>0$. By the choice of $N$, there exists some $r\geq 1$ such that $r[N]\geq t\FL([M])$, and so $\surj_M N^{\oplus r}\geq \surj_M(t\FL([M]))>0$.
{\bf 2$\Rightarrow$3}. Let $N=W_1\oplus\cdots\oplus W_s$, where $\{W_1,\ldots,W_s\}=\supp(\FL([M]))$. Then there exists some $r\geq 1$ and a surjective $R$-linear map $N^{\oplus r}\rightarrow M$, and $\supp[N^{\oplus r}]\subset \supp(\FL([M]))$.
{\bf 3$\Rightarrow$1}. By the choice of $N$, there exists some $k>0$ such that $k\FL([M])\geq [N]$. Then $s(M)=\asn_M(\FL([M]))\geq k^{-1}\asn_M[N]\geq k^{-1}\surj_M[N]>0$.
\end{proof}
\section{The dual $F$-signature of the ring of invariants}\label{main.sec}
Utilizing the result in \cite{SH} and the last section, we give a criterion for the condition $s(\omega_{\hat A})>0$ for the ring of invariants $A$, where $\hat A$ is the completion.
\paragraph Let $k$ be an algebraically closed field, $V=k^d$, and $G$ a finite subgroup of $\text{\sl{GL}}(V)$. In this section, we assume that $G$ does not have a pseudo-reflection, where we say that $g\in\text{\sl{GL}}(V)$ is a pseudo-reflection if $\rank(g-1_V)=1$. Let $v_1,\ldots,v_d$ be a fixed $k$-basis of $V$. Let $B:=\Sym V=k[v_1,\ldots,v_d]$, and $A=B^G$. Let $\mathfrak{m}$ and $\ensuremath{\mathfrak n}$ be the irrelevant ideals of $A$ and $B$, respectively. Let $\hat A$ and $\hat B$ be the completions of $A$ and $B$, respectively. For a $G$-module $W$, we define $M_W:=(B\otimes_k W)^G$. Let $k=V_0,V_1,\ldots,V_n$ be the irreducible representations of $G$. Let $P_i\rightarrow V_i$ be the projective cover. Set $M_i:=M_{P_i}=(B\otimes_k P_i)^G$. For a finite dimensional $G$-module $W$, $\det_W$ denotes the determinant representation ${\textstyle\bigwedge}^{\dim W} W$ of $W$.
Let $V_\nu=\det_V$ be the determinant representation of $V$.
\begin{lemma}\label{canonical.lem}
The canonical module $\omega_A$ of $A$ is isomorphic to $M_{V_\nu}=M_{\det_V}$.
\end{lemma}
\begin{proof}
See \cite[(14.28)]{Hashimoto12} and references therein.
\end{proof}
\begin{lemma}\label{selfinjective.lem}
Let $\Lambda$ be a selfinjective finite dimensional $k$-algebra, $L$ a simple (left) $\Lambda$-module, and $h:P\rightarrow L$ its projective cover. Let $M$ be a finitely generated indecomposable $\Lambda$-module. Then the following are equivalent.
\begin{enumerate}
\item[\bf 1] $\Ext^1_\Lambda(M,\rad P)=0$.
\item[\bf 2] $h_*:\Hom_\Lambda(M,P)\rightarrow \Hom_\Lambda(M,L)$ is surjective.
\item[\bf 3] $M$ is either projective, or $M/\rad M$ does not contain $L$.
\end{enumerate}
\end{lemma}
\begin{proof}
{\bf 1$\Leftrightarrow$2}. This is because
\[
\Hom_\Lambda(M,P)\xrightarrow{h_*} \Hom_\Lambda(M,L)\rightarrow \Ext^1_\Lambda(M,\rad P)\rightarrow \Ext^1_\Lambda(M,P)
\]
is exact and $\Ext^1_\Lambda(M,P)=0$ (since $P$ is injective).
{\bf 2$\Rightarrow$3}. Assume the contrary. Then as $M/\rad M$ contains $L$, there is a surjective map $M\rightarrow L$. By assumption, this map lifts to $M\rightarrow P$, and this is surjective by Nakayama's lemma. As $P$ is projective, this map splits. As $M$ is indecomposable, $M\cong P$, and this is a contradiction.
{\bf 3$\Rightarrow$2}. If $M$ is projective, then $h_*$ is obviously surjective. If $M/\rad M$ does not contain $L$, then $\Hom_\Lambda(M,L)=0$, and $h_*$ is obviously surjective.
\end{proof}
\begin{theorem}\label{main.thm}
Let $p$ divide the order $\abs{G}$ of $G$. Then the following are equivalent.
\begin{enumerate}
\item[\bf 1] $s(\omega_{\hat A})>0$.
\item[\bf 2] The canonical map $M_\nu\rightarrow M_{V_\nu}=\omega_A$ is surjective.
\item[\bf 3] $H^1(G,B\otimes_k \rad P_\nu)=0$.
\item[\bf 4] For any non-projective finitely generated indecomposable $G$-summand $M$ of $B$, $M$ does not contain $\det^{-1}_V$, the $k$-dual of $\det_V$.
\end{enumerate}
If these conditions hold, then $s(\omega_{\hat A})\geq 1/\abs{G}$.
\end{theorem}
\begin{proof}
We prove the equivalence of {\bf 2} and {\bf 3} first. Let $B=\bigoplus_j N_j$ be a decomposition into finitely generated indecomposable $G$-modules. Such a decomposition exists, since $B$ is a direct sum of finitely generated $G$-modules. The map $M_\nu\rightarrow M_{V_\nu}$ in {\bf 2} is the map
\[
(B\otimes P_\nu)^G\rightarrow (B\otimes \det_V)^G
\]
induced by the projective cover $P_\nu\rightarrow \det_V$. By the isomorphism $\Ext^i_G(N_j^*,?)\cong H^i(G,N_j\otimes ?)$, this map can be identified with the sum of the maps
\[
\Hom_G(N_j^*,P_\nu)\rightarrow \Hom_G(N_j^*,\det_V).
\]
On the other hand, {\bf 3} is equivalent to saying that $\Ext^1_G(N_j^*,\rad P_\nu)=0$ for any $j$. So the equivalence {\bf 2$\Leftrightarrow$3} follows from Lemma~\ref{selfinjective.lem}. Similarly, {\bf 4} is equivalent to saying that each $N_j^*$ is injective (or equivalently, projective, as $kG$ is selfinjective) or $N_j^*/\rad N_j^*\cong (\soc N_j)^*$ does not contain $\det_V$. This is equivalent to saying that $N_j$ is either projective, or $N_j$ (or equivalently, $\soc N_j$) does not contain $\det_V^{-1}$. So {\bf 4$\Leftrightarrow$2} follows from Lemma~\ref{selfinjective.lem}.
We prove {\bf 2$\Rightarrow$1}. As there is a surjective map $M_\nu\rightarrow \omega_A$ and
\[
\FL([\omega_{\hat A}])=\frac{1}{|G|}\sum_{i=0}^n(\dim V_i)[\hat M_i]
\]
by \cite[(5.1)]{SH}, $s(\omega_{\hat A})>0$ by Corollary~\ref{s-positive.cor}. Moreover,
\[
s(\omega_{\hat A})=\asn_{\omega_{\hat A}}(\FL([\omega_{\hat A}])) \geq \frac{\dim V_\nu}{\abs{G}}\asn_{\omega_{\hat A}}(\hat M_\nu) \geq \frac{1}{\abs{G}}\surj_{\omega_A}(M_\nu)\geq \frac{1}{\abs{G}},
\]
and the last assertion has been proved.
We prove {\bf 1$\Rightarrow$2}.
By \cite[(4.16)]{SH},
\[
\FL([\omega_{\hat A}])=\frac{1}{\abs{G}}[\hat B].
\]
So by Corollary~\ref{s-positive.cor}, there is some $r>0$ and a surjective map $h:\hat B^r\rightarrow \omega_{\hat A}$. By the equivalence $\gamma=(\hat B\otimes_{\hat A}?)^{**}: \Ref(\hat A)\rightarrow \Ref(G,\hat B)$ (see \cite[(2.4)]{HN} and \cite[(5.4)]{SH}), there corresponds
\[
\tilde h=\gamma(h): (\hat B\otimes_k kG)^r\rightarrow \hat B\otimes_k \det_V.
\]
As $\hat B\otimes_k kG$ is a projective object in the category of $(G,\hat B)$-modules, $\tilde h$ factors through the surjection
\[
\hat B\otimes_k P_\nu\rightarrow \hat B\otimes_k \det_V.
\]
Returning to the category $\Ref(\hat A)$, $h$ factors through $\hat M_\nu =(\hat B\otimes_k P_\nu)^G\rightarrow \omega_{\hat A}$. So this map must be surjective, and {\bf 2} follows.
\end{proof}
\begin{corollary}
Assume that $p$ divides $\abs{G}$. If $s(\omega_{\hat A})>0$, then $\det^{-1}_V$ is not a direct summand of $B$.
\end{corollary}
\begin{proof}
Being a one-dimensional representation, $\det^{-1}_V$ is not projective, since $p$ divides $\abs{G}$. Thus the result follows from {\bf 1$\Rightarrow$4} of the theorem.
\end{proof}
\begin{lemma}
Let $M$ and $N$ be in $\Ref(G,B)$. There is a natural isomorphism
\[
\gamma:\Hom_A(M^G,N^G)\rightarrow \Hom_B(M,N)^G.
\]
\end{lemma}
\begin{proof}
This is simply because $\gamma=(B\otimes_A?)^{**}:\Ref(A)\rightarrow \Ref(G,B)$ is an equivalence, and $\Hom_B(M,N)^G = \Hom_{G,B}(M,N)$.
\end{proof}
\begin{theorem}
$A$ is $F$-rational if and only if the following three conditions hold.
\begin{enumerate}
\item[\bf 1] $A$ is Cohen--Macaulay.
\item[\bf 2] $H^1(G,B)=0$.
\item[\bf 3] $(B\otimes_k (I/k))^G$ is a maximal Cohen--Macaulay $A$-module, where $I$ is the injective hull of the trivial $G$-module $k$.
\end{enumerate}
\end{theorem}
\begin{proof}
If the order $|G|$ of $G$ is not divisible by $p$, then $A$ is $F$-rational, and the three conditions hold. So we may assume that $|G|$ is divisible by $p$.
Assume that $A$ is $F$-rational.
Then $A$ is Cohen--Macaulay. As $s(\omega_{\hat A})>0$, we have that $H^1(G,B\otimes_k \rad P_\nu)=0$, and
\begin{equation}\label{F-rational.eq}
0\rightarrow (B\otimes\rad P_\nu)^G \rightarrow (B\otimes P_\nu)^G \rightarrow (B\otimes\det_V)^G \rightarrow 0
\end{equation}
is exact. As $M_\nu=(B\otimes P_\nu)^G$ is a direct summand of $B=M_{kG}=(B\otimes kG)^G$, it is a maximal Cohen--Macaulay module. As $(B\otimes\det_V)^G=\omega_A$, it is also a maximal Cohen--Macaulay module. So the canonical dual of the exact sequence (\ref{F-rational.eq}) is still exact. As there is an identification
\[
\Hom_A((B\otimes_k ?)^G,\omega_A)=\Hom_B(B\otimes_k ?,B\otimes_k \det_V)^G =(B\otimes_k ?^*\otimes_k \det_V)^G,
\]
we get the exact sequence of maximal Cohen--Macaulay $A$-modules
\begin{equation}\label{F-rational2.eq}
0\rightarrow A\rightarrow (B\otimes_k P_\nu^*\otimes_k \det_V)^G \rightarrow (B\otimes_k (\rad P_\nu)^*\otimes_k \det_V)^G \rightarrow 0.
\end{equation}
As $(\rad P_\nu)^*\otimes\det_V\cong I/k$, $(B\otimes(I/k))^G$ is maximal Cohen--Macaulay. As $I$ is an injective $G$-module, $B\otimes_k I$ is also injective as a $G$-module, and hence $H^1(G,B\otimes_k I)=0$. By the long exact sequence of $G$-cohomology, we get $H^1(G,B)=0$.
The converse is similar. Dualizing (\ref{F-rational2.eq}), we have that (\ref{F-rational.eq}) is exact.
\end{proof}
\begin{corollary}
If $A$ is $F$-rational, then $H^1(G,k)=0$.
\end{corollary}
\begin{proof}
$k$ is a direct summand of $B$, and $H^1(G,B)=0$.
\end{proof}
\begin{example}
If $p=2$ and $G=S_2$ or $S_3$, the symmetric groups, then $H^1(G,k)\neq 0$. So $A$ is not $F$-rational, provided $G$ does not have a pseudo-reflection.
\end{example}
\section{An example of an $F$-rational ring of invariants which is not $F$-regular}
\label{main-example.sec}
\paragraph Let $p$ be an odd prime number, and $k$ an algebraically closed field of characteristic $p$.
\paragraph Let us identify $\Map(\Bbb F_p,\Bbb F_p)^\times$, the group of bijections of $\Bbb F_p$, with the symmetric group $S_p$.
We write $\Bbb F_p=\{0,1,\ldots,p-1\}$. Define
\begin{eqnarray*}
G & := & \{\phi\in S_p\mid \exists a\in\Bbb F_p^\times\;\exists b\in\Bbb F_p\; \forall x\in\Bbb F_p\;\phi(x)=ax+b\}\subset S_p;\\
Q & := & \{\phi\in S_p\mid \exists b\in\Bbb F_p\; \forall x\in\Bbb F_p\;\phi(x)=x+b\}\subset G;\\
\Gamma & := & \{\phi\in S_p\mid \exists a\in\Bbb F_p^\times\; \forall x\in\Bbb F_p\;\phi(x)=ax\}\subset G.
\end{eqnarray*}
$G$ is a subgroup of $S_p$, $Q$ is a normal subgroup of $G$, and $\Gamma$ is a subgroup of $G$ such that $G=Q\rtimes \Gamma$. Note that $Q$ is cyclic of order $p$ and $\Gamma$ is cyclic of order $p-1$, so $G$ has order $p(p-1)$.
\paragraph Let $\alpha$ be a primitive element of $\Bbb F_p$ (that is, a generator of the cyclic group $\Bbb F_p^\times$), and let $\tau\in\Gamma$ be the element given by $\tau(x)=\alpha x$. The only involution of $\Gamma$ is $\tau^{(p-1)/2}$, the multiplication by $-1$. As a permutation, it is
\[
(1\;(p-1))(2\;(p-2))\cdots((p-1)/2\;(p+1)/2),
\]
which is a transposition if and only if $p=3$. As $\Gamma$ contains a Sylow $2$-subgroup of $G$, any transposition of $G$ is conjugate to an element of $\Gamma$, and that element must again be a transposition. It follows that $G$ has a transposition if and only if $p=3$.
\paragraph Now let $G\subset S_p$ act on $P=k^p=\langle w_0,w_1,\ldots,w_{p-1}\rangle$ by the permutation action, that is, $\phi w_i=w_{\phi(i)}$ for $\phi\in G$ and $i\in\Bbb F_p$. An element $g\in G\subset \text{\sl{GL}}(P)$ is a pseudo-reflection if and only if it is a transposition. So $G$ has a pseudo-reflection if and only if $p=3$. Let $r\geq 1$, and set $V=P^{\oplus r}$. $G\subset \text{\sl{GL}}(V)$ has a pseudo-reflection if and only if $p=3$ and $r=1$.
\paragraph Let $S=\Sym P$.
\begin{lemma}\label{S-decomposition.lem}
Let $M$ be any finitely generated non-projective indecomposable $G$-summand of $S$. Then $M\cong k$.
\end{lemma} \begin{proof} Let $\Omega=\{w^\lambda=w_0^{\lambda_0}\cdots w_{p-1}^{\lambda_{p-1}}\mid \lambda =(\lambda_0,\ldots,\lambda_{p-1})\in\Bbb Z_{\geq 0}^p\}$ be the set of monomials of $S$. $G$ acts on the set $\Omega$. Let $\Theta$ be the set of orbits of this action of $G$ on $\Omega$. Let $G w^\lambda\in\Theta$. If $\lambda=(c,c,\ldots,c)$ for some $c\geq 0$, then $G w^\lambda=\{w^\lambda\}$, and hence $(kG)w^\lambda\cong k$. Otherwise, $Q$ does not have a fixed point in its action on $G w^\lambda$. As the order of $Q$ is $p$, $Q$ acts freely on $Gw^\lambda$. Hence $(kG)w^\lambda$ is $kQ$-free. Since the order of $G/Q\cong\Gamma$ is $p-1$, the Lyndon--Hochschild--Serre spectral sequence collapses, and we have $H^i(G,M)\cong H^i(Q,M)^\Gamma$ for any $G$-module $M$. So a $Q$-injective (or equivalently, $Q$-projective) $G$-module is $G$-injective (or equivalently, $G$-projective). As we have $S=\bigoplus_{\theta\in\Theta}k\theta$ as a $G$-module, $S$ is a direct sum of $G$-projective modules and copies of $k$. Using the Krull--Schmidt theorem, it is easy to see that $M\cong k$. \end{proof} \begin{lemma}\label{projective-tensor.lem} Let $U$ and $W$ be $G$-modules. \begin{enumerate} \item[\bf 1] $kG\otimes_k W\cong kG\otimes_k W'$, where $W'$ is the $k$-vector space $W$ with the trivial $G$-action. \item[\bf 2] If $U$ is $G$-projective, then $U\otimes_k W$ is $G$-projective. \end{enumerate} \end{lemma} \begin{proof} {\bf 1}. $g\otimes w \mapsto g\otimes g^{-1}w$ gives such an isomorphism. {\bf 2} follows from {\bf 1}. \end{proof} \paragraph Let $B:=\Sym V=\Sym P^{\oplus r}\cong S^{\otimes r}$. \begin{lemma}\label{B-decomposition.lem} Let $M$ be any finitely generated non-projective indecomposable $G$-summand of $B$. Then $M\cong k$. \end{lemma} \begin{proof} Follows immediately from Lemma~\ref{S-decomposition.lem} and Lemma~\ref{projective-tensor.lem}. \end{proof} \begin{lemma}\label{det_V.lem} Let $k_-$ denote the sign representation. 
Then $\det_V\cong k_-$ if $r$ is odd, and $\det_V\cong k$ if $r$ is even. $k_-$ is not isomorphic to $k$. \end{lemma} \begin{proof} As the determinant of a permutation matrix is the signature of the permutation, $\det_P\cong k_-$. Hence $\det_V \cong (\det_P)^{\otimes r}\cong (k_-)^{\otimes r}$, and we get the desired result. The last assertion is clear, since $\tau=(x\mapsto \alpha x)\in\Gamma$ is a cyclic permutation of order $p-1$, and is an odd permutation. \end{proof} \begin{theorem}\label{kemper.thm} We have \[ \depth A=\min\{rp,2(p-1)+r\}. \] Hence $A$ is Cohen--Macaulay if and only if $r\leq 2$. \end{theorem} \begin{proof} This is an immediate consequence of \cite[(3.3)]{Kemper}. \end{proof} \begin{theorem}\label{main-example.thm} Let $p$, $r$, $G$, $P$, $V=P^{\oplus r}$, $B=\Sym V$ be as above, and $A:=B^G$. Then \begin{enumerate} \item[\bf 1] $G$ is a finite subgroup of $\text{\sl{GL}}(V)$ of order $p(p-1)$. \item[\bf 2] $G\subset \text{\sl{GL}}(V)$ has a pseudo-reflection if and only if $p=3$ and $r=1$. If so, $G=S_3$ is the symmetric group acting on $B=k[w_0,w_1,w_2]$ by permutations on $w_0,w_1,w_2$. The ring of invariants $A$ is the polynomial ring. Otherwise, $A$ is not weakly $F$-regular. \item[\bf 3] If $p\geq 5$ and $r=1$, then $A$ is $F$-rational, but not weakly $F$-regular. \item[\bf 4] If $r=2$, then $A$ is Gorenstein, but not $F$-rational. \item[\bf 5] If $r\geq 3$ and $r$ is odd, then $s(\omega_{\hat A})>0$ but $A$ is not Cohen--Macaulay. \item[\bf 6] If $r\geq 4$ and $r$ is even, then $A$ is quasi-Gorenstein, but not Cohen--Macaulay. \end{enumerate} \end{theorem} \begin{proof} We have already seen {\bf 1} and the first statement of {\bf 2}. If $p=3$ and $r=1$, then $G\subset S_3$ has order $6$, and $G=S_3$. So $A$ is the polynomial ring generated by the symmetric polynomials. 
Otherwise, as $G$ does not have a pseudo-reflection and the order $\abs{G}$ of $G$ is divisible by $p$, $A$ is not weakly $F$-regular, see \cite{Broer}, \cite{Yasuda}, and \cite[(5.8)]{SH}. The only non-projective finitely generated indecomposable $G$-summand of $B$ is $k$ by Lemma~\ref{B-decomposition.lem}, and $\det_V^{-1}\cong k$ if and only if $r$ is even by Lemma~\ref{det_V.lem}. Hence we have that $s(\omega_{\hat A})>0$ if and only if $r$ is odd by Theorem~\ref{main.thm}. {\bf 3}. $A$ is not weakly $F$-regular by {\bf 2}. As $r=1$ is odd, $s(\omega_{\hat A})>0$. On the other hand, $A$ is Cohen--Macaulay by Theorem~\ref{kemper.thm}. Hence $A$ is $F$-rational by Theorem~\ref{Sannai.thm}. {\bf 4}. By Theorem~\ref{kemper.thm}, $A$ is Cohen--Macaulay. On the other hand, by Lemma~\ref{det_V.lem}, $\det_V\cong k$, and hence $\omega_A\cong (B\otimes_k \det_V)^G\cong B^G\cong A$ by Lemma~\ref{canonical.lem}. So $A$ is Gorenstein. As $A$ is Gorenstein but not weakly $F$-regular, it is not $F$-rational by \cite[(4.7)]{HH2}. {\bf 5} and {\bf 6} are easy. \end{proof} \end{document}
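The group-theoretic facts used in the example above, namely that $G$ has order $p(p-1)$ and contains a transposition (equivalently, a pseudo-reflection in the permutation representation) precisely when $p=3$, can be verified by brute force. The following Python sketch (our own check; the function name is invented) enumerates the affine maps $x\mapsto ax+b$ over $\Bbb F_p$:

```python
from itertools import product

def affine_transposition_check(p):
    """Enumerate G = {x -> a*x + b : a in F_p^*, b in F_p} as permutations
    of F_p and report (order of G, whether G contains a transposition)."""
    has_transposition = False
    order = 0
    for a, b in product(range(1, p), range(p)):
        order += 1
        perm = [(a * x + b) % p for x in range(p)]
        moved = sum(1 for x in range(p) if perm[x] != x)
        if moved == 2:  # a transposition moves exactly two points
            has_transposition = True
    return order, has_transposition

for p in [3, 5, 7]:
    order, has_t = affine_transposition_check(p)
    assert order == p * (p - 1)
    assert has_t == (p == 3)
```

This matches the argument in the text: a map with $a\neq 1$ fixes exactly one point (so it moves $p-1$ points), and a nontrivial translation moves all $p$ points, so a transposition is only possible when $p-1=2$.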
Golden rectangle

In geometry, a golden rectangle is a rectangle whose side lengths are in the golden ratio, $1:{\tfrac {1+{\sqrt {5}}}{2}}$, which is $1:\varphi $ (the Greek letter phi), where $\varphi $ is approximately 1.618. Golden rectangles exhibit a special form of self-similarity: All rectangles created by adding or removing a square from an end are golden rectangles as well.

Construction

A golden rectangle can be constructed with only a straightedge and compass in four simple steps:

1. Draw a square.
2. Draw a line from the midpoint of one side of the square to an opposite corner.
3. Use that line as the radius to draw an arc that defines the height of the rectangle.
4. Complete the golden rectangle.

A distinctive feature of this shape is that when a square section is added—or removed—the product is another golden rectangle, having the same aspect ratio as the first. Square addition or removal can be repeated infinitely, in which case corresponding corners of the squares form an infinite sequence of points on the golden spiral, the unique logarithmic spiral with this property. Diagonal lines drawn between the first two orders of embedded golden rectangles will define the intersection point of the diagonals of all the embedded golden rectangles; Clifford A. Pickover referred to this point as "the Eye of God".[2]

History

The proportions of the golden rectangle have been observed as early as the Babylonian Tablet of Shamash (c. 
888–855 BC),[3][4] though Mario Livio calls any knowledge of the golden ratio before the Ancient Greeks "doubtful".[5] According to Livio, since the publication of Luca Pacioli's Divina proportione in 1509, "the Golden Ratio started to become available to artists in theoretical treatises that were not overly mathematical, that they could actually use."[6] The 1927 Villa Stein designed by Le Corbusier, some of whose architecture utilizes the golden ratio, features dimensions that closely approximate golden rectangles.[7]

Relation to regular polygons and polyhedra

Euclid gives an alternative construction of the golden rectangle using three polygons circumscribed by congruent circles: a regular decagon, hexagon, and pentagon. The respective lengths a, b, and c of the sides of these three polygons satisfy the equation $a^2 + b^2 = c^2$, so line segments with these lengths form a right triangle (by the converse of the Pythagorean theorem). The ratio of the side length of the hexagon to the decagon is the golden ratio, so this triangle forms half of a golden rectangle.[8]

The convex hull of two opposite edges of a regular icosahedron forms a golden rectangle. The twelve vertices of the icosahedron can be decomposed in this way into three mutually-perpendicular golden rectangles, whose boundaries are linked in the pattern of the Borromean rings.[9]

See also

• Fibonacci number – Numbers obtained by adding the two previous ones
• Golden rhombus – Rhombus with diagonals in the golden ratio
• Kepler triangle – Right triangle related to the golden ratio
• Rabatment of the rectangle – Cutting a square from a rectangle
• Silver ratio – Ratio of numbers, approximately 1:2.4
• Plastic number – Algebraic number, approximately 1.325
• Golden angle – Circle with sectors in golden ratio

Notes

1. 
$({\tfrac {1}{2}})^{2}+1^{2}={\tfrac {5}{2^{2}}}$

References

1. Posamentier, Alfred S.; Lehmann, Ingmar (2011). The Glorious Golden Ratio. Prometheus Books. p. 11. ISBN 978-1-61614-424-1.
2. Livio, Mario (2003) [2002]. The Golden Ratio: The Story of Phi, The World's Most Astonishing Number. New York City: Broadway Books. p. 85. ISBN 0-7679-0816-3.
3. Olsen, Scott (2006). The Golden Section: Nature's Greatest Secret. Glastonbury: Wooden Books. p. 3. ISBN 978-1-904263-47-0.
4. Van Mersbergen, Audrey M., Rhetorical Prototypes in Architecture: Measuring the Acropolis with a Philosophical Polemic, Communication Quarterly, Vol. 46, 1998 ("a 'Golden Rectangle' has a ratio of the length of its sides equal to 1:1.61803+. The Parthenon is of these dimensions.")
5. Livio, Mario. "The Golden Ratio in Art: Drawing heavily from The Golden Ratio" (PDF). p. 6. Retrieved 11 September 2019.
6. Livio, Mario (2003) [2002]. The Golden Ratio: The Story of Phi, The World's Most Astonishing Number. New York City: Broadway Books. p. 136. ISBN 0-7679-0816-3.
7. Le Corbusier, The Modulor, p. 35, as cited in Padovan, Richard, Proportion: Science, Philosophy, Architecture (1999), p. 320. Taylor & Francis. ISBN 0-419-22780-6: "Both the paintings and the architectural designs make use of the golden section".
8. Euclid, Elements, Book XIII, Proposition 10.
9. Burger, Edward B.; Starbird, Michael P. (2005). The Heart of Mathematics: An Invitation to Effective Thinking. Springer. p. 382. ISBN 9781931914413.

External links

• Weisstein, Eric W. "Golden Rectangle". MathWorld.
• Weisstein, Eric W. "Golden Ratio". MathWorld. 
• The Golden Mean and the Physics of Aesthetics
• Golden rectangle demonstration – with interactive animation
• From golden rectangle to golden quadrilaterals – explores some different possible golden quadrilaterals
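The self-similarity property and Euclid's polygon relation described above are easy to confirm numerically. The following Python check is an illustration added here (not part of the original article):

```python
import math

phi = (1 + math.sqrt(5)) / 2

# Self-similarity: removing a 1x1 square from a 1 x phi rectangle
# leaves a (phi - 1) x 1 rectangle with the same aspect ratio.
assert math.isclose(1 / (phi - 1), phi)

# Construction: the arc radius from the midpoint of the square's side
# is sqrt((1/2)^2 + 1^2) = sqrt(5)/2, so the long side is 1/2 + sqrt(5)/2 = phi.
assert math.isclose(0.5 + math.sqrt(5) / 2, phi)

# Euclid's relation: a regular n-gon inscribed in a unit circle has side 2*sin(pi/n);
# decagon^2 + hexagon^2 = pentagon^2, and hexagon/decagon is the golden ratio.
side = lambda n: 2 * math.sin(math.pi / n)
assert math.isclose(side(10) ** 2 + side(6) ** 2, side(5) ** 2)
assert math.isclose(side(6) / side(10), phi)
```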
May 2020, 25(5): 1729-1755. doi: 10.3934/dcdsb.2019249

A gradient-type algorithm for constrained optimization with application to microstructure optimization

Cristian Barbarosie, Anca-Maria Toader and Sérgio Lopes
CMAF-CIO, Faculdade de Ciências, Universidade de Lisboa, 1749-016 Lisboa, Portugal
Dep. Area of Mathematics, ISEL, Instituto Politécnico de Lisboa, Rua Conselheiro Emídio Navarro, 1959-007 Lisboa, Portugal
* Corresponding author: [email protected]
Received May 2019; Revised June 2019; Published May 2020; Early access November 2019
Fund Project: Supported by National Funding from FCT - Fundação para a Ciência e a Tecnologia, under the project UID/MAT/04561/2019.

We propose a method to optimize periodic microstructures for obtaining homogenized materials with negative Poisson ratio, using shape and/or topology variations in the model hole. The proposed approach employs worst case design in order to minimize the Poisson ratio of the (possibly anisotropic) homogenized elastic tensor in several prescribed directions. We use a minimization algorithm for inequality constraints based on an active set strategy and on a new algorithm for solving minimization problems with equality constraints, belonging to the class of null-space gradient methods. It uses first order derivatives of both the objective function and the constraints. The step is computed as a sum between a steepest descent step (minimizing the objective functional) and a correction step related to the Newton method (aiming to solve the equality constraints). The linear combination between these two steps involves coefficients similar to Lagrange multipliers which are computed in a natural way based on the Newton method. 
The algorithm uses no projection and thus the iterates are not feasible; the constraints are only satisfied in the limit (after convergence). A local convergence result is proven for a general nonlinear setting, where both the objective functional and the constraints are not necessarily convex functions.

Keywords: Nonlinear programming, constrained minimization, worst case design, optimization of microstructures, porous materials, microstructure, auxetic materials.

Mathematics Subject Classification: Primary: 65K10, 49M15; Secondary: 90C52.

Citation: Cristian Barbarosie, Anca-Maria Toader, Sérgio Lopes. A gradient-type algorithm for constrained optimization with application to microstructure optimization. Discrete & Continuous Dynamical Systems - B, 2020, 25 (5) : 1729-1755. doi: 10.3934/dcdsb.2019249

Figure 1. Periodicity cell with model hole (zoomed)
Figure 2. Periodically perforated plane $ {\mathbb{R}}^2 _{\hbox{ perf}} $
Figure 3. Optimized microstructures with respect to one direction only
Figure 4. Algorithm
Figure 5. Initial guess and final microstructure for square periodicity
Figure 6. History of convergence, zoom of the first 40 iterations and zoom of the last 6 iterations
Figure 7. Initial guess and final microstructure for hexagonal periodicity
Figure 8. History of convergence, zoom of the first 40 iterations, zoom of the last 8 iterations
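The stepping rule sketched in the abstract, a steepest-descent part plus a Newton-type correction for the equality constraints with multiplier-like coefficients, can be illustrated on a toy problem. The Python snippet below is a generic sketch of this class of null-space gradient methods (it is not the authors' actual algorithm, and all names are invented); note that the starting iterate is infeasible and the constraint is only enforced through the correction step:

```python
# Toy problem: minimize f(x, y) = x^2 + 2*y^2  subject to  c(x, y) = x + y - 1 = 0.
# Each step = alpha * (gradient corrected by a multiplier-like term, tangent to c = 0)
#           + (Newton correction toward the constraint set).

def null_space_gradient(alpha=0.1, iters=300):
    x, y = 0.0, 0.0                      # infeasible starting point
    for _ in range(iters):
        gx, gy = 2.0 * x, 4.0 * y        # gradient of the objective f
        jx, jy = 1.0, 1.0                # gradient (Jacobian) of the constraint c
        c = x + y - 1.0
        jj = jx * jx + jy * jy
        lam = (jx * gx + jy * gy) / jj   # multiplier-like coefficient
        dx = -(gx - lam * jx)            # descent part, tangent to the constraint
        dy = -(gy - lam * jy)
        nx = -jx * c / jj                # Newton correction solving c = 0
        ny = -jy * c / jj
        x, y = x + alpha * dx + nx, y + alpha * dy + ny
    return x, y

x, y = null_space_gradient()
# converges to the constrained minimizer (2/3, 1/3)
assert abs(x - 2 / 3) < 1e-8 and abs(y - 1 / 3) < 1e-8
```

For this linear constraint the Newton correction lands on the feasible set after one step, after which the iteration reduces to gradient descent within the tangent space of the constraint.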
\begin{document} \begin{frontmatter} \title {A Family of Multistep Methods with Zero Phase-Lag and Derivatives for the Numerical Integration of Oscillatory ODEs} \author[UoP]{Z. A. Anastassi} \ead{[email protected]} \author[UoP]{D. S. Vlachos} \ead{[email protected]} \author[UoP]{T. E. Simos\fnref{simos}} \fntext[simos]{Highly Cited Researcher, Active Member of the European Academy of Sciences and Arts, Address: Dr. T.E. Simos, 26 Menelaou Street, Amfithea - Paleon Faliron, GR-175 64 Athens, GREECE, Tel: 0030 210 94 20 091} \ead{[email protected], [email protected]} \address[UoP]{Laboratory of Computer Sciences,\\ Department of Computer Science and Technology,\\ Faculty of Sciences and Technology, University of Peloponnese\\ GR-22 100 Tripolis, GREECE} \begin{abstract} \noindent In this paper we develop a family of three 8-step methods, optimized for the numerical integration of oscillatory ordinary differential equations. We have nullified the phase-lag of the methods and its first $r$ derivatives, where $r=\{1,2,3\}$. We show that with this new technique, the method gains efficiency with each derivative of the phase-lag nullified. This is the case for the integration of both the Schr\"odinger equation and the N-body problem. A local truncation error analysis is performed, which, for the case of the Schr\"odinger equation, also shows the connection between the error and the energy, revealing the importance of the zero phase-lag derivatives. The stability analysis also shows that the methods with more phase-lag derivatives nullified have a larger interval of periodicity. 
\end{abstract} \begin{keyword} Schr\"{o}dinger equation \sep N-body problem \sep phase-lag \sep derivatives \sep initial value problems \sep oscillating solution \sep symmetric \sep multistep \sep explicit \PACS 0.260 \sep 95.10.E \end{keyword} \end{frontmatter} \section{Introduction} The numerical integration of systems of ordinary differential equations with oscillatory solutions has been the subject of research during the past decades. ODEs of this type often arise in real problems, such as the N-body problem and the Schr\"odinger equation. There are some special techniques for optimizing numerical methods. Trigonometrical fitting and phase-fitting are some of them, producing methods with variable coefficients, which depend on $v=\omega h$, where $\omega$ is the dominant frequency of the problem and $h$ is the step length of integration. For example, Raptis and Allison have developed a two-step exponentially-fitted method of order four in \cite{rallison} and Kalogiratou and Simos have constructed a two-step P-stable exponentially-fitted method of order four in \cite{kalogiratou}. Also Panopoulos, Anastassi and Simos have constructed two optimized eight-step methods with high or infinite order of phase-lag in \cite{panopoulos_match}. Some other notable multistep methods for the numerical solution of oscillating IVPs have been developed by Chawla and Rao in \cite{chawla}, who produced a three-stage, two-step P-stable method with minimal phase-lag and order six, and by Henrici in \cite{henrici}, who produced a four-step symmetric method of order six. Also some recent research work in numerical methods can be found in \cite{anastassi1}, \cite{anastassi2}, \cite{anastassi3}, \cite{Meyer}, \cite{ref_12}, \cite{ref_14}, \cite{vanden_jnaiam}, \cite{cash_jnaiam1}, \cite{cash_jnaiam2}, \cite{iavernaro_jnaiam1}, \cite{simos_cole1}, \cite{simos_cole2} and \cite{Psihoyios_cole}. 
Trigonometrically fitted methods of high trigonometric order are well known for their high efficiency in the integration of the Schr\"odinger equation, especially when using a high value of energy. However, higher trigonometric order does not render them more efficient for all types of oscillatory problems. On the other hand, phase-lag alone does not give us the opportunity to provide methods that, for example, perform well when integrating the Schr\"odinger equation for high values of energy. In this paper we present a methodology for optimizing numerical methods, through the use of phase-lag and its derivatives with respect to $v$. More specifically, given a classical (i.e. with constant coefficients) numerical method, we can provide a family of optimized methods, each of which has zero $\{PL\}$ or zero $\{PL$ and $PL'\}$ or zero $\{PL$, $PL'$ and $PL''\}$ etc. With this new technique we provide methods that perform well during the integration of the Schr\"odinger equation for high values of energy, but also perform well on other real problems with oscillatory solutions, like the N-body problem. \section{Phase-lag and stability analysis of symmetric multistep methods} For the numerical solution of the initial value problem \begin{equation} \label{ivp_definition} y'' = f(x,y) \end{equation} \noindent multistep methods of the form \begin{equation} \label{multistep_definition} \sum\limits_{i=0}^{m}{a_{i}y_{n+i}} = h^{2}\sum\limits_{i=0}^{m}{b_{i}f(x_{n+i},y_{n+i})} \end{equation} with $m$ steps can be used over the equally spaced intervals $\left\{x_{i}\right\}^{m}_{i=0} \in [a,b]$ and $h=|x_{i+1}-x_{i}|$, \, $i=0(1)m-1$. If the method is symmetric then $a_i=a_{m-i}$ and $b_i=b_{m-i}$, \, $i=0(1)\lfloor \frac{m}{2} \rfloor$. Method \eqref{multistep_definition} is associated with the operator \begin{eqnarray} \label{exp_operator} L(x) = \sum\limits_{i=0}^{m}{a_{i}u(x+ih)} - h^{2}\sum\limits_{i=0}^{m}{b_{i}u''(x+ih)} \end{eqnarray} \noindent where $u \in C^2$. 
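To make the general form \eqref{multistep_definition} concrete, the short Python sketch below (an illustration of ours, using the classical symmetric two-step Numerov method with $m=2$, $a_0=a_2=1$, $a_1=-2$, $b_0=b_2=1/12$, $b_1=10/12$; this is not one of the methods constructed in this paper) integrates the test equation $y''=-\omega^2 y$ with exact starting values and checks the accuracy of the computed solution:

```python
import math

def numerov(omega, h, n_steps):
    """Numerov's method for y'' = -omega**2 * y, i.e.
    y_{n+1} - 2 y_n + y_{n-1} = (h^2/12) (f_{n+1} + 10 f_n + f_{n-1}),
    started from the exact values of cos(omega * x)."""
    c = (omega * h) ** 2 / 12.0
    y = [1.0, math.cos(omega * h)]
    for n in range(1, n_steps):
        # with f = -omega^2 y the implicit relation becomes explicit:
        y.append(((2 - 10 * c) * y[n] - (1 + c) * y[n - 1]) / (1 + c))
    return y

omega, h, n = 1.0, 0.1, 100
y = numerov(omega, h, n)
err = max(abs(yk - math.cos(omega * k * h)) for k, yk in enumerate(y))
assert err < 1e-5   # fourth-order accuracy over ten time units
```

The same skeleton applies to the eight-step methods developed below; only the coefficients $a_i$, $b_i$ and the number of starting values change.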
\begin{defn} \label{defn_exp1} \emph{The multistep method \eqref{exp_operator} is called algebraic of order $p$ if the associated linear operator $L$ vanishes for any linear combination of the linearly independent functions $\,1,\, x,\,x^2,\, \ldots ,\, x^{p+1}$.} \end{defn} When a symmetric $2k$-step method, that is for $i=-k(1)k$, is applied to the scalar test equation \begin{equation} \label{stab_eq} y''=-\omega^2 y \end{equation} a difference equation of the form \begin{eqnarray} \label{phl_multi_de} \nonumber A_{k} (v)y_{n + k} + ... + A_{1} (v)y_{n + 1} + A_{0} (v)y_{n}\\ + A_{1}(v)y_{n - 1} + ... + A_{k} (v)y_{n - k} = 0 \end{eqnarray} \noindent is obtained, where $v = \omega h$, $h$ is the step length and $A_{0} (v)$, $A_{1} (v),\ldots$, $ A_{k} (v)$ are polynomials of $v$. The characteristic equation associated with \eqref{phl_multi_de} is \begin{eqnarray} \label{phl_multi_ce} A_{k} (v)s^{k} + ... + A_{1} (v)s + A_{0} (v) + A_{1} (v)s^{ - 1} + ... + A_{k} (v)s^{ - k} = 0 \end{eqnarray} \begin{theo} \emph{\cite{royal}} The symmetric $2k$-step method with characteristic equation given by \eqref{phl_multi_ce} has phase-lag order $q$ and phase-lag constant $c$ given by \begin{equation} \label{phl_multi_defn} - c v ^{q + 2} + O(v^{q + 4}) = {\frac{{2A_{k} (v)\cos (k v ) + ... + 2A_{j} (v)\cos (j v ) + ... + A_{0} (v)}}{{2k^{2}A_{k} (v) + ... + 2j^{2}A_{j} (v) + ... + 2A_{1} (v)}}} \end{equation} \end{theo} The formula proposed from the above theorem gives us a direct method to calculate the phase-lag of any symmetric $2k$- step method. The characteristic equation has $m$ characteristic roots $\lambda_{i}, \; i=0(1)m-1$. \begin{defn} \emph{ \cite{la_wa} If the characteristic roots satisfy the conditions $|\lambda_{i}|\leqslant 1, \; i=0(1)m-1$ for all $s=\theta h$, then we say that the method is} unconditionally stable. 
\end{defn} \begin{defn} \label{sta_int} \emph{ \cite{la_wa} If the characteristic roots satisfy the conditions $\lambda_1=e^{I\,\phi(s)}$, $\lambda_2=e^{-I\,\phi(s)}$ and $|\lambda_{i}|\leqslant 1, \; i=3(1)m-1$ for all $s<s_{0}$, where $s=\theta h$ and $\phi(s)$ is a real function of $s$, then we say that the method has interval of periodicity $(0,s_{0}^2)$.} \end{defn} \begin{defn} \label{p_stability} \emph{ \cite{la_wa} Method \eqref{multistep_definition} is called P-stable if its \emph{interval of periodicity} is $(0,\infty)$.} \end{defn} \section{Construction of the new optimized multistep methods} \label{Construction} We consider the multistep symmetric method of Quinlan-Tremaine \cite{qt8}, with eight steps and eighth algebraic order: \begin{equation} \begin{array}{c} \label{table_qt8} y_{{4}} = -y_{{-4}} -a_{{3}}(y_{{3}}+y_{{-3}}) -a_{{2}}(y_{{2}}+y_{{-2}}) -a_{{1}}(y_{{1}}+y_{{-1}})\\ +{h}^{2}\left(b_{{3}}(f_{{3}}+f_{{-3}}) +b_{{2}}(f_{{2}}+f_{{-2}}) +b_{{1}}(f_{{1}}+f_{{-1}}) +b_{{0}}f_{{0}}\right) \end{array} \end{equation} \noindent where \begin{equation} \begin{array}{l} a_{3}=-2, \qquad a_{2}=2, \qquad a_{1}=-1, \\ \displaystyle b_{3}=\frac{17671}{12096}, \qquad b_{2}=-\frac{23622}{12096}, \qquad b_{1}=\frac{61449}{12096}, \qquad b_{0}=-\frac{50516}{12096}, \\ y_{i} = y(x+ih) \mbox{ and } f_{i} = f(x+ih,y(x+ih)) \end{array} \end{equation} We also consider the optimized method with zero phase-lag, based on the above one, constructed by Panopoulos, Anastassi and Simos in \cite{panopoulos_match}. 
The coefficients are given below: \begin{equation} \label{meth_PL_0} \begin{array}{l} \displaystyle b_{{0}}=-20\,b_{{3}}+{\frac {601}{24}}, \qquad b_{{2}}=-6\,b_{{3}}+{ \frac {109}{16}}, \qquad b_{{1}}=15\,b_{{3}}-{\frac{101}{6}}\\ \displaystyle \nonumber b_{3} = {\frac {1}{96}}\,{\frac {C}{D}}, \qquad \mbox{where}\\ \nonumber C=-192\, \left( \cos \left( v \right) \right) ^{4}+192\, \left( \cos \left( v \right) \right) ^{3}+ \left( 96-327\,{v}^{2} \right) \left( \cos \left( v \right) \right) ^{2}\\ \nonumber + \left( - 120+404\,{v}^{2} \right) \cos \left( v \right) -137\,{v}^{2}+24\\ D={{v}^{ 2} \left( \cos \left( v \right) -1 \right) ^{3}}, \end{array} \end{equation} \noindent where $v=\omega h$ and the $a_{i}$ coefficients remain the same. The Taylor series expansions of the coefficients are: \bc \displaystyle b_{{0}}=-{\frac {12629}{3024}}+{\frac {45767}{36288}}\,{v}^{2}-{\frac{164627}{2395008}}\,{v}^{4}+{\frac {520367}{792529920}}\,{v}^{6} \\ \displaystyle -{\frac {76873}{4483454976}}\,{v}^{8}+{\frac {9190171}{160059342643200}}\,{v}^{10}+{\frac {6662921}{1703031405723648}}\,{v}^{12}+\ldots\\ \\ \displaystyle b_{{1}}={\frac {20483}{4032}}-{\frac {45767}{48384}}\,{v}^{2}+{\frac {164627}{3193344}}\,{v}^{4}-{\frac {520367}{1056706560}}\,{v}^{6} \\ \displaystyle +{\frac {76873}{5977939968}}\,{v}^{8}-{\frac {9190171}{213412456857600}}\,{v}^{10}-{\frac {6662921}{2270708540964864}}\,{v}^{12}-\ldots\\ \\ \displaystyle b_{{2}}=-{\frac {3937}{2016}}+{\frac {45767}{120960}}\,{v}^{2}-{\frac{164627}{7983360}}\,{v}^{4}+{\frac {520367}{2641766400}}\,{v}^{6} \\ \displaystyle -{\frac {76873}{14944849920}}\,{v}^{8}+{\frac {9190171}{533531142144000}}\,{v}^{10}+{\frac {6662921}{5676771352412160}}\,{v}^{12}+\ldots\\ \\ \displaystyle b_{{3}}={\frac {17671}{12096}}-{\frac {45767}{725760}}\,{v}^{2}+{\frac {164627}{47900160}}\,{v}^{4}-{\frac {520367}{15850598400}}\,{v}^{6} \\ \displaystyle +{\frac {76873}{89669099520}}\,{v}^{8}-{\frac {9190171}{3201186852864000}}\,{v}^{10}-{\frac 
{6662921}{34060628114472960}}\,{v}^{12}-\ldots \end{array}\end{equation*} We want to produce three new methods that, apart from zero phase-lag, also have the first $r$ derivatives of the phase-lag equal to zero, where $r=\{1,2,3\}$. In particular, the three new methods must satisfy the following conditions: \begin{itemize} \item First method: $\{PL=0,PL'=0\}$ \item Second method: $\{PL=0,PL'=0,PL''=0\}$ \item Third method: $\{PL=0,PL'=0,PL''=0,PL'''=0\}$ \end{itemize} Since we have four free coefficients $b_i$, $i=\{0,1,2,3\}$ (the $a_{i}$ remain the same), the remaining coefficients of each method are determined by the algebraic order conditions. \subsection{First optimized method with zero $PL$ and $PL'$} The first method must satisfy the conditions $\{PL=0,PL'=0\}$, thus we need two coefficients to be determined by the maximum algebraic order. We use formula \eqref{phl_multi_defn} to compute the phase-lag and then its first derivative with respect to $v$, where $v=\omega\,h$, $\omega$ is the frequency and $h$ is the step length used: \bc PL = \big(96\, \left( \cos \left( v \right) \right) ^{4}+ \left( -96+48\,{v}^{2}b_{{3}} \right) \left( \cos \left( v \right) \right) ^{3}+\\ \left( -48+24\,{v}^{2}b_{{2}} \right) \left( \cos \left( v \right) \right) ^{2} + \left( 60+ \left( 125-144\,b_{{3}}-48\,b_{{2}} \right) {v}^{2} \right) \cos \left( v \right)\\ -12+ \left( 24\,b_{{2}}-95+96\,b_{{3}} \right) {v}^{2}\big)\\ /\left({60+125\,{v}^{2}}\right)\\ \\ \displaystyle PL' = \frac{1}{5}\, \Big( -4800\,v \left( \cos \left( v \right) \right) ^{4}+ \big( 1152\,b_{{3}}v-9600\,\sin \left( v \right) {v}^{2}+4800\,v\\ \displaystyle -4608\,\sin \left( v \right) \big) \left( \cos \left( v \right) \right) ^{3}+ \left( -3600\, \left( {\frac {12}{25}}+{v}^{2} \right) \left( -2+{v}^{2}b_{{3}} \right) \sin \left( v \right)\right.\\ \displaystyle \left.
+ \left( 576\,b_{{2}}+2400 \right) v \right) \left( \cos \left( v \right) \right)^{2}+ \left( -1200\, \left( {\frac {12}{25}}+{v}^{2} \right) \left( {v}^{2}b_{{2}}-2 \right) \sin \left( v \right)\right.\\ \displaystyle \left. -3456\,v \left( b_{{3}}+1/3\,b_{{2}} \right) \right) \cos \left( v \right)\\ \displaystyle +3600\, \left( {\frac {12}{25}}+{v}^{2} \right) \left( -{\frac {5}{12}}+ \left( -{\frac {125}{144}}+b_{{3}}+\frac{1}{3}\,b_{{2}} \right) {v}^{2} \right) \sin \left( v \right)\\ +2304\,v \left( -{\frac {35}{48}}+b_{{3}}+1/4\,b_{{2}} \right) \Big) \left( 12+25\,{v}^{2} \right) ^{-2} \end{array}\end{equation*} The four equations to be solved are: $$ PL=0, \quad PL'=0, \quad b_0 = -{\frac {95}{6}}+16\,b_{{3}}+6\,b_{{2}}, \quad b_1 = {\frac {125}{12}}-9\,b_{{3}}-4\,b_{{2}} $$ \noindent and the coefficients are given below: \begin{equation} \label{coeff_meth_phl_deriv_1} \begin{array}{l} \displaystyle b_0=-\frac{b_{0,num}}{12 D}, \quad b_1=\frac{b_{1,num}}{48 D}, \quad b_2=-\frac{b_{2,num}}{24 D}, \quad b_3=\frac{b_{3,num}}{48 D} ,\\ \mbox{where } D = {\left(\left(\cos\left(v\right)\right)^{4}-2\,\left(\cos \left(v\right)\right)^{3}+2\,\cos\left(v\right)-1\right){v }^{3}}\;\;\; \mbox{and} \end{array} \end{equation} \bc b_{{0,num}}=-288\,\left(\cos\left(v\right)\right)^{ 6}v+576\,\sin\left(v\right)\left(\cos\left(v\right)\right) ^{5}+192\,\left(\cos\left(v\right)\right)^{5}v\\ -192\,\sin \left(v\right)\left(\cos\left(v\right)\right)^{4}+190\, \left(\cos\left(v\right)\right)^{4}{v}^{3}+720\,\left(\cos \left(v\right)\right)^{4}v\\ -120\,\left(\cos\left(v\right) \right)^{3}v-672\,\sin\left(v\right)\left(\cos\left(v \right)\right)^{3}+370\,\left(\cos\left(v\right)\right)^{3 }{v}^{3}\\ +168\,\sin\left(v\right)\left(\cos\left(v\right) \right)^{2}-540\,\left(\cos\left(v\right)\right)^{2}v+145\, \left(\cos\left(v\right)\right)^{2}{v}^{3}\\ +168\,\cos\left(v \right)\sin\left(v\right)-70\,\cos\left(v\right){v}^{3}-72\, \cos\left(v\right)v+108\,v\\ -48\,\sin\left(v\right) 
-35\,{v}^{3} \end{array}\end{equation*} \bc b_{{1,num}}=-768\,\left(\cos\left(v\right)\right)^{6}v+1536\,\sin\left(v\right)\left(\cos\left(v\right)\right)^{5}+192\,\left(\cos\left(v\right)\right)^{5}v\\ +500\,\left(\cos\left(v\right)\right)^{4}{v}^{3}-192\,\sin\left(v\right)\left(\cos\left(v\right)\right)^{4}+2400\,\left(\cos\left(v\right)\right)^{4}v\\ +1000\,\left(\cos\left(v\right)\right)^{3}{v}^{3}-2112\,\sin\left(v\right)\left(\cos\left(v\right)\right)^{3}-1980\,\left(\cos\left(v\right)\right)^{2}v\\ +595\,\left(\cos\left(v\right)\right)^{2}{v}^{3}+288\,\sin\left(v\right)\left(\cos\left(v\right)\right)^{2}-100\,\cos\left(v\right){v}^{3}\\ +648\,\cos\left(v\right)\sin\left(v\right)-192\,\cos\left(v\right)v+348\,v-195\,{v}^{3}-168\,\sin\left(v\right) \end{array}\end{equation*} \bc b_{{2,num}}=-96\,\left(\cos\left(v\right)\right)^{6}v+192\,\sin\left(v\right)\left(\cos\left(v\right)\right)^{5}-192\,\left(\cos\left(v\right)\right)^{5}v\\ +192\,\sin\left(v\right)\left(\cos\left(v\right)\right)^{4}+624\,\left(\cos\left(v\right)\right)^{4}v+216\,\left(\cos\left(v\right)\right)^{3}v\\ -480\,\sin\left(v\right)\left(\cos\left(v\right)\right)^{3}+250\,\left(\cos\left(v\right)\right)^{3}{v}^{3}-612\,\left(\cos\left(v\right)\right)^{2}v\\ +215\,\left(\cos\left(v\right)\right)^{2}{v}^{3}-72\,\sin\left(v\right)\left(\cos\left(v\right)\right)^{2}-70\,\cos\left(v\right){v}^{3}\\ +216\,\cos\left(v\right)\sin\left(v\right)-24\,\cos\left(v\right)v+84\,v-48\,\sin\left(v\right)-35\,{v}^{3} \end{array}\end{equation*} \bc \displaystyle b_{{3,num}}=-192\,\left(\cos\left(v\right)\right)^{5}v+192\,\sin\left(v\right)\left(\cos\left(v\right)\right)^{4}+288\,\left(\cos\left(v\right)\right)^{4}v\\ +192\,\left(\cos\left(v\right)\right)^{3}v-192\,\sin\left(v\right)\left(\cos\left(v\right)\right)^{3}-96\,\sin\left(v\right)\left(\cos\left(v\right)\right)^{2}\\ -324\,\left(\cos\left(v\right)\right)^{2}v+125\,\left(\cos\left(v\right)\right)^{
2}{v}^{3}+120\,\cos\left(v\right)\sin\left(v\right)+60\,\cos\left(v\right){v}^{3}\\ -24\,\sin\left(v\right)+36\,v-65\,{v}^{3} \end{array}\end{equation*} The Taylor series expansions, used when $v \rightarrow 0$, are given below: \bc \displaystyle b_{{0}}=-{\frac{12629}{3024}}+{\frac{45767}{18144}}\,{v}^{2}-{\frac{11483491}{23950080}}\,{v}^{4}+{\frac{112258001}{2615348736}}\,{v}^{6} \\ \displaystyle -{\frac{1703481341}{784604620800}}\,{v}^{8}+{\frac{5614773343}{80029671321600}}\,{v}^{10}-{\frac{10940565121}{6307523724902400}}\,{v}^{12}+\ldots\\ \\ \displaystyle b_{{1}}={\frac{20483}{4032}}-{\frac{45767}{24192}}\,{v}^{2}+{\frac{10476617}{31933440}}\,{v}^{4}-{\frac{45578707}{1585059840}}\,{v}^{6} \\ \displaystyle +{\frac{1514526707}{1046139494400}}\,{v}^{8}-{\frac{5016343559}{106706228428800}}\,{v}^{10}+{\frac{19742264573}{17466988776652800}}\,{v}^{12}-\ldots\\ \\ \displaystyle b_{{2}}=-{\frac{3937}{2016}}+{\frac{45767}{60480}}\,{v}^{2}-{\frac{1491199}{15966720}}\,{v}^{4}+{\frac{321593093}{43589145600}}\,{v}^{6} \\ \displaystyle -{\frac{189532561}{523069747200}}\,{v}^{8}+{\frac{460150601}{38109367296000}}\,{v}^{10}-{\frac{28082396599}{113535427048243200}}\,{v}^{12}+\ldots\\ \\ \displaystyle b_{{3}}={\frac{17671}{12096}}-{\frac{45767}{362880}}\,{v}^{2}+{\frac{96865}{19160064}}\,{v}^{4}-{\frac{21971953}{261534873600}}\,{v}^{6} \\ \displaystyle +{\frac{82561}{448345497600}}\,{v}^{8}-{\frac{17608099}{123122571264000}}\,{v}^{10}-{\frac{1184824691}{75690284698828800}}\,{v}^{12}-\ldots \end{array}\end{equation*} \subsection{Second optimized method with zero $PL$, $PL'$ and $PL''$} The second method must satisfy the conditions $\{PL=0, PL'=0, PL''=0\}$, thus we need one coefficient to be determined by the maximum algebraic order.
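The direct formula \eqref{phl_multi_defn} also lends itself to numerical evaluation. As an illustrative aside (a minimal Python sketch under stated assumptions; the function name \texttt{phase\_lag} is ours), the right-hand side of \eqref{phl_multi_defn} can be evaluated for the classical method \eqref{table_qt8}: applying it to $y''=-\omega^2 y$ gives $A_4(v)=1$, $A_j(v)=a_j+v^2 b_j$ for $j=3,2,1$, and $A_0(v)=v^2 b_0$. Since the classical method has phase-lag of order eight, halving $v$ should scale the computed ratio by roughly $2^{-10}$:

```python
from math import cos, log

# Coefficients of the classical eight-step method (algebraic order eight).
a = {3: -2.0, 2: 2.0, 1: -1.0}
b = {3: 17671/12096, 2: -23622/12096, 1: 61449/12096, 0: -50516/12096}

def phase_lag(v):
    """Right-hand side of the direct phase-lag formula for k = 4.

    For y'' = -w^2 y the stability polynomials are A_4(v) = 1,
    A_j(v) = a_j + v^2*b_j for j = 3, 2, 1, and A_0(v) = v^2*b_0,
    where v = w*h.
    """
    A = {4: 1.0,
         3: a[3] + v * v * b[3],
         2: a[2] + v * v * b[2],
         1: a[1] + v * v * b[1],
         0: v * v * b[0]}
    num = A[0] + sum(2.0 * A[j] * cos(j * v) for j in (1, 2, 3, 4))
    den = sum(2.0 * j * j * A[j] for j in (1, 2, 3, 4))
    return num / den

# Estimated exponent of v in the leading term; close to 10, since the
# phase-lag behaves like -c*v^(q+2) with phase-lag order q = 8.
print(log(phase_lag(0.2) / phase_lag(0.1)) / log(2.0))
```

The same routine, with the closed-form coefficients $b_i(v)$ of the optimized methods substituted for the constants above, can be used to check that their phase-lags vanish to the advertised order.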
We use formula \eqref{phl_multi_defn} to compute the phase-lag and then its first and second derivative with respect to $v$: \bc \displaystyle PL = \big(16\,\left(\cos\left(v\right)\right)^{4}+\left(8\,b_{{3}}{v}^{2}-16\right)\left(\cos\left(v\right)\right)^{3}+\left(4\,b_{{2}}{v}^{2}-8\right)\left(\cos\left(v\right)\right)^{2}\\ +\left(10+\left(-6\,b_{{3}}+2\,b_{{1}}\right){v}^{2}\right)\cos\left(v\right)-2+\left(-4\,b_{{2}}-2\,b_{{1}}-2\,b_{{3}}+5\right){v}^{2}\big)\\ /\left(10+\left(18\,b_{{3}}+8\,b_{{2}}+2\,b_{{1}}\right){v}^{2}\right) \end{array}\end{equation*} \bc \displaystyle PL' = \left(-16\,v\left(9\,b_{{3}}+4\,b_{{2}}+b_{{1}}\right)\left(\cos\left(v\right)\right)^{4}+\left(\left(-160+\left(-32\,b_{{1}}-128\,b_{{2}}\right.\right.\right.\right.\\ \left.\left.\left.\left.-288\,b_{{3}}\right){v}^{2}\right)\sin\left(v\right)+16\,\left(\frac{23}{2}\,b_{{3}}+4\,b_{{2}}+b_{{1}}\right)v\right)\left(\cos\left(v\right)\right)^{3}+\right.\\ \left.\left(-12\,\left(5+\left(9\,b_{{3}}+4\,b_{{2}}+b_{{1}}\right){v}^{2}\right)\left(-2+b_{{3}}{v}^{2}\right)\sin\left(v\right)\right.\right.\\ \left.\left.+8\,\left(9\,b_{{3}}+\frac{13}{2}\,b_{{2}}+b_{{1}}\right)v\right)\left(\cos\left(v\right)\right)^{2}+\left(-4\,\left(b_{{2}}{v}^{2}-2\right)\left(5+\right.\right.\right.\\ \left.\left.\left.\left(9\,b_{{3}}+4\,b_{{2}}+b_{{1}}\right){v}^{2}\right)\sin\left(v\right)-40\,v\left(3\,b_{{3}}+b_{{2}}\right)\right)\cos\left(v\right)-\right.\\ \left.\left(5+\left(-3\,b_{{3}}+b_{{1}}\right){v}^{2}\right)\left(5+\left(9\,b_{{3}}+4\,b_{{2}}+b_{{1}}\right){v}^{2}\right)\sin\left(v\right)\right.\\ \left.-8\,v\left(\frac{3}{2}\,b_{{2}}-b_{{3}}+b_{{1}}-{\frac{25}{8}}\right)\right)\\ /\left(5+\left(9\,b_{{3}}+4\,b_{{2}}+b_{{1}}\right){v}^{2}\right)^{2} \end{array}\end{equation*} \bc PL'' = {\frac{1}{729}}\,\left(\left(-10368\,\left(b_{{3}}+\frac{4}{9}\,b_{{2}}+\frac{1}{9}\,b_{{1}}\right)^{2}{v}^{4}\right.\right.\\ \left.\left.+3888\,\left(\frac{1}{9}\,b_{{1}}-{\frac{
80}{27}}+b_{{3}}+\frac{4}{9}\,b_{{2}}\right)\left( b_{{3}}+\frac{4}{9}\,b_{{2}}+\frac{1}{9} \,b_{{1}}\right) {v}^{2}-3200-720\,b_{{3}}\right.\right.\\ \left.\left.-320\,b_{{2}}-80\,b_{{1}} \right)\left(\cos\left(v\right)\right)^{4}+\left(10368\, \left(\frac{5}{9}+\left(b_{{3}}+\frac{4}{9}\,b_{{2}}+\frac{1}{9}\,b_{{1}}\right) {v}^{2} \right)v\right.\right.\\ \left.\left.\left(b_{{3}}+\frac{4}{9}\,b_{{2}}+\frac{1}{9}\,b_{{1}}\right)\sin \left(v\right)-2916\,\left(b_{{3}}+\frac{4}{9}\,b_{{2}}+\frac{1}{9}\,b_{{1}} \right)^{2}b_{{3}}{v}^{6}\right.\right.\\ \left.\left.+2592\,\left( b_{{3}}+\frac{4}{9}\,b_{{2}}+\frac{1}{9}\,b_ {{1}}\right)\left(b_{{2}}+b_{{3}}+\frac{1}{4}\,b_{{1}}\right){v}^{4}+ \left(-768\,{b_{{2}}}^{2}\right.\right.\right.\\ \left.\left.\left.+\left( 2880-3936\,b_{{3}}-384\,b_{{1}} \right)b_{{2}}-4968\,{b_{{3}}}^{2}+\left(5580-984\,b_{{1}} \right)b_{{3}}\right.\right.\right.\\ \left.\left.\left.+720\,b_{{1}}-48\,{b_{{1}}}^{2}\right) {v}^{2}+1800+ 920\,b_{{3}}+320\,b_{{2}}+80\,b_{{1}}\right)\left(\cos\left(v \right)\right)^{3}\right.\\ \left.+\left(-9936\,\left(b_{{3}}+\frac{2}{23}\,b_{{1}}+{ \frac{8}{23}}\,b_{{2}}\right)\left(\frac{5}{9}+\left( b_{{3}}+\frac{4}{9}\,b_{{2 }}+\frac{1}{9}\,b_{{1}}\right){v}^{2}\right)v\sin\left(v\right)\right.\right.\\ \left.\left.-648\,b _{{2}}\left(b_{{3}}+\frac{4}{9}\,b_{{2}}+\frac{1}{9}\,b_{{1}}\right) ^{2}{v}^{6}+ 9072\,\left({\frac{23}{63}}\,b_{{2}}+b_{{3}}+\frac{1}{9}\,b_{{1}} \right)\right.\right.\\ \left.\left.\left(b_{{3}}+\frac{4}{9}\,b_{{2}}+\frac{1}{9}\,b_{{1}}\right){v}^{4}+\left( -624 \,{b_{{2}}}^{2}+\left(4280-2268\,b_{{3}}-252\,b_{{1}}\right) b_{{2}}\right.\right.\right.\\ \left.\left.\left.-1944\,\left(b_{{3}}+\frac{1}{9}\,b_{{1}}\right)\left( b_{{3}}+\frac{1}{9}\,b_{{ 1}}-{\frac{140}{27}}\right)\right) {v}^{2}+2800+260\,b_{{2}}+40\,b _{{1}}\right.\right.\\ \left.\left.+360\,b_{{3}}\right)\left(\cos\left(v\right)\right)^{2 }+\left(-2592\,\left(b_{{3}}+\frac{1}{9}\,b_{{1}}+{\frac {13}{18}}\,b_{{2} 
}\right)\left(\frac{5}{9}\right.\right.\right.\\ \left.\left.\left.+\left(b_{{3}}+\frac{4}{9}\,b_{{2}}+\frac{1}{9}\,b_{{1}} \right){v}^{2}\right)v\sin\left(v\right)+2187\,\left(b_{{3}} -\frac{1}{27}\,b_{{1}}\right)\left(b_{{3}}+\frac{4}{9}\,b_{{2}}+\frac{1}{9}\,b_{{1}} \right)^{2}{v}^{6}\right.\right.\\ \left.\left.-1863\,\left({\frac {212}{207}}\,b_{{2}}+{\frac {7}{23}}\,b_{{1}}+b_{{3}}\right)\left( b_{{3}}+\frac{4}{9}\,b_{{2}}+\frac{1}{9}\,b_ {{1}}\right){v}^{4}+\left(480\,{b_{{2}}}^{2}\right.\right.\right.\\ \left.\left.\left.+\left( -2120+120\,b_ {{1}}+2520\,b_{{3}}\right)b_{{2}}+3240\,{b_{{3}}}^{2}+\left( -4095+ 360\,b_{{1}}\right)b_{{3}}\right.\right.\right.\\ \left.\left.\left.-555\,b_{{1}}\right) {v}^{2}-1325-600\,b_ {{3}}-200\,b_{{2}}\right)\cos\left(v\right)+2160\,\left(\frac{5}{9}+ \left(b_{{3}}+\frac{4}{9}\,b_{{2}}\right.\right.\right.\\ \left.\left.\left.+\frac{1}{9}\,b_{{1}}\right){v}^{2}\right)v \left(\frac{1}{3}\,b_{{2}}+b_{{3}}\right)\sin\left(v\right) +324\,b_{{2 }}\left(b_{{3}}+\frac{4}{9}\,b_{{2}}+\frac{1}{9}\,b_{{1}}\right) ^{2}{v}^{6}\right.\\ \left.-648\, \left(-\frac{1}{9}\,b_{{2}}+b_{{3}}+\frac{1}{9}\,b_{{1}}\right)\left( b_{{3}}+\frac{4}{9} \,b_{{2}}+\frac{1}{9}\,b_{{1}}\right){v}^{4}+\left( 144\,{b_{{2}}}^{2}\right.\right.\\ \left.\left.+\left(-520+228\,b_{{3}}+132\,b_{{1}}\right)b_{{2}}-216\,\left( b_ {{3}}+\frac{1}{9}\,b_{{1}}\right)\left(b_{{3}}-b_{{1}}+{\frac {155}{24}} \right)\right){v}^{2}\right.\\ \left.-75-60\,b_{{2}}-40\,b_{{1}}+40\,b_{{3}} \right)\\ /\left(\frac{5}{9}+\left(b_{{3}}+\frac{4}{9}\,b_{{2}}+\frac{1}{9}\,b_{{1}} \right){v}^{2}\right)^{3} \end{array}\end{equation*} The four equations to be solved are: $$ PL=0, \quad PL'=0, \quad PL''=0, \quad b_0 = 5-2\,b_{{2}}-2\,b_{{1}}-2\,b_{{3}} $$ \noindent and the coefficients are given below: \begin{equation} \label{coeff_meth_phl_deriv_2} \begin{array}{l} \displaystyle b_0=\frac{b_{0,num}}{2 D}, \quad b_1=-\frac{b_{1,num}}{8 D}, \quad b_2=\frac{b_{2,num}}{4 D}, \quad 
b_3=-\frac{b_{3,num}}{8 D} ,\\ \mbox{where } D = {{v}^{4} \left(\sin\left(v\right)\right)^{4}\left(\cos\left(v \right)-1\right)}\;\;\; \mbox{and} \end{array} \end{equation} \bc b_{{0,num}}=-6+25\,\left(\cos\left(v\right)\right) ^{3}{v}^{4}+16\,\left(\cos\left(v\right)\right)^{7}{v}^{2}-120\, \left(\cos\left(v\right)\right)^{4}\\ -32\,\sin\left(v\right) v\left(\cos\left(v\right)\right)^{6}-96\,\sin\left(v \right)v\left(\cos\left(v\right)\right)^{7}+32\,\left(\cos \left(v\right)\right)^{8}{v}^{2}\\ -36\,\cos\left(v\right){v}^{ 2}+15\,\cos\left(v\right){v}^{4}+20\,\left(\cos\left(v \right)\right)^{4}{v}^{4}-96\,\left(\cos\left(v\right) \right)^{8}\\ +30\,{v}^{4}\left(\cos\left(v\right)\right)^{2}+ 20\,\sin\left(v\right)v-12\,{v}^{2}+10\,\left(\cos\left(v \right)\right)^{5}{v}^{4}\\ +160\,\sin\left(v\right)v\left(\cos \left(v\right)\right)^{5}+140\,\sin\left(v\right)v\left( \cos\left(v\right)\right)^{4}\\ -60\,\sin\left(v\right)v \left(\cos\left(v\right)\right)^{3}-134\,\sin\left(v \right)v\left(\cos\left(v\right)\right)^{2}+2\,\sin\left(v \right)v\cos\left(v\right)\\ +18\,\cos\left(v\right)+30\, \left(\cos\left(v\right)\right)^{2}-54\,\left(\cos\left(v \right)\right)^{3}+192\,\left(\cos\left(v\right)\right)^{6}\\ +36\,\left(\cos\left(v\right)\right)^{5}+24\,\left(\cos \left(v\right)\right)^{2}{v}^{2}-64\,\left(\cos\left(v \right)\right)^{6}{v}^{2}+88\,\left(\cos\left(v\right) \right)^{3}{v}^{2}\\ -68\,\left(\cos\left(v\right)\right)^{5}{v }^{2}+20\,\left(\cos\left(v\right)\right)^{4}{v}^{2} \end{array}\end{equation*} \bc b_{{1,num}}=-18-192\,\left(\cos\left(v\right) \right) ^{7}+120\,\left(\cos\left(v\right)\right)^{3}{v}^{4}+128\, \left(\cos\left(v\right)\right)^{7}{v}^{2}\\ -480\,\left(\cos \left(v\right)\right)^{4}-320\,\sin\left(v\right)v\left( \cos\left(v\right)\right)^{6}-192\,\sin\left(v\right)v \left(\cos\left(v\right)\right)^{7}\\ +64\,\left(\cos\left(v \right)\right)^{8}{v}^{2}-104\,\cos\left(v\right){v}^{2}+30\, \cos\left(v\right){v}^{4}+15\,{v}^{4}\\ 
+60\,\left(\cos\left(v \right)\right)^{4}{v}^{4}-192\,\left(\cos\left(v\right) \right)^{8}+75\,{v}^{4}\left(\cos\left(v\right)\right)^{2}+ 64\,\sin\left(v\right)v\\ -40\,{v}^{2}+496\,\sin\left(v\right)v \left(\cos\left(v\right)\right)^{5}+680\,\sin\left(v \right)v\left(\cos\left(v\right)\right)^{4}\\ -320\,\sin \left(v\right)v\left(\cos\left(v\right)\right)^{3}-418\, \sin\left(v\right)v\left(\cos\left(v\right)\right)^{2}+10 \,\sin\left(v\right)v\cos\left(v\right)\\ +42\,\cos\left(v \right)+162\,\left(\cos\left(v\right)\right)^{2}-258\, \left(\cos\left(v\right)\right)^{3}+528\,\left(\cos\left(v \right)\right)^{6}\\ +408\,\left(\cos\left(v\right)\right)^{5 }+32\,\left(\cos\left(v\right)\right)^{2}{v}^{2}-176\,\left( \cos\left(v\right)\right)^{6}{v}^{2}+336\,\left(\cos\left(v \right)\right)^{3}{v}^{2}\\ -360\,\left(\cos\left(v\right) \right)^{5}{v}^{2}+120\,\left(\cos\left(v\right)\right)^{4}{ v}^{2} \end{array}\end{equation*} \bc b_{{2,num}}=-6-96\,\left(\cos\left(v\right)\right) ^{7 }+15\,\left(\cos\left(v\right)\right)^{3}{v}^{4}+48\,\left( \cos\left(v\right)\right)^{7}{v}^{2}\\ -84\,\left(\cos\left(v \right)\right)^{4}-128\,\sin\left(v\right)v\left(\cos \left(v\right)\right)^{6}-40\,\cos\left(v\right){v}^{2}+15\, \cos\left(v\right){v}^{4}\\ +30\,{v}^{4}\left(\cos\left(v \right)\right)^{2}+20\,\sin\left(v\right) v-8\,{v}^{2}+48\,\sin \left(v\right)v\left(\cos\left(v\right)\right)^{5}\\ +240\, \sin\left(v\right)v\left(\cos\left(v\right)\right)^{4}-48 \,\sin\left(v\right)v\left(\cos\left(v\right)\right)^{3}\\ -126\,\sin\left(v\right)v\left(\cos\left(v\right)\right)^{2 }-6\,\sin\left(v\right)v\cos\left(v\right)+18\,\cos\left(v \right)+42\,\left(\cos\left(v\right)\right)^{2}\\ -114\,\left(\cos\left(v\right)\right)^{3}+48\,\left(\cos\left(v \right)\right)^{6}+192\,\left(\cos\left(v\right)\right)^{5 }+16\,\left(\cos\left(v\right)\right)^{2}{v}^{2}\\ +128\,\left(\cos\left(v\right)\right)^{3}{v}^{2}-136\,\left(\cos\left(v 
\right)\right)^{5}{v}^{2}-8\,\left(\cos\left(v\right) \right)^{4}{v}^{2} \end{array}\end{equation*} \bc b_{{3,num}}=48\,\left(\cos\left(v\right) \right)^{6}{ v}^{2}-48\,\left(\cos\left(v\right)\right)^{6}+48\,\left( \cos\left(v\right)\right)^{5}\\ -80\,\sin\left(v\right)v \left(\cos\left(v\right)\right)^{5}-48\,\left(\cos\left(v \right)\right)^{5}{v}^{2}+80\,\sin\left(v\right)v\left(\cos \left(v\right)\right)^{4}\\ -96\,\left(\cos\left(v\right) \right)^{4}{v}^{2}+72\,\left(\cos\left(v\right)\right)^{4}+ 96\,\left(\cos\left(v\right)\right)^{3}{v}^{2}-78\,\left( \cos\left(v\right)\right)^{3}\\ +104\,\sin\left(v\right)v \left(\cos\left(v\right)\right)^{3}-18\,\left(\cos\left(v \right)\right)^{2}-102\,\sin\left(v\right)v\left(\cos \left(v\right)\right)^{2}\\ +48\,\left(\cos\left(v\right) \right)^{2}{v}^{2}+5\,{v}^{4}\left(\cos\left(v\right)\right) ^{2}-18\,\sin\left(v\right)v\cos\left(v\right)-48\,\cos \left(v\right){v}^{2}\\ +30\,\cos\left(v\right)+10\,\cos\left(v \right){v}^{4}-6+16\,\sin\left(v\right)v+5\,{v}^{4} \end{array}\end{equation*} The Taylor series expansions of the coefficients are given below: \bc \displaystyle b_{{0}}=-{\frac {12629}{3024}}+{\frac {45767}{12096}}\,{v}^{2}-{\frac{9837221}{7983360}}\,{v}^{4}+{\frac {153204313}{653837184}}\,{v}^{6} \\ \displaystyle -{\frac {2356782689}{87178291200}}\,{v}^{8}+{\frac {20347993339}{9700566220800}}\,{v}^{10}-{\frac {8744186458121}{77410518441984000}}\,{v}^{12} +\ldots\\ \\ \displaystyle b_{{1}}={\frac {20483}{4032}}-{\frac {45767}{16128}}\,{v}^{2}+{\frac {2943449}{3548160}}\,{v}^{4}-{\frac {107557349}{792529920}}\,{v}^{6} \\ \displaystyle +{\frac {5074066909}{348713164800}}\,{v}^{8}-{\frac {10190684747}{9484998082560}}\,{v}^{10}+{\frac {5994017812967}{103214024589312000}}\,{v}^{12}-\ldots\\ \\ \displaystyle b_{{2}}=-{\frac {3937}{2016}}+{\frac {45767}{40320}}\,{v}^{2}-{\frac {8607}{39424}}\,{v}^{4}+{\frac {51408821}{2724321600}}\,{v}^{6} \\ \displaystyle -{\frac{35318011}{34871316480}}\,{v}^{8}+{\frac 
{3348191339}{118562476032000}}\,{v}^{10}-{\frac {56104711163}{43667471941632000}}\,{v}^{12}+\ldots\\ \\ \displaystyle b_{{3}}={\frac {17671}{12096}}-{\frac {45767}{241920}}\,{v}^{2}+{\frac {22153}{4561920}}\,{v}^{4}-{\frac {41092123}{130767436800}}\,{v}^{6} \\ \displaystyle -{\frac {7321421}{348713164800}}\,{v}^{8}-{\frac {5642643317}{2134124568576000}}\,{v}^{10}-{\frac {210863655707}{681212562289459200}}\,{v}^{12}-\ldots \end{array}\end{equation*} \subsection{Third optimized method with zero $PL$, $PL'$, $PL''$ and $PL'''$} All four free coefficients of the third method will be determined by the conditions $\{PL=0, PL'=0, PL''=0, PL'''=0\}$. We use formula \eqref{phl_multi_defn} to compute the phase-lag and then its first, second and third derivative with respect to $v$: \bc \displaystyle PL=\left(16\,\left(\cos\left(v\right)\right)^{4}+\left(-16+8\,b_{{3}}{v}^{2}\right)\left(\cos\left(v\right)\right)^{3}+\left(4\,b_{{2}}{v}^{2}-8\right)\left(\cos\left(v\right)\right)^{2}\right.\\ \left.+\left(10+\left(2\,b_{{1}}-6\,b_{{3}}\right){v}^{2}\right)\cos\left(v\right)-2+\left(-2\,b_{{2}}+b_{{0}}\right){v}^{2}\right)/\\ \left(10+\left(18\,b_{{3}}+8\,b_{{2}}+2\,b_{{1}}\right){v}^{2}\right) \end{array}\end{equation*} \bc \displaystyle PL'=\left(-16\,v\left(9\,b_{{3}}+4\,b_{{2}}+b_{{1}}\right)\left(\cos\left(v\right)\right)^{4}+\left(\left(-160+\left(-128\,b_{{2}}\right.\right.\right.\right.\\ \left.\left.\left.\left.-32\,b_{{1}}-288\,b_{{3}}\right){v}^{2}\right)\sin\left(v\right)+16\,\left(4\,b_{{2}}+b_{{1}}+\frac{23}{2}\,b_{{3}}\right)v\right)\left(\cos\left(v\right)\right)^{3}+\right.\\ \left.\left(-12\,\left(-2+b_{{3}}{v}^{2}\right)\left(5+\left(9\,b_{{3}}+4\,b_{{2}}+b_{{1}}\right){v}^{2}\right)\sin\left(v\right)\right.\right.\\ \left.\left.+8\,\left(\frac{13}{2}\,b_{{2}}+9\,b_{{3}}+b_{{1}}\right)v\right)\left(\cos\left(v\right)\right)^{2}+\left(-4\,\left(5+\left(9\,b_{{3}}+4\,b_{{2}}+b_{{1}}\right){v}^{2}\right)\right.\right.\\
\left.\left.\left(-2+b_{{2}}{v}^{2}\right)\sin\left(v\right)-120\,v\left(b_{{3}}+\frac{1}{3}\,b_{{2}}\right)\right)\cos\left(v\right)-\left(5+\left(9\,b_{{3}}+4\,b_{{2}}+b_{{1}}\right){v}^{2}\right)\right.\\ \left.\left(5+\left(b_{{1}}-3\,b_{{3}}\right){v}^{2}\right)\sin\left(v\right)+5\,v\left(b_{{0}}+{\frac{18}{5}}\,b_{{3}}-\frac{2}{5}\,b_{{2}}+\frac{2}{5}\,b_{{1}}\right)\right)\\ /\left(5+\left(9\,b_{{3}}+4\,b_{{2}}+b_{{1}}\right){v}^{2}\right)^{2} \end{array}\end{equation*} \bc PL'' = {\frac{1}{64}}\,\left(\left(-2048\,\left(\frac{9}{4}\,b_{{3}}+b_{{2}}+\frac{1}{4}\,b_{{1}}\right)^{2}{v}^{4}+768\,\left(\frac{1}{4}\,b_{{1}}-{\frac{20}{3}}+b_{{2}}+\frac{9}{4}\,b_{{3}}\right)\right.\right.\\ \left.\left.\left(\frac{9}{4}\,b_{{3}}+b_{{2}}+\frac{1}{4}\,b_{{1}}\right){v}^{2}-320\,b_{{2}}-80\,b_{{1}}-3200-720\,b_{{3}}\right)\left(\cos\left(v\right)\right)^{4}\right.\\ \left.+\left(2048\,\left(\frac{5}{4}+\left(\frac{9}{4}\,b_{{3}}+b_{{2}}+\frac{1}{4}\,b_{{1}}\right){v}^{2}\right)v\left(\frac{9}{4}\,b_{{3}}+b_{{2}}+\frac{1}{4}\,b_{{1}}\right)\sin\left(v\right)\right.\right.\\ \left.\left.-576\,b_{{3}}\left(\frac{9}{4}\,b_{{3}}+b_{{2}}+\frac{1}{4}\,b_{{1}}\right)^{2}{v}^{6}+1152\,\left(\frac{9}{4}\,b_{{3}}+b_{{2}}+\frac{1}{4}\,b_{{1}}\right)\left(b_{{3}}+b_{{2}}+\frac{1}{4}\,b_{{1}}\right){v}^{4}\right.\right.\\ \left.\left.+\left(-768\,{b_{{2}}}^{2}+\left(2880-384\,b_{{1}}-3936\,b_{{3}}\right)b_{{2}}-4968\,{b_{{3}}}^{2}+\left(5580-984\,b_{{1}}\right)b_{{3}}\right.\right.\right.\\ \left.\left.\left.-48\,{b_{{1}}}^{2}+720\,b_{{1}}\right){v}^{2}+1800+920\,b_{{3}}+80\,b_{{1}}+320\,b_{{2}}\right)\left(\cos\left(v\right)\right)^{3}\right.\\ +\left.\left(-1536\,\left(\frac{5}{4}+\left(\frac{9}{4}\,b_{{3}}+b_{{2}}+\frac{1}{4}\,b_{{1}}\right){v}^{2}\right)\left(\frac{1}{4}\,b_{{1}}+{\frac{23}{8}}\,b_{{3}}+b_{{2}}\right)v\sin\left(v\right)\right.\right.\\ \left.\left.-128\,b_{{2}}\left(\frac{9}{4}\,b_{{
3}}+b_{{2}}+\frac{1}{4}\,b_{{1}}\right)^{2}{v}^{6}+1472\,\left({\frac{63} {23}}\,b_{{3}}+{\frac{7}{23}}\,b_{{1}}+b_{{2}}\right)\right.\right.\\ \left.\left.\left(\frac{9}{4}\,b _{{3}}+b_{{2}}+\frac{1}{4}\,b_{{1}}\right){v}^{4}+\left(-624\,{b_{{2}}}^{2 }+\left(-252\,b_{{1}}+4280-2268\,b_{{3}}\right)b_{{2}}\right.\right.\right.\\ \left.\left.\left.-1944\, \left(b_{{3}}+\frac{1}{9}\,b_{{1}}\right)\left(b_{{3}}+\frac{1}{9}\,b_{{1}}-{ \frac{140}{27}}\right)\right){v}^{2}+2800+360\,b_{{3}}+40\,b_{{1} }+260\,b_{{2}}\right)\right.\\ \left.\left(\cos\left(v\right)\right)^{2}+ \left(-832\,\left(\frac{5}{4}+\left(\frac{9}{4}\,b_{{3}}+b_{{2}}+\frac{1}{4}\,b_{{1}} \right){v}^{2}\right)\left(b_{{2}}+{\frac{18}{13}}\,b_{{3}}+\frac{2}{13}\,b_{{1}}\right)v\sin\left(v\right)\right.\right.\\ \left.\left.+432\,\left(b_{{3}}-\frac{1}{27} \,b_{{1}}\right)\left(\frac{9}{4}\,b_{{3}}+b_{{2}}+\frac{1}{4}\,b_{{1}}\right)^{ 2}{v}^{6}-848\,\left({\frac{207}{212}}\,b_{{3}}+b_{{2}}+{\frac{63} {212}}\,b_{{1}}\right)\right.\right.\\ \left.\left.\left(\frac{9}{4}\,b_{{3}}+b_{{2}}+\frac{1}{4}\,b_{{1}} \right){v}^{4}+\left(480\,{b_{{2}}}^{2}+\left(-2120+2520\,b_{{3} }+120\,b_{{1}}\right)b_{{2}}+3240\,{b_{{3}}}^{2}\right.\right.\right.\\ \left.\left.\left.+\left(360\,b_{{1} }-4095\right)b_{{3}}-555\,b_{{1}}\right){v}^{2}-1325-200\,b_{{2}}- 600\,b_{{3}}\right)\cos\left(v\right)\right.\\ \left.+320\,\left(b_{{2}}+3\,b_ {{3}}\right)\left(\frac{5}{4}+\left(\frac{9}{4}\,b_{{3}}+b_{{2}}+\frac{1}{4}\,b_{{1}} \right){v}^{2}\right)v\sin\left(v\right)+64\,b_{{2}}\right.\\ \left.\left(\frac{9}{4}\,b_{{3}}+b_{{2}}+\frac{1}{4}\,b_{{1}}\right)^{2}{v}^{6}+32\,\left(\frac{9}{4}\,b _{{3}}+b_{{2}}+\frac{1}{4}\,b_{{1}}\right)\left(-9\,b_{{3}}+b_{{2}}-b_{{1} }\right){v}^{4}\right.\\ \left.+\left(24\,{b_{{2}}}^{2}+\left(-220-18\,b_{{1}}- 162\,b_{{3}}-60\,b_{{0}}\right)b_{{2}}-486\,\left(b_{{3}}+\frac{1}{9}\,b_{ {1}}\right)\right.\right.\\ \left.\left.\left(b_{{3}}+{\frac{5}{18}}\,b_{{0}}+{\frac{40}{27}} 
+\frac{1}{9}\,b_{{1}}\right)\right){v}^{2}-200+10\,b_{{1}}-10\,b_{{2}}+90 \,b_{{3}}+25\,b_{{0}}\right)\\ /\left(\frac{5}{4}+\left(\frac{9}{4}\,b_{{3}}+b_{{2}}+\frac{1}{4}\,b_{{1}}\right){v}^{2}\right)^{3} \end{array}\end{equation*} \bc PL''' = \left(768\,v\left(\left(9\,b_{{3}}+4\,b_{{2}}+b_{{1}}\right)^{ 2}{v}^{4}-\frac{1}{4}\,\left(9\,b_{{3}}+4\,b_{{2}}+b_{{1}}\right)\right.\right.\\ \left.\left.\left(b _{{1}}-40+9\,b_{{3}}+4\,b_{{2}}\right){v}^{2}+5\,b_{{2}}+\frac{5}{4}\,b_{{1} }+{\frac{45}{4}}\,b_{{3}}+25\right)\right.\\ \left.\left(9\,b_{{3}}+4\,b_{{2}}+b_ {{1}}\right)\left(\cos\left(v\right)\right)^{4}+\left(512 \,\left(5+\left(9\,b_{{3}}+4\,b_{{2}}+b_{{1}}\right){v}^{2} \right)\right.\right.\\ \left.\left.\left(\left(9\,b_{{3}}+4\,b_{{2}}+b_{{1}}\right)^{2}{v} ^{4}-{\frac{9}{8}}\,\left(9\,b_{{3}}+4\,b_{{2}}+b_{{1}}\right) \left(b_{{1}}-{\frac{80}{9}}+9\,b_{{3}}+4\,b_{{2}}\right){v}^{2}\right.\right.\right.\\ \left.\left.\left.+{\frac{135}{8}}\,b_{{3}}+\frac{15}{2}\,b_{{2}}+25+{\frac{15}{8}}\,b_{{1}} \right)\sin\left(v\right)-432\,\left(\left(9\,b_{{3}}+4\,b_{ {2}}+b_{{1}}\right)^{2}{v}^{4}\right.\right.\right.\\ \left.\left.\left.-\frac{4}{9}\,\left(b_{{1}}-{\frac{45}{2}}+ 9\,b_{{3}}+4\,b_{{2}}\right)\left(9\,b_{{3}}+4\,b_{{2}}+b_{{1}} \right){v}^{2}+{\frac{80}{9}}\,b_{{2}}+{\frac{20}{9}}\,b_{{1}}+20 \,b_{{3}}+25\right)\right.\right.\\ \left.\left.v\left(4\,b_{{2}}+b_{{1}}+\frac{23}{2}\,b_{{3}} \right)\right)\left(\cos\left(v\right)\right)^{3}+\left( 108\,\left(b_{{3}}\left(9\,b_{{3}}+4\,b_{{2}}+b_{{1}}\right)^{2} {v}^{6}-2\,\left(4\,b_{{3}}\right.\right.\right.\right.\\ \left.\left.\left.\left.+4\,b_{{2}}+b_{{1}}\right)\left(9\,b_ {{3}}+4\,b_{{2}}+b_{{1}}\right){v}^{4}+\left(414\,{b_{{3}}}^{2}+ \left(328\,b_{{2}}-155+82\,b_{{1}}\right)\right.\right.\right.\right.\\ \left.\left.\left.\left.b_{{3}}+4\,\left(b_{{1} }+4\,b_{{2}}\right)\left(b_{{1}}-5+4\,b_{{2}}\right)\right){v} ^{2}-50-{\frac{80}{3}}\,b_{{2}}-{\frac{20}{3}}\,b_{{1}}-{\frac{230} 
{3}}\,b_{{3}}\right)\right.\right.\\ \left.\left.\left(5+\left(9\,b_{{3}}+4\,b_{{2}}+b_{{1}} \right){v}^{2}\right)\sin\left(v\right)-672\,v\left(\left( 9\,b_{{3}}+4\,b_{{2}}+b_{{1}}\right)^{2}\right.\right.\right.\\ \left.\left.\left.\left(b_{{1}}+{\frac{61}{ 14}}\,b_{{2}}+9\,b_{{3}}\right){v}^{4}-\frac{1}{7}\,\left(9\,b_{{3}}+4\,b_ {{2}}+b_{{1}}\right)\left(81\,{b_{{3}}}^{2}+\right.\right.\right.\right.\\ \left.\left.\left.\left.\left(-630+{\frac{ 189}{2}}\,b_{{2}}+18\,b_{{1}}\right)b_{{3}}+26\,{b_{{2}}}^{2}+ \left(\frac{21}{2}\,b_{{1}}-305\right)b_{{2}}+{b_{{1}}}^{2}-70\,b_{{1}} \right){v}^{2}\right.\right.\right.\\ \left.\left.\left.+{\frac{405}{7}}\,{b_{{3}}}^{2}+\left({\frac{135}{ 2}}\,b_{{2}}+225+{\frac{90}{7}}\,b_{{1}}\right)b_{{3}}+{\frac{130} {7}}\,{b_{{2}}}^{2}+\left(\frac{15}{2}\,b_{{1}}+{\frac{1525}{14}}\right)b _{{2}}+25\,b_{{1}}\right.\right.\right.\\ \left.\left.\left.+\frac{5}{7}\,{b_{{1}}}^{2}\right)\right)\left(\cos \left(v\right)\right)^{2}+\left(16\,\left(b_{{2}}\left(9\, b_{{3}}+4\,b_{{2}}+b_{{1}}\right)^{2}{v}^{6}-14\,\left(9\,b_{{3}}+ 4\,b_{{2}}+b_{{1}}\right)\right.\right.\right.\\ \left.\left.\left.\left(9\,b_{{3}}+{\frac{23}{7}}\,b_{{2}} +b_{{1}}\right){v}^{4}+\left(729\,{b_{{3}}}^{2}+\left(-1260+162 \,b_{{1}}+{\frac{1701}{2}}\,b_{{2}}\right)b_{{3}}+234\,{b_{{2}}}^{2}\right.\right.\right.\right.\\ \left.\left.\left.\left.+\left(-535+{\frac{189}{2}}\,b_{{1}}\right)b_{{2}}-140\,b_{{1}}+ 9\,{b_{{1}}}^{2}\right){v}^{2}-350-15\,b_{{1}}-{\frac{195}{2}}\,b_{ {2}}-135\,b_{{3}}\right)\right.\right.\\ \left.\left.\left(5+\left(9\,b_{{3}}+4\,b_{{2}}+b_{{ 1}}\right){v}^{2}\right)\sin\left(v\right)+288\,v\left( \left(b_{{1}}+{\frac{51}{4}}\,b_{{3}}+{\frac{53}{12}}\,b_{{2}} \right)\right.\right.\right.\\ \left.\left.\left.\left(9\,b_{{3}}+4\,b_{{2}}+b_{{1}}\right)^{2}{v}^{4}-5\, \left(9\,b_{{3}}+4\,b_{{2}}+b_{{1}}\right)\left(9\,{b_{{3}}}^{2} +\left(b_{{1}}+7\,b_{{2}}-{\frac{51}{2}}\right)b_{{3}}\right.\right.\right.\right.\\ 
\left.\left.\left.\left.+\frac{4}{3}\,{b_{{ 2}}}^{2}+\left(\frac{1}{3}\,b_{{1}}-{\frac{53}{6}}\right)b_{{2}}-2\,b_{{1 }}\right){v}^{2}+225\,{b_{{3}}}^{2}+\left(25\,b_{{1}}+{\frac{1275 }{4}}+175\,b_{{2}}\right)b_{{3}}\right.\right.\right.\\ \left.\left.\left.+{\frac{100}{3}}\,{b_{{2}}}^{2}+ \left({\frac{25}{3}}\,b_{{1}}+{\frac{1325}{12}}\right)b_{{2}}+25 \,b_{{1}}\right)\right)\cos\left(v\right)+\left(\left(b_{{ 1}}-27\,b_{{3}}\right)\right.\right.\\ \left.\left.\left(9\,b_{{3}}+4\,b_{{2}}+b_{{1}}\right) ^{2}{v}^{6}+63\,\left({\frac{23}{7}}\,b_{{3}}+b_{{1}}+{\frac{212}{ 63}}\,b_{{2}}\right)\left(9\,b_{{3}}+4\,b_{{2}}+b_{{1}}\right){v }^{4}\right.\right.\\ \left.\left.+\left(-9720\,{b_{{3}}}^{2}+\left(4095-7560\,b_{{2}}-1080\,b_ {{1}}\right)b_{{3}}-1440\,{b_{{2}}}^{2}\right.\right.\right.\\ \left.\left.\left.+\left(-360\,b_{{1}}+2120 \right)b_{{2}}+555\,b_{{1}}\right){v}^{2}+1800\,b_{{3}}+600\,b_{{2 }}+1325\right)\right.\\ \left.\left(5+\left(9\,b_{{3}}+4\,b_{{2}}+b_{{1}} \right){v}^{2}\right)\sin\left(v\right)+48\,v\left(\left( \frac{13}{2}\,b_{{2}}+9\,b_{{3}}+b_{{1}}\right)\right.\right.\\ \left.\left.\left(9\,b_{{3}}+4\,b_{{2}} +b_{{1}}\right)^{2}{v}^{4}+\frac{1}{2}\,\left(9\,b_{{3}}+4\,b_{{2}}+b_{{1} }\right)\right.\right.\\ \left.\left.\left(81\,{b_{{3}}}^{2}+\left({\frac{45}{2}}\,b_{{0}}+ 180+27\,b_{{2}}+18\,b_{{1}}\right)b_{{3}}-4\,{b_{{2}}}^{2}+\left(3 \,b_{{1}}+130+10\,b_{{0}}\right)b_{{2}}\right.\right.\right.\\ \left.\left.\left.+\left(b_{{1}}+\frac{5}{2}\,b_{{0}} +20\right)b_{{1}}\right){v}^{2}-{\frac{405}{2}}\,{b_{{3}}}^{2}+ \left(-{\frac{135}{2}}\,b_{{2}}-{\frac{225}{4}}\,b_{{0}}+225-45\,b _{{1}}\right)b_{{3}}\right.\right.\\ \left.\left.+10\,{b_{{2}}}^{2}+\left(-25\,b_{{0}}+{\frac{ 325}{2}}-\frac{15}{2}\,b_{{1}}\right)b_{{2}}-\frac{5}{2}\,\left(-10+\frac{5}{2}\,b_{{0}}+b _{{1}}\right)b_{{1}}\right)\right)\\ /\left(5+\left(9\,b_{{3}}+4\,b_{{2}}+b_{{1}}\right){v}^{2}\right)^{4} \end{array}\end{equation*} After solving the system: $$ PL=0, \quad PL'=0, 
\quad PL''=0, \quad PL'''=0 $$ \noindent we get the coefficients: \begin{equation} \label{coeff_meth_phl_deriv_3} \begin{array}{l} \displaystyle b_0=-\frac{b_{0,num}}{3 D}, \quad b_1=\frac{b_{1,num}}{4 D}, \quad b_2=-\frac{b_{2,num}}{2 D}, \quad b_3=\frac{b_{3,num}}{12 D} ,\\ \mbox{where } D={{v}^{5}\left(\cos\left(v\right)+1\right)\left(\sin\left(v\right)\right)^{3}} \;\;\; \mbox{and} \end{array} \end{equation} \bc b_{{0,num}}=192\,(\cos(v))^{6} {v}^{2}-126\,\sin(v)(\cos(v) )^{3}{v}^{3}+99\,\sin(v)(\cos(v ))^{2}v\\ -126\,\sin(v)(\cos (v))^{2}{v}^{3}-18\,\sin(v){v}^{ 3}\cos(v)+630\,\sin(v)(\cos (v))^{3}v\\ -144\,\sin(v)( \cos(v))^{6}v+48\,\sin(v){v}^{3} (\cos(v))^{7}-288\,\sin(v )(\cos(v))^{7}v\\ -144\,( \cos(v))^{2}-12\,(\cos(v) )^{3}+336\,(\cos(v))^{4}+48\, \sin(v){v}^{3}(\cos(v))^{ 6}\\ +30\,{v}^{2}+36\,\cos(v)+144\,(\cos(v ))^{7}{v}^{2}-24\,(\cos(v) )^{5}+249\,(\cos(v))^{2}{v}^{2}\\ -418\,(\cos(v))^{3}{v}^{2}-662\,( \cos(v))^{4}{v}^{2}+148\,(\cos(v ))^{5}{v}^{2}-99\,\cos(v)\sin(v )v\\ -9\,\sin(v)v+66\,\cos(v){v}^{2 }+96\,\sin(v){v}^{3}(\cos(v) )^{5}+96\,\sin(v)(\cos(v) )^{4}{v}^{3}\\ -126\,\sin(v)(\cos(v ))^{4}v-18\,\sin(v){v}^{3}-192\, (\cos(v))^{8}+176\,{v}^{2}(\cos (v))^{8}\\ -288\,\sin(v)( \cos(v))^{5}v \end{array}\end{equation*} \bc b_{{1,num}}=12+352\,(\cos(v))^{ 6}{v}^{2}-36\,\sin(v)(\cos(v) )^{3}{v}^{3}+168\,\sin(v)(\cos(v ))^{2}v\\ -100\,\sin(v)(\cos (v))^{2}{v}^{3}-92\,\sin(v){v}^{ 3}\cos(v)+96\,\sin(v)(\cos (v))^{3}v\\ -672\,\sin(v)( \cos(v))^{6}v-384\,(\cos(v ))^{7}+36\,(\cos(v))^{2}\\ -48\,(\cos(v))^{3}-48\,(\cos (v))^{4}+128\,\sin(v){v}^{3} (\cos(v))^{6}+33\,{v}^{2}-48\,\cos (v)\\ +448\,(\cos(v))^{7}{v} ^{2}+480\,(\cos(v))^{5}-129\,( \cos(v))^{2}{v}^{2}-196\,(\cos(v ))^{3}{v}^{2}\\ -316\,(\cos(v) )^{4}{v}^{2}-464\,(\cos(v))^{5}{ v}^{2}+12\,\cos(v)\sin(v)v-45\,\sin (v)v\\ +197\,\cos(v){v}^{2}+128\,\sin (v){v}^{3}(\cos(v))^{5}- 32\,\sin(v)(\cos(v))^{4}{ v}^{3}\\ +504\,\sin(v)(\cos(v) )^{4}v+4\,\sin(v){v}^{3}-288\,\sin(v )(\cos(v))^{5}v 
\end{array}\end{equation*} \bc b_{{2,num}}=152\,(\cos(v))^{6} {v}^{2}-96\,(\cos(v))^{6}+48\,\sin (v){v}^{3}(\cos(v))^{5}\\ -192\,\sin(v)(\cos(v))^{5} v+104\,(\cos(v))^{5}{v}^{2}-72\,\sin (v)(\cos(v))^{4}v\\ +48\,\sin(v)(\cos(v))^{4}{v}^{ 3}-244\,(\cos(v))^{4}{v}^{2}+144\, (\cos(v))^{4}\\ +216\,\sin(v)(\cos(v))^{3}v-44\,\sin( v)(\cos(v))^{3}{v}^{3}-134\, (\cos(v))^{3}{v}^{2}\\ -12\,(\cos(v))^{3}+39\,\sin(v)(\cos (v))^{2}v-44\,\sin(v)( \cos(v))^{2}{v}^{3}\\ -48\,(\cos(v))^{2}+79\,(\cos(v))^{2} {v}^{2}-33\,\cos(v)\sin(v)v-4\,\sin (v){v}^{3}\cos(v)\\ +18\,\cos(v){v}^{2}+12\,\cos(v)-3\,\sin(v)v -4\,\sin(v){v}^{3}+10\,{v}^{2} \end{array}\end{equation*} \bc b_{{3,num}}=208\,(\cos(v))^{5} {v}^{2}-96\,(\cos(v))^{5}-216\,\sin (v)(\cos(v))^{4}v\\ +120\,(\cos(v))^{4}{v}^{2}+96\,\sin(v )(\cos(v))^{4}{v}^{3}-360\, (\cos(v))^{3}{v}^{2}\\ +144\,(\cos (v))^{3}-72\,\sin(v)(\cos (v))^{3}v+72\,\sin(v)( \cos(v))^{3}{v}^{3}\\ +252\,\sin(v) (\cos(v))^{2}v-157\,(\cos( v))^{2}{v}^{2}-120\,\sin(v)(\cos (v))^{2}{v}^{3}\\ -12\,(\cos(v))^{2}+149\,\cos(v){v}^{2}-48\,\cos (v)+36\,\cos(v)\sin(v)v\\ -72\,\sin(v){v}^{3}\cos(v)-45\,\sin (v)v+25\,{v}^{2}+24\,\sin(v){v}^{3}+12 \end{array}\end{equation*} The Taylor series expansions of the coefficients are given below: \bc \displaystyle b_{{0}}=-{\frac {12629}{3024}}+{\frac {45767}{9072}}\,{v}^{2}-{\frac {27865393}{11975040}}\,{v}^{4}+{\frac {557684327}{817296480}}\,{v}^{6} \\ \displaystyle -{\frac {235111157089}{1569209241600}}\,{v}^{8}+{\frac {575696865983}{26676557107200}}\,{v}^{10}-{\frac {73845973877087}{32750603956224000}}\,{v}^{12}+...\\ \\ \displaystyle b_{{1}}={\frac {20483}{4032}}-{\frac {45767}{12096}}\,{v}^{2}+{\frac {3549253}{2280960}}\,{v}^{4}-{\frac {36881797}{99066240}}\,{v}^{6} \\ \displaystyle +{\frac {95714204623}{2092278988800}}\,{v}^{8}-{\frac {138581370311}{35568742809600}}\,{v}^{10}+{\frac {106905916402097}{567677135241216000}}\,{v}^{12}\\ \\ \displaystyle b_{{2}}=-{\frac {3937}{2016}}+{\frac {45767}{30240}}\,{v}^{2}-{\frac 
{3156581}{7983360}}\,{v}^{4}+{\frac {21796097}{681080400}}\,{v}^{6} \\ \displaystyle -{\frac {2365857293}{1046139494400}}\,{v}^{8}-{\frac {102137141}{17784371404800}}\,{v}^{10}-{\frac {3198002983423}{283838567620608000}}\,{v}^{12}\\ \\ \displaystyle b_{{3}}={\frac {17671}{12096}}-{\frac {45767}{181440}}\,{v}^{2}+{\frac {135959}{47900160}}\,{v}^{4}-{\frac {14453093}{16345929600}}\,{v}^{6} \\ \displaystyle -{\frac {90901339}{896690995200}}\,{v}^{8}-{\frac {1564247467}{106706228428800}}\,{v}^{10}-{\frac {3513993676211}{1703031405723648000}}\,{v}^{12} \end{array}\end{equation*} It is noteworthy that the Taylor series expansions of all four optimized methods coincide in the constant term and the coefficient of $v^2$ and differ on the coefficients of $v^4$ and for higher powers. \subsection{Error analysis} We present the principal term of the local truncation error of the five methods: Classical method: \bc \displaystyle PLTE_{Classical}={\frac{45767}{725760}}\,y^{(10)}{h}^{10} \end{array}\end{equation*} Phase fitted method: \bc \displaystyle PLTE_{Phase-Fitted}={\frac{45767}{725760}}\left(y^{(10)}+y^{(8)}{\omega}^{2}\right){h}^{10} \end{array}\end{equation*} Zero $PL$ and $PL'$ method: \bc \displaystyle PLTE_{1st\;deriv}={\frac{45767}{725760}}\left(y^{(10)}+{\omega}^{4}y^{(6)}+2\,{\omega}^{2}y^{(8)}\right){h}^{10} \end{array}\end{equation*} Zero $PL$, $PL'$ and $PL''$ method: \bc \displaystyle PLTE_{2nd\;deriv}={\frac{45767}{725760}}\left(y^{(10)}+3\,{\omega}^{4}y^{(6)}+3\,{\omega}^{2}y^{(8)}+{\omega}^{6}y^{(4)}\right){h}^{10} \end{array}\end{equation*} Zero $PL$, $PL'$, $PL''$ and $PL'''$ method: \bc \displaystyle PLTE_{3rd\;deriv}={\frac{45767}{725760}}\left(6\,y^{(6)}{\omega}^{4}+y^{(10)}+4\,y^{(4)}{\omega}^{6}+4\,{\omega}^{2}y^{(8)}+{\omega}^{8}y^{(2)}\right){h}^{10} \end{array}\end{equation*} \noindent where $\omega$ is the dominant frequency of the problem. 
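The $h^{10}$ principal error term reflects the algebraic order eight of the underlying symmetric eight-step scheme. As an illustrative consistency check (our sketch, not part of the original derivation), the following snippet verifies the order conditions of the classical method, whose coefficients can be read off the classical characteristic equation given in the stability analysis section: the $s^0$ parts of the $\lambda^k$ coefficients give the $\alpha_j$, the $s^2$ parts (over the common denominator $12096$) give the $\beta_j$.

```python
from fractions import Fraction as F
from math import factorial

# Symmetric 8-step scheme: sum_j alpha_j y_{n+j} = h^2 sum_j beta_j y''_{n+j}.
# Coefficients read off C.E._Classical (denominator 12096 for beta).
alpha = [1, -2, 2, -1, 0, -1, 2, -2, 1]
beta  = [F(b, 12096) for b in
         [0, 17671, -23622, 61449, -50516, 61449, -23622, 17671, 0]]

def defect(q):
    """Taylor defect C_q; the method has order p iff C_0 = ... = C_{p+1} = 0."""
    s = sum(F(j)**q * a for j, a in enumerate(alpha)) / factorial(q)
    if q >= 2:
        s -= sum(F(j)**(q - 2) * b for j, b in enumerate(beta)) / factorial(q - 2)
    return s

# The defects vanish exactly through q = 9 (algebraic order 8); the first
# non-zero defect, at q = 10, produces the h^10 principal truncation error.
print([defect(q) for q in range(11)])
```

The computation is done in exact rational arithmetic, so the vanishing of the first ten defects is verified exactly rather than up to rounding.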
We also present the principal term of the local truncation error of the above methods for the case of the one-dimensional time-independent Schr\"odinger equation: Classical method: \bc \displaystyle PLTE_{Classical}={\frac{1}{725760}}\,{h}^{10}\Big[-45767\,y{E}^{5}+228835\,y\,{E}^{4}\\ +((-2288350\,W''-457670\,(W)^{2})y-915340\,(W')y'){E}^{3}\\ +((457670\,(W)^{3}+3935962\,W^{(4)}+6865050\,W\,''+4576700\,(W')^{2})y\\ +3661360\,(W^{(3)})y'+2746020\,(W')y'W){E}^{2}+((-228835\,(W)^{4}\\ -6865050\,(W)^{2}W''+(-7871924\,W^{(4)}-9153400\,(W')^{2})W-1327243\,W^{(6)}\\ -9656837\,(W'')^{2}-15469246\,(W')W^{(3)})y-14645440\,(W')(W'')y'\\ -7322720\,W(W^{(3)})y'-2837554\,(W^{(5)})y'-2746020\,(W)^{2}(W')y')E\\ +(45767\,(W)^{5}+2288350\,(W)^{3}W''+(3935962\,W^{(4)}+4576700\,(W')^{2})(W)^{2}\\ +(1327243\,W^{(6)}+9656837\,(W'')^{2}+15469246\,(W')W^{(3)})W\\ +2929088\,(W')W^{(5)}+45767\,W^{(8)}+4485166\,(W'')W^{(4)}\\ +10617944\,(W')^{2}W''+2562952\,(W^{(3)})^{2})y+915340\,(W)^{3}(W')y'\\ +3661360\,(W)^{2}(W^{(3)})y'+(2837554\,(W^{(5)})y'+14645440\,(W')(W'')y')W\\ +3661360\,(W')^{3}y'+366136\,(W^{(7)})y'+12814760\,(W'')(W^{(3)})y'\\ +8238060\,(W')(W^{(4)})y'\Big] \end{array}\end{equation*} Phase fitted method: \bc \displaystyle PLTE_{Phase-Fitted}={\frac{1}{725760}}\,{h}^{10}\Big[(-45767\,{\overline{W}}+45767\,W)y{E}^{4}\\ +((-1281476\,W''+183068\,W{\overline{W}}-183068\,(W)^{2})y-366136\,W'y'){E}^{3}\\ +(((4851302\,W-1006874\,{\overline{W}})W''-274602\,(W)^{2}{\overline{W}}+274602\,(W)^{3}\\ +3203690\,W^{(4)}+3295224\,(W')^{2})y+((-549204\,{\overline{W}}+1647612\,W)W'\\ +2562952\,W^{(3)})y'){E}^{2}+((-8970332\,(W'')^{2}+(2013748\,W{\overline{W}}\\ -5858176\,(W)^{2})W''+(-7871924\,W+1281476\,{\overline{W}})(W')^{2}\\ -14279304\,W'W^{(3)}+183068\,(W)^{3}{\overline{W}}-183068\,(W)^{4}+732272\,W^{(4)}{\overline{W}}\\ -7139652\,WW^{(4)}-1281476\,W^{(6)})y-12448624\,W'W''y'\\ +((1098408\,W{\overline{W}}-2196816\,(W)^{2})W'+1098408\,W^{(3)}{\overline{W}}\\ 
-6224312\,WW^{(3)}-2562952\,W^{(5)})y')E+((9656837\,W\\ -686505\,{\overline{W}})(W'')^{2}+(-1006874\,(W)^{2}{\overline{W}}+4485166\,W^{(4)}\\ +2288350\,(W)^{3}+10617944\,(W')^{2})W''+(4576700\,(W)^{2}\\ -1281476\,W{\overline{W}})(W')^{2}+(2929088\,W^{(5)}-1189942\,W^{(3)}{\overline{W}}\\ +15469246\,WW^{(3)})W'+45767\,W^{(8)}+45767\,(W)^{5}-45767\,(W)^{4}{\overline{W}}\\ +3935962\,(W)^{2}W^{(4)}+(1327243\,W^{(6)}-732272\,W^{(4)}{\overline{W}})W\\ +2562952\,(W^{(3)})^{2}-45767\,{\overline{W}}\,W^{(6)})y+((-2196816\,{\overline{W}}\\ +14645440\,W)W'+12814760\,W^{(3)})y'W''+(3661360\,(W')^{3}\\ +(8238060\,W^{(4)}+915340\,(W)^{3}-549204\,(W)^{2}{\overline{W}})W'\\ +3661360\,(W)^{2}W^{(3)}+(2837554\,W^{(5)}-1098408\,W^{(3)}{\overline{W}})W\\ -274602\,{\overline{W}}\,W^{(5)}+366136\,W^{(7)})y'\Big] \end{array}\end{equation*} Zero $PL$ and $PL'$ method: \bc \displaystyle PLTE_{1st\;deriv}={\frac{1}{725760}}\,{h}^{10}\Big[((-45767\,(W)^{2}-594971\,W''-45767\,{{\overline{W}}}^{2}\\ +91534\,W{\overline{W}})y-91534\,W'y'){E}^{3}+((137301\,(W)^{3}-274602\,(W)^{2}{\overline{W}}\\ +(3157923\,W''+137301\,{{\overline{W}}}^{2})W+2517185\,W^{(4)}-1373010\,W''{\overline{W}}\\ +2196816\,(W')^{2})y+(1647612\,W^{(3)}-549204\,W'{\overline{W}}+823806\,W\,W')y'){E}^{2}\\ +((-1235709\,W^{(6)}-137301\,(W)^{4}+274602\,(W)^{3}{\overline{W}}+(-4851302\,W''\\ -137301\,{{\overline{W}}}^{2})(W)^{2}+(-6590448\,(W')^{2}+3386758\,W''{\overline{W}}\\ -6407380\,W^{(4)})W-320369\,W''{{\overline{W}}}^{2}-8283827\,(W'')^{2}+1373010\,{\overline{W}}\,W^{(4)}\\ +2196816\,(W')^{2}{\overline{W}}-13089362\,W'W^{(3)})y+(-2288350\,W^{(5)}\\ -1647612\,(W)^{2}W'+(-5125904\,W^{(3)}+1647612\,W'{\overline{W}})W-274602\,W'{{\overline{W}}}^{2}\\ -10251808\,W'W''+1830680\,{\overline{W}}\,W^{(3)})y')E+(2929088\,W'W^{(5)}\\ +45767\,W^{(8)}+(1327243\,W-91534\,{\overline{W}})W^{(6)}+45767\,(W)^{5}\\ -91534\,(W)^{4}{\overline{W}}+(45767\,{{\overline{W}}}^{2}+2288350\,W'')(W)^{3}+(4576700\,(W')^{2}\\ 
-2013748\,W''{\overline{W}}+3935962\,W^{(4)})(W)^{2}+(-2562952\,(W')^{2}{\overline{W}}\\ +9656837\,(W'')^{2}+320369\,W''{{\overline{W}}}^{2}-1464544\,{\overline{W}}\,W^{(4)}\\ +15469246\,W'W^{(3)})W+(45767\,{{\overline{W}}}^{2}+4485166\,W'')W^{(4)}\\ +10617944\,(W')^{2}W''-1373010\,(W'')^{2}{\overline{W}}+183068\,(W')^{2}{{\overline{W}}}^{2}\\ +2562952\,(W^{(3)})^{2}-2379884\,W'W^{(3)}{\overline{W}})y+((-549204\,{\overline{W}}\\ +2837554\,W)W^{(5)}+366136\,W^{(7)}+915340\,W'(W)^{3}+(-1098408\,W'{\overline{W}}\\ +3661360\,W^{(3)})(W)^{2}+(-2196816\,{\overline{W}}\,W^{(3)}+14645440\,W'W''\\ +274602\,W'{{\overline{W}}}^{2})W+8238060\,W'W^{(4)}+(12814760\,W''+183068\,{{\overline{W}}}^{2})W^{(3)}\\ -4393632\,W'W''{\overline{W}}+3661360\,(W')^{3})y'\Big] \end{array}\end{equation*} Zero $PL$, $PL'$ and $PL''$ method: \bc \displaystyle PLTE_{2nd\;deriv}={\frac{1}{725760}}\,{h}^{10}\Big[-183068\,y\,W''{E}^{3}+((45767\,(W)^{3}\\ -137301\,{\overline{W}}\,(W)^{2}+(1784913\,W''+137301\,{{\overline{W}}}^{2})W-1235709\,{\overline{W}}\,W''\\ -45767\,{{\overline{W}}}^{3}+1281476\,(W')^{2}+1876447\,W^{(4)})y+274602\,W'y'W\\ +(915340\,W^{(3)}-274602\,W'{\overline{W}})y'){E}^{2}+((-91534\,(W)^{4}+274602\,(W)^{3}{\overline{W}}\\ +(-274602\,{{\overline{W}}}^{2}-3844428\,W'')(W)^{2}+(-5675108\,W^{(4)}+91534\,{{\overline{W}}}^{3}\\ -5308972\,(W')^{2}+4119030\,{\overline{W}}\,W'')W-11899420\,W'W^{(3)}-7597322\,(W'')^{2}\\ +2746020\,(W')^{2}{\overline{W}}-823806\,{{\overline{W}}}^{2}W''+1922214\,W^{(4)}{\overline{W}}-1189942\,W^{(6)})y\\ -1098408\,(W)^{2}W'y'+(-4027496\,W^{(3)}+1647612\,W'{\overline{W}})y'W\\ +(2196816\,W^{(3)}{\overline{W}}-8054992\,W'W''-549204\,W'{{\overline{W}}}^{2}-2013748\,W^{(5)})y')E\\ +(45767\,(W)^{5}-137301\,(W)^{4}{\overline{W}}+(137301\,{{\overline{W}}}^{2}+2288350\,W'')(W)^{3}\\ +(-3020622\,{\overline{W}}\,W''+3935962\,W^{(4)}-45767\,{{\overline{W}}}^{3}+4576700\,(W')^{2})(W)^{2}\\ 
+(9656837\,(W'')^{2}+1327243\,W^{(6)}-3844428\,(W')^{2}{\overline{W}}+961107\,{{\overline{W}}}^{2}W''\\ -2196816\,W^{(4)}{\overline{W}}+15469246\,W'W^{(3)})W+45767\,W^{(8)}-137301\,W^{(6)}{\overline{W}}\\ +2929088\,W'W^{(5)}+(4485166\,W''+137301\,{{\overline{W}}}^{2})W^{(4)}+2562952\,(W^{(3)})^{2}\\ -3569826\,W'W^{(3)}{\overline{W}}-2059515\,(W'')^{2}{\overline{W}}+(-45767\,{{\overline{W}}}^{3}\\ +10617944\,(W')^{2})W''+549204\,(W')^{2}{{\overline{W}}}^{2})y+915340\,(W)^{3}W'y'\\ +(-1647612\,W'{\overline{W}}+3661360\,W^{(3)})y'(W)^{2}+(14645440\,W'W''\\ -3295224\,W^{(3)}{\overline{W}}+823806\,W'{{\overline{W}}}^{2}+2837554\,W^{(5)})y'W\\ +366136\,W^{(7)}y'+(-823806\,W^{(5)}{\overline{W}}+8238060\,W'W^{(4)}\\ +(549204\,{{\overline{W}}}^{2}+12814760\,W'')W^{(3)}-91534\,W'{{\overline{W}}}^{3}\\ -6590448\,W'W''{\overline{W}}+3661360\,(W')^{3})y'\Big] \end{array}\end{equation*} Zero $PL$, $PL'$, $PL''$ and $PL'''$ method: \bc \displaystyle PLTE_{3rd\;deriv}={\frac{1}{725760}}\,{h}^{10}\Big[((1281476\,W^{(4)}+(-732272\,{\overline{W}}\\ +732272\,W)W''+549204\,(W')^{2})y+366136\,W^{(3)}y'){E}^{2}+((-1144175\,W^{(6)}\\ +(2379884\,{\overline{W}}-4942836\,W)W^{(4)}-10709478\,W'W^{(3)}-6910817\,(W'')^{2}\\ +(4210564\,W{\overline{W}}-1373010\,{{\overline{W}}}^{2}-2837554\,(W)^{2})W''\\ -45767\,(W)^{4}+183068\,(W)^{3}{\overline{W}}-274602\,(W)^{2}{{\overline{W}}}^{2}\\ +(-4027496\,(W')^{2}+183068\,{{\overline{W}}}^{3})W+2929088\,(W')^{2}{\overline{W}}\\ -45767\,{{\overline{W}}}^{4})y+(-1739146\,W^{(5)}+(2196816\,{\overline{W}}\\ -2929088\,W)W^{(3)}+1098408\,W{\overline{W}}\,W'-5858176\,W'W''-549204\,W'{{\overline{W}}}^{2}\\ -549204\,(W)^{2}W')y')E+(45767\,W^{(8)}+(1327243\,W-183068\,{\overline{W}})W^{(6)}\\ +2929088\,W'W^{(5)}+(3935962\,(W)^{2}+4485166\,W''+274602\,{{\overline{W}}}^{2}\\ -2929088\,W{\overline{W}})W^{(4)}+2562952\,(W^{(3)})^{2}+(-4759768\,W'{\overline{W}}\\ +15469246\,W\,W')W^{(3)}+(-2746020\,{\overline{W}}+9656837\,W)(W'')^{2}\\ 
+(2288350\,(W)^{3}+1922214\,W{{\overline{W}}}^{2}-4027496\,(W)^{2}{\overline{W}}\\ +10617944\,(W')^{2}-183068\,{{\overline{W}}}^{3})W''+45767\,(W)^{5}\\ -183068\,(W)^{4}{\overline{W}}+274602\,(W)^{3}{{\overline{W}}}^{2}+(-183068\,{{\overline{W}}}^{3}\\ +4576700\,(W')^{2})(W)^{2}+(-5125904\,(W')^{2}{\overline{W}}+45767\,{{\overline{W}}}^{4})W\\ +1098408\,(W')^{2}{{\overline{W}}}^{2})y+(366136\,W^{(7)}+(-1098408\,{\overline{W}}\\ +2837554\,W)W^{(5)}+8238060\,W'W^{(4)}+(3661360\,(W)^{2}\\ -4393632\,W{\overline{W}}+1098408\,{{\overline{W}}}^{2}+12814760\,W'')W^{(3)}+(-8787264\,W'{\overline{W}}\\ +14645440\,W\,W')W''+1647612\,W{{\overline{W}}}^{2}W'-2196816\,(W)^{2}{\overline{W}}\,W'\\ +3661360\,(W')^{3}+915340\,(W)^{3}W'-366136\,{{\overline{W}}}^{3}W')y'\Big]
\end{array}\end{equation*}
The principal terms of the local truncation errors presented above are collected with respect to the energy $E$ in descending order. As we can easily see, the maximum power of $E$ in the error for each case is:
\begin{itemize}
\item $E^5$ for the classical method
\item $E^4$ for the phase-fitted method
\item $E^3$ for the zero $PL$ and $PL'$ method
\item $E^3$ for the zero $PL$, $PL'$ and $PL''$ method and
\item $E^2$ for the zero $PL$, $PL'$, $PL''$ and $PL'''$ method.
\end{itemize}
A low maximum power of $E$ is crucial when integrating the Schr\"odinger equation with a high value of the energy.
\subsection{Stability analysis}
The stability analysis of the methods concerns the application of the methods to the test problem $y''=-\omega^2\,y$.
Here we present the characteristic equations of the five methods: \bc C.E._{Classical}=1+{\lambda}^{8}+{\frac{1}{12096}}\,(17671\,{s}^{2}-24192){\lambda}^{7}+{\frac{1}{12096}}\,(-23622\,{s}^{2}\\ +24192){\lambda}^{6}+{\frac{1}{12096}}\,(61449\,{s}^{2}-12096){\lambda}^{5}-{\frac{12629}{3024}}\,{s}^{2}{\lambda}^{4}\\ +{\frac{1}{12096}}\,(61449\,{s}^{2}-12096){\lambda}^{3}+{\frac{1}{12096}}\,(-23622\,{s}^{2}+24192){\lambda}^{2}\\ +{\frac{1}{12096}}\,(17671\,{s}^{2}-24192)\lambda \end{array}\end{equation*} \bc C.E._{Phase-Fitted}=-{\frac{109}{32}}\,(1+{\lambda}^{2}-2\,\lambda\,\cos(s))(-{\frac{32}{109}}\,(\lambda-1)^{6}(\cos(s))^{3}\\ +({\frac{96}{109}}+{\frac{96}{109}}\,{\lambda}^{6}+({s}^{2}-{\frac{416}{109}}){\lambda}^{5}+(-{\frac{808}{327}}\,{s}^{2}+{\frac{880}{109}}){\lambda}^{4}+({\frac{1202}{327}}\,{s}^{2}-{\frac{1120}{109}}){\lambda}^{3}\\ +(-{\frac{808}{327}}\,{s}^{2}+{\frac{880}{109}}){\lambda}^{2}+({s}^{2}-{\frac{416}{109}})\lambda)(\cos(s))^{2}+(-{\frac{96}{109}}-{\frac{96}{109}}\,{\lambda}^{6}\\ +(-{\frac{404}{327}}\,{s}^{2}+{\frac{296}{109}}){\lambda}^{5}+(-{\frac{480}{109}}+{\frac{736}{327}}\,{s}^{2}){\lambda}^{4}+({\frac{560}{109}}-{\frac{1144}{327}}\,{s}^{2}){\lambda}^{3}\\ +(-{\frac{480}{109}}+{\frac{736}{327}}\,{s}^{2}){\lambda}^{2}+(-{\frac{404}{327}}\,{s}^{2}+{\frac{296}{109}})\lambda)\cos(s)+{\frac{32}{109}}+{\frac{32}{109}}\,{\lambda}^{6}\\ +(-{\frac{72}{109}}+{\frac{137}{327}}\,{s}^{2}){\lambda}^{5}+({\frac{80}{109}}-{\frac{56}{109}}\,{s}^{2}){\lambda}^{4}+(-{\frac{80}{109}}+{\frac{302}{327}}\,{s}^{2}){\lambda}^{3}\\ +({\frac{80}{109}}-{\frac{56}{109}}\,{s}^{2}){\lambda}^{2}+(-{\frac{72}{109}}+{\frac{137}{327}}\,{s}^{2})\lambda)(\cos(s)-1)^{-3} \end{array}\end{equation*} \bc C.E._{1st\,Deriv}={\frac{125}{48}}\,(-{\frac{96}{125}}\,\lambda\,s(\lambda-1)^{4}(\cos(s))^{5}+{\frac{48}{125}}\,(4\,\lambda\,\sin(s)\\ +s({\lambda}^{2}+4\,\lambda+1))(\lambda-1)^{4}(\cos(s))^{4}+(-{\frac{192}{125}}\,\lambda\,(\lambda-1)^{4}\sin(s)\\ 
-2\,({\frac{48}{125}}+{\frac{48}{125}}\,{\lambda}^{6}-{\frac{192}{125}}\,{\lambda}^{5}+({\frac{396}{125}}+{s}^{2}){\lambda}^{4}+(-{\frac{38}{25}}\,{s}^{2}-{\frac{504}{125}}){\lambda}^{3}\\ +({\frac{396}{125}}+{s}^{2}){\lambda}^{2}-{\frac{192}{125}}\,\lambda)s)(\cos(s))^{3}+\lambda\,(-{\frac{96}{125}}\,(\lambda-1)^{4}\sin(s)+(({s}^{2}-{\frac{132}{125}}){\lambda}^{4}\\ +(-{\frac{62}{25}}\,{s}^{2}+{\frac{648}{125}}){\lambda}^{3}+({\frac{98}{25}}\,{s}^{2}-{\frac{1032}{125}}){\lambda}^{2}+(-{\frac{62}{25}}\,{s}^{2}+{\frac{648}{125}})\lambda+{s}^{2}\\ -{\frac{132}{125}})s)(\cos(s))^{2}+({\frac{24}{25}}\,\lambda\,(\lambda-1)^{4}\sin(s)+{\frac{12}{25}}\,s(\frac{8}{5}+\frac{8}{5}\,{\lambda}^{6}+(-{\frac{24}{5}}+{s}^{2}){\lambda}^{5}\\ +(\frac{1}{6}\,{s}^{2}+{\frac{34}{5}}){\lambda}^{4}+(-{\frac{36}{5}}-\frac{1}{3}\,{s}^{2}){\lambda}^{3}+(\frac{1}{6}\,{s}^{2}+{\frac{34}{5}}){\lambda}^{2}+(-{\frac{24}{5}}+{s}^{2})\lambda))\cos(s)\\ -{\frac{24}{125}}\,\lambda\,(\lambda-1)^{4}\sin(s)-{\frac{13}{25}}\,(1+{\lambda}^{2})({\frac{48}{65}}+{\frac{48}{65}}\,{\lambda}^{4}+({s}^{2}-{\frac{132}{65}}){\lambda}^{3}\\ +(-{\frac{14}{13}}\,{s}^{2}+{\frac{168}{65}}){\lambda}^{2}+({s}^{2}-{\frac{132}{65}})\lambda)s)(1+{\lambda}^{2}-2\,\lambda\,\cos(s)){s}^{-1}\\ (\cos(s)+1)^{-1}(\cos(s)-1)^{-3} \end{array}\end{equation*} \bc C.E._{2nd\,Deriv}=-\frac{5}{8}\,(-{\frac{32}{5}}\,{\lambda}^{2}(\lambda-1)^{2}({s}^{2}-3)(\cos(s))^{7}\\ +{\frac{32}{5}}\,({\lambda}^{2}{s}^{2}-\frac{3}{2}\,{\lambda}^{2}+3\,\lambda\,\sin(s)s+\lambda\,{s}^{2}-3\,\lambda+{s}^{2}-\frac{3}{2})\lambda\,(\lambda-1)^{2}(\cos(s))^{6}\\ -\frac{8}{5}\,(10\,s({\lambda}^{2}+1+\frac{6}{5}\,\lambda)\lambda\,\sin(s)+{\lambda}^{4}{s}^{2}+(4\,{s}^{2}-6){\lambda}^{3}+(18-6\,{s}^{2}){\lambda}^{2}\\ +(4\,{s}^{2}-6)\lambda+{s}^{2})(\lambda-1)^{2}(\cos(s))^{5}+(16\,({\lambda}^{2}-\frac{7}{5}\,\lambda+1)s\lambda\,(\lambda-1)^{2}\sin(s)\\ 
+\frac{8}{5}\,{\lambda}^{6}{s}^{2}+({\frac{72}{5}}-16\,{s}^{2}){\lambda}^{5}+({\frac{88}{5}}\,{s}^{2}+{\frac{12}{5}}){\lambda}^{4}+(-{\frac{168}{5}}-{\frac{32}{5}}\,{s}^{2}+4\,{s}^{4}){\lambda}^{3}\\ +({\frac{88}{5}}\,{s}^{2}+{\frac{12}{5}}){\lambda}^{2}+({\frac{72}{5}}-16\,{s}^{2})\lambda+\frac{8}{5}\,{s}^{2})(\cos(s))^{4}\\ -4\,(-{\frac{26}{5}}\,({\lambda}^{2}+{\frac{25}{26}}\,\lambda+1)s\lambda\,\sin(s)-\frac{4}{5}\,{\lambda}^{4}{s}^{2}+({\frac{39}{10}}-{\frac{16}{5}}\,{s}^{2}){\lambda}^{3}+({s}^{4}-\frac{9}{5}){\lambda}^{2}\\ +({\frac{39}{10}}-{\frac{16}{5}}\,{s}^{2})\lambda-\frac{4}{5}\,{s}^{2})(\lambda-1)^{2}(\cos(s))^{3}+(-{\frac{102}{5}}\,({\lambda}^{2}-\frac{2}{17}\,\lambda+1)s\lambda\,\\ (\lambda-1)^{2}\sin(s)-{\frac{16}{5}}\,{\lambda}^{6}{s}^{2}+({\frac{64}{5}}\,{s}^{2}-{\frac{18}{5}}+{s}^{4}){\lambda}^{5}+(-8\,{s}^{4}-16\,{s}^{2}-{\frac{24}{5}}){\lambda}^{4}\\ +({\frac{84}{5}}+{\frac{64}{5}}\,{s}^{2}+6\,{s}^{4}){\lambda}^{3}+(-8\,{s}^{4}-16\,{s}^{2}-{\frac{24}{5}}){\lambda}^{2}+({\frac{64}{5}}\,{s}^{2}-{\frac{18}{5}}+{s}^{4})\lambda\\ -{\frac{16}{5}}\,{s}^{2})(\cos(s))^{2}+2\,(-\frac{9}{5}\,({\lambda}^{2}-\frac{4}{9}\,\lambda+1)s\lambda\,\sin(s)-\frac{4}{5}\,{\lambda}^{4}{s}^{2}\\ +(-{\frac{16}{5}}\,{s}^{2}+3+{s}^{4}){\lambda}^{3}+(-\frac{8}{5}\,{s}^{2}+\frac{6}{5}){\lambda}^{2}+(-{\frac{16}{5}}\,{s}^{2}+3+{s}^{4})\lambda\\ -\frac{4}{5}\,{s}^{2})(\lambda-1)^{2}\cos(s)+{\frac{16}{5}}\,s({\lambda}^{2}-\frac{1}{2}\,\lambda+1)\lambda\,(\lambda-1)^{2}\sin(s)\\ +(1+{\lambda}^{2})(\frac{8}{5}\,{\lambda}^{4}{s}^{2}+({s}^{4}-\frac{6}{5}-{\frac{16}{5}}\,{s}^{2}){\lambda}^{3}+({\frac{12}{5}}+{\frac{16}{5}}\,{s}^{2}){\lambda}^{2}\\ +({s}^{4}-\frac{6}{5}-{\frac{16}{5}}\,{s}^{2})\lambda+\frac{8}{5}\,{s}^{2}))(1+{\lambda}^{2}-2\,\lambda\,\cos(s)){s}^{-2}\\ (\cos(s)+1)^{-2}(\cos(s)-1)^{-3} \end{array}\end{equation*} \bc C.E._{3rd\,Deriv}=-(1+{\lambda}^{2}-2\,\lambda\,\cos(s))((-{\frac{88}{3}}\,{s}^{2}+32){\lambda}^{3}(\cos(s))^{7}\\ 
-8\,(\lambda\,s({s}^{2}-6)\sin(s)+(4-{\frac{31}{6}}\,{s}^{2}){\lambda}^{2}+3\,\lambda\,{s}^{2}+4-{\frac{31}{6}}\,{s}^{2}){\lambda}^{2}\\ (\cos(s))^{6}+12\,(s((-5+{s}^{2}){\lambda}^{2}+(2-\frac{2}{3}\,{s}^{2})\lambda-5+{s}^{2})\lambda\,\sin(s)\\ +(-{\frac{13}{9}}\,{s}^{2}+\frac{2}{3}){\lambda}^{4}+\frac{8}{3}\,{\lambda}^{3}{s}^{2}+({\frac{7}{9}}\,{s}^{2}-\frac{8}{3}){\lambda}^{2}+\frac{8}{3}\,\lambda\,{s}^{2}-{\frac{13}{9}}\,{s}^{2}+\frac{2}{3})\lambda\,(\cos(s))^{5}\\ -6\,(s(({s}^{2}-3){\lambda}^{4}+(-2\,{s}^{2}+4){\lambda}^{3}+(\frac{2}{3}\,{s}^{2}+2){\lambda}^{2}+(-2\,{s}^{2}+4)\lambda\\ -3+{s}^{2})\sin(s)+\frac{5}{3}\,{\lambda}^{4}{s}^{2}+(-8+{\frac{31}{3}}\,{s}^{2}){\lambda}^{3}+(-\frac{2}{3}-{\frac{11}{9}}\,{s}^{2}){\lambda}^{2}\\ +(-8+{\frac{31}{3}}\,{s}^{2})\lambda+\frac{5}{3}\,{s}^{2})\lambda\,(\cos(s))^{4}+(s({\lambda}^{6}{s}^{2}+(6-6\,{s}^{2}){\lambda}^{5}+(66-9\,{s}^{2}){\lambda}^{4}\\ +(-4\,{s}^{2}-3){\lambda}^{3}+(66-9\,{s}^{2}){\lambda}^{2}+(6-6\,{s}^{2})\lambda+{s}^{2})\sin(s)+(30\,{s}^{2}-12){\lambda}^{5}\\ +(-{\frac{245}{6}}\,{s}^{2}-4){\lambda}^{4}+(-8+{\frac{145}{3}}\,{s}^{2}){\lambda}^{3}+(-{\frac{245}{6}}\,{s}^{2}-4){\lambda}^{2}+(30\,{s}^{2}-12)\lambda)\\ (\cos(s))^{3}+(s({\lambda}^{6}{s}^{2}+(-21+6\,{s}^{2}){\lambda}^{5}+(-9\,{s}^{2}+{\frac{27}{2}}){\lambda}^{4}+(12\,{s}^{2}-39){\lambda}^{3}\\ +(-9\,{s}^{2}+{\frac{27}{2}}){\lambda}^{2}+(-21+6\,{s}^{2})\lambda+{s}^{2})\sin(s)+({\frac{157}{12}}\,{s}^{2}+1){\lambda}^{5}\\ +({\frac{44}{3}}\,{s}^{2}-16){\lambda}^{4}+({\frac{173}{6}}\,{s}^{2}-2){\lambda}^{3}+({\frac{44}{3}}\,{s}^{2}-16){\lambda}^{2}+({\frac{157}{12}}\,{s}^{2}+1)\lambda)(\cos(s))^{2}\\ +(-s({\lambda}^{6}{s}^{2}+(3-6\,{s}^{2}){\lambda}^{5}+(3\,{s}^{2}+9){\lambda}^{4}+(-12\,{s}^{2}+3){\lambda}^{3}+(3\,{s}^{2}+9){\lambda}^{2}\\ +(3-6\,{s}^{2})\lambda+{s}^{2})\sin(s)+(4-{\frac{149}{12}}\,{s}^{2}){\lambda}^{5}+({\frac{29}{6}}\,{s}^{2}+4){\lambda}^{4}+(8-{\frac{161}{6}}\,{s}^{2}){\lambda}^{3}\\ 
+({\frac{29}{6}}\,{s}^{2}+4){\lambda}^{2}+(4-{\frac{149}{12}}\,{s}^{2})\lambda)\cos(s)-({\lambda}^{4}{s}^{2}-{\frac{15}{4}}\,{\lambda}^{3}\\ +(\frac{3}{2}+2\,{s}^{2}){\lambda}^{2}-{\frac{15}{4}}\,\lambda+{s}^{2})s(1+{\lambda}^{2})\sin(s)+(-1-{\frac{25}{12}}\,{s}^{2}){\lambda}^{5}+5\,{\lambda}^{4}{s}^{2}\\ +(-2-{\frac{37}{6}}\,{s}^{2}){\lambda}^{3}+5\,{\lambda}^{2}{s}^{2}+(-1-{\frac{25}{12}}\,{s}^{2})\lambda){s}^{-3}(\cos(s)+1)^{-1}(\sin(s))^{-3}
\end{array}\end{equation*}
From the characteristic equations we evaluate $s_0$ and the interval of periodicity $[0,s_0^2]$. These are given below:
\begin{itemize}
\item $s_0=0.754$ ($[0,0.569]$) for the classical method
\item $s_0=0.803$ ($[0,0.645]$) for the phase-fitted method
\item $s_0=0.874$ ($[0,0.763]$) for the zero $PL$ and $PL'$ method
\item $s_0=1.010$ ($[0,1.020]$) for the zero $PL$, $PL'$ and $PL''$ method and
\item $s_0=1.865$ ($[0,3.478]$) for the zero $PL$, $PL'$, $PL''$ and $PL'''$ method.
\end{itemize}
As we can see, by requiring higher derivatives of the phase-lag to vanish, we increase the interval of periodicity, which is a very important property.
\section{Numerical results}
\label{Numerical_results}
\subsection{The problems}
The efficiency of the three newly constructed methods will be measured through the integration of two real initial value problems with oscillating solutions.
\subsubsection{The Schr\"odinger equation}
\label{Intro}
The radial Schr\"{o}dinger equation is given by:
\begin{equation}
\label{Schrodinger}
y''(x) = \left( \frac{l(l+1)}{x^{2}}+V(x)-E \right) y(x)
\end{equation}
\noindent where $\frac{l(l+1)}{x^{2}}$ is the \textit{centrifugal potential}, $V(x)$ is the \textit{potential}, $E$ is the \textit{energy} and $W(x) = \frac{l(l+1)}{x^{2}} + V(x)$ is the \textit{effective potential}. It holds that ${\mathop {\lim} \limits_{x \to \infty}} V(x) = 0$ and therefore ${\mathop {\lim} \limits_{x \to \infty}} W(x) = 0$.
We consider $E>0$ and divide $[0,\infty)$ into subintervals $[a_{i},b_{i}]$ so that on each subinterval $W(x)$ can be considered constant, with value $\overline{W}_{i}$. After this the problem \eqref{Schrodinger} can be approximated by
\begin{equation}
\begin{array}{l}
\label{Schrodinger_simplified}
y''_{i} = (\overline{W} - E)\,y_{i}, \quad\quad \mbox{whose solution is}\\
y_{i}(x) = A_{i}\,\exp{\left(\sqrt{\overline{W}-E}\,x\right)} + B_{i}\,\exp{\left(-\sqrt{\overline{W}-E}\,x\right)}, \\
A_{i},\,B_{i}\,\in {\mathbb R}.
\end{array}
\end{equation}
We will integrate problem \eqref{Schrodinger} with $l=0$ on the interval $[0,15]$ using the well-known Woods-Saxon potential
\begin{eqnarray}
\label{Woods_Saxon}
V(x) = \frac{u_{0}}{1+q} + \frac{u_{1}\,q}{(1+q)^2}, \quad\quad q = \exp{\left(\frac{x-x_{0}}{a}\right)}, \quad \mbox{where}\\
\nonumber u_{0}=-50, \quad a=0.6, \quad x_{0}=7 \quad \mbox{and} \quad u_{1}=-\frac{u_{0}}{a}
\end{eqnarray}
\noindent and with boundary condition $y(0)=0$.
\noindent The potential $V(x)$ decays more quickly than $\frac{l\,(l+1)}{x^2}$, so for large $x$ (asymptotic region) the Schr\"{o}dinger equation \eqref{Schrodinger} becomes
\begin{equation}
\label{Schrodinger_reduced}
y''(x) = \left( \frac{l(l+1)}{x^{2}}-E \right) y(x)
\end{equation}
\noindent The last equation has two linearly independent solutions $k\,x\,j_{l}(k\,x)$ and\\ $k\,x\,n_{l}(k\,x)$, where $j_{l}$ and $n_{l}$ are the \textit{spherical Bessel} and \textit{Neumann} functions.
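To make the setup concrete, here is a small numerical sketch (ours, not from the original text) evaluating the Woods--Saxon potential \eqref{Woods_Saxon} with the stated parameters. It confirms that $V(x)$ is essentially the well depth $u_0=-50$ near the origin and has practically vanished at the right end of the integration interval $[0,15]$.

```python
from math import exp

# Woods-Saxon potential with the parameters used in the experiments.
u0, a, x0 = -50.0, 0.6, 7.0
u1 = -u0 / a

def V(x):
    q = exp((x - x0) / a)
    return u0 / (1.0 + q) + u1 * q / (1.0 + q) ** 2

# Near x = 0 the potential is close to the well depth u0 = -50,
# while at x = 15 it is already negligible.
print(V(0.0), V(15.0))
```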
When $x \rightarrow \infty$ the solution takes the asymptotic form
\begin{equation}
\label{asymptotic_solution}
\begin{array}{l}
y(x) \approx A\,k\,x\,j_{l}(k\,x) - B\,k\,x\,n_{l}(k\,x) \\
\approx D[\sin(k\,x - \pi\,l/2) + \tan(\delta_{l})\,\cos{(k\,x - \pi\,l/2)}],
\end{array}
\end{equation}
\noindent where $\delta_{l}$ is called the \textit{scattering phase shift} and is given by the following expression:
\begin{equation}
\tan{(\delta_{l})} = \frac{y(x_{i})\,S(x_{i+1}) - y(x_{i+1})\,S(x_{i})} {y(x_{i+1})\,C(x_{i}) - y(x_{i})\,C(x_{i+1})},
\end{equation}
\noindent where $S(x)=k\,x\,j_{l}(k\,x)$, $C(x)=k\,x\,n_{l}(k\,x)$ and $x_{i}<x_{i+1}$, both belonging to the asymptotic region. Given the energy we approximate the phase shift, the accurate value of which is $\pi/2$ for the above problem. We will use three different values for the energy:
\begin{itemize}
\item $E_1=989.701916$
\item $E_2=341.495874$
\item $E_3=163.215341$
\end{itemize}
As for the frequency $\omega$ we will use the suggestion of Ixaru and Rizea \cite{ix_ri}:
\begin{equation}
\omega = \begin{cases} \sqrt{E-50}, & x\in[0,\,6.5]\\ \sqrt{E}, & x\in[6.5,\,15] \end{cases}
\end{equation}
\subsubsection{The N-Body Problem}
The N-body problem concerns the motion of $N$ bodies under Newton's law of gravity. It is expressed by a system of vector differential equations
\begin{equation}\label{IVP_N_body}
\begin{array}{l}
\displaystyle \overrightarrow{\ddot{y}_i} = G\,\sum\limits_{j=1,\,j\neq i}^{N}{\frac{m_j\,(\overrightarrow{y_j}-\overrightarrow{y_i})}{|\overrightarrow{y_j}-\overrightarrow{y_i}|^3}}, \quad i=1,2,\ldots,N
\end{array}
\end{equation}
\noindent where $G$ is the gravitational constant, $m_j$ is the mass of body $j$ and $\overrightarrow{y_i}$ is the position vector of body $i$. It is easy to see that each vector differential equation of (\ref{IVP_N_body}) can be decomposed into three scalar differential equations, one for each of the directions $x$, $y$, $z$.
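The componentwise evaluation of the right-hand side of \eqref{IVP_N_body} can be sketched as follows (an illustrative helper of ours, not from the original text):

```python
def accelerations(masses, positions, G=1.0):
    """Right-hand side of the N-body system: the acceleration of body i is
    the sum over j != i of G*m_j*(y_j - y_i)/|y_j - y_i|^3, evaluated
    componentwise in the three directions x, y, z."""
    n = len(masses)
    acc = [[0.0, 0.0, 0.0] for _ in range(n)]
    for i in range(n):
        for j in range(n):
            if j == i:
                continue
            d = [positions[j][k] - positions[i][k] for k in range(3)]
            r3 = sum(c * c for c in d) ** 1.5
            for k in range(3):
                acc[i][k] += G * masses[j] * d[k] / r3
    return acc

# Two unit masses one unit apart: each is pulled toward the other
# with unit acceleration along the x direction.
a = accelerations([1.0, 1.0], [[0.0, 0.0, 0.0], [1.0, 0.0, 0.0]])
```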
So $\overrightarrow{y_j}-\overrightarrow{y_i}$ expresses the difference between the coordinates of bodies $j$ and $i$ for the corresponding direction, while $|\overrightarrow{y_j}-\overrightarrow{y_i}|$ represents the distance between bodies $i$ and $j$.

The above system of ODEs cannot be solved analytically. Instead we produce a highly accurate numerical solution by using a 10-stage implicit Runge-Kutta method of Gauss of 20th algebraic order, which is also symplectic and A-stable. The method can be easily reproduced using simplifying assumptions for the order conditions (see \cite{butcher}).

The reference solution is obtained by using the previous method to integrate the N-body problem for a specific time-span and for different step-lengths. In order to find the step-length $h_{opt}$ that gives the best approximation, we have to keep in mind that the total error of a numerical method that integrates a system of ODEs consists of the truncation error of the method and the roundoff error of all computations. As $h$ decreases, the global truncation error of the method tends to zero, while the opposite happens to the roundoff error, which grows without bound.

If $y_{acc}$ is the analytical solution for a specific time-span of the problem, then let $\epsilon_n=||\overline{y}_{h_n}-y_{acc}||$ and $\varepsilon_n=||\overline{y}_{h_{n+1}}-\overline{y}_{h_{n}}||$, where $\overline{y}_{h_n}$ is the approximate solution of $y$ using a step-length $h_n$. $\epsilon_n$ represents the actual error of the approximation and $\varepsilon_n$ is the best known approximation to the actual error, being the difference of two approximations with different step-lengths. We see that as $h_n \rightarrow h_{opt}$, we have $\epsilon_n \rightarrow \epsilon_{min}$ and $\varepsilon_n \rightarrow \varepsilon_{min}$.
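The step-length selection just described can be sketched as follows (helper names are ours, not from the original text): among a sequence of trial step-lengths, pick the one whose successive-difference error estimate $\varepsilon_n$ is smallest.

```python
def select_h_opt(step_lengths, approximations):
    """Given approximations y_h for a sequence of step-lengths, return the
    step-length minimizing eps_n = |y_{h_{n+1}} - y_{h_n}|, the best
    available estimate of the actual error."""
    eps = [abs(approximations[n + 1] - approximations[n])
           for n in range(len(approximations) - 1)]
    n_min = min(range(len(eps)), key=eps.__getitem__)
    return step_lengths[n_min + 1]

# Synthetic example: a method whose error behaves like h^2 (no roundoff
# modelled), so the smallest trial step-length is selected.
hs = [0.4, 0.2, 0.1, 0.05]
ys = [1.0 + h * h for h in hs]
```

In practice the roundoff contribution makes $\varepsilon_n$ increase again below some step-length, which is what singles out an interior $h_{opt}$.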
The minimum values of the errors $\epsilon_{min}$ and $\varepsilon_{min}$ are positive numbers and depend on the software that is used for the integration and the computer system that it runs on. We can also see that $\epsilon_n$ and $\varepsilon_n$ have similar behavior around $h_{opt}$, meaning that they increase and decrease simultaneously. Accordingly, we select the step-length $h_{opt}$ that minimizes $\varepsilon_n$, which is easily computed for every $h_n$.

In \cite{hairer} the data for the five outer planet problem is given. This system consists of the sun and the five most distant planets of the solar system. In Table \ref{table_five_outer_planets} we can see the masses, the initial position components and the initial velocity components of the six bodies. Masses are relative to the sun, so that the sun has mass 1. In the computations the sun and the four inner planets are considered one body, so its mass is larger than one. Distances are in astronomical units, time is in Earth days and the gravitational constant is $G=2.95912208286 \cdot 10^{-4}$.
\begin{equation}
\begin{array}{c|c|c|c}
\label{table_five_outer_planets}
Planet & Mass & Initial\; Position & Initial\; Velocity \\ \hline
Sun & 1.00000597682 & 0 & 0 \\
& & 0 & 0 \\
& & 0 & 0 \\ \hline
Jupiter & 0.000954786104043 & -3.5023653 & ~~0.00565429 \\
& & -3.8169847 & -0.00412490 \\
& & -1.5507963 & -0.00190589 \\ \hline
Saturn & 0.000285583733151 & ~~9.0755314 & ~~0.00168318 \\
& & -3.0458353 & ~~0.00483525 \\
& & -1.6483708 & ~~0.00192462 \\ \hline
Uranus & 0.0000437273164546 & ~~8.3101420 & ~~0.00354178 \\
& & -16.2901086 & ~~0.00137102 \\
& & -7.2521278 & ~~0.00055029 \\ \hline
Neptune & 0.0000517759138449 & ~~11.4707666 & ~~0.00288930 \\
& & -25.7294829 & ~~0.00114527 \\
& & -10.8169456 & ~~0.00039677 \\ \hline
Pluto & 1/(1.3\cdot10^8) & -15.5387357 & ~~0.00276725 \\
& & -25.2225594 & -0.00170702 \\
& & -3.1902382 & -0.00136504 \\
\end{array}
\end{equation}
The system of equations (\ref{IVP_N_body}) has been solved for $t\in[0,10^6]$, a time-span for which the previously mentioned method of Gauss produces a solution accurate to $10.5$ decimal digits. We have used $\omega=0.00145044732989$, which is the dominant frequency of the problem, as evaluated by the square root of the spectral radius of the matrix $A$, when the problem is expressed in the form $y''=Ay+B$.
\subsection{The methods}
\begin{itemize}
\item The classical method developed by Quinlan and Tremaine \cite{qt8}
\item The phase-fitted method developed by Panopoulos, Anastassi and Simos \cite{panopoulos_match}
\item The zero $PL$ and $PL'$ method developed here
\item The zero $PL$, $PL'$ and $PL''$ method developed here
\item The zero $PL$, $PL'$, $PL''$ and $PL'''$ method developed here
\end{itemize}
\subsection{Comparison}
We present the accuracy of the methods, expressed by $-\log_{10}$(error at the end point), versus $\log_{10}$(total steps).
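The plotted quantities can be computed as in the following sketch (helper names are ours, not from the original text):

```python
from math import log10

def efficiency_point(total_steps, end_point_error):
    """Coordinates used in the efficiency plots:
    x = log10(total steps), y = -log10(error at the end point),
    so higher curves mean more correct digits for the same work."""
    return log10(total_steps), -log10(end_point_error)

x, y = efficiency_point(10**4, 1e-8)  # 10^4 steps, 8 correct digits
```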
In Figures \ref{fig_res_989}, \ref{fig_res_341} and \ref{fig_res_163} we present the efficiency of the methods for the Schr\"odinger equation using a value for the energy equal to i) $989.701916$, ii) $341.495874$ and iii) $163.215341$. Also, in Figure \ref{fig_nbody} we present the efficiency for the N-body problem, in particular the five outer planet problem. \begin{figure} \caption{Efficiency for the Schr\"odinger equation using E = 989.701916} \label{fig_res_989} \end{figure} \begin{figure} \caption{Efficiency for the Schr\"odinger equation using E = 341.495874} \label{fig_res_341} \end{figure} \begin{figure} \caption{Efficiency for the Schr\"odinger equation using E = 163.215341} \label{fig_res_163} \end{figure} \begin{figure} \caption{Efficiency for the N-body problem} \label{fig_nbody} \end{figure} We see that for each successive derivative of the phase-lag nullified, we gain in efficiency for both IVPs tested here. \section{Conclusions} We have developed three new optimized eight-step symmetric methods with zero phase-lag and derivatives. We showed that the more derivatives of the phase-lag are nullified, the larger the interval of periodicity and the higher the efficiency of the method. This is the case for both problems tested here. Also, the local truncation error analysis shows the relation of the error to the energy, revealing the importance of nullifying phase-lag derivatives when integrating the Schr\"odinger equation, especially for high values of the energy. \end{document}
It is a well-established heuristic that most orbits for Bianchi 8 and Bianchi 9 cosmologies are at late times well approximated by a sequence of Bianchi 2 orbits. I will use this heuristic to construct an approximate Poincare-map $\Phi_0$ and will show that this map approximates the actual return-map in the C^0-norm. Since the Bianchi ODE is polynomial, we can complexify it and will prove that (in a reasonable domain) the same estimates hold, thus yielding $C^\infty$-estimates via Cauchy's integral formula. I will present (and roughly sketch the proof of) two corollaries of these estimates, which can be summarised by the following: 1. Stable foliation. For almost every base-point, there is an analytic (in the interior) codimension 1 stable manifold attached. 2. Positive measure. The union of these attached stable manifolds has positive measure. We present a notion of entropy for domains in Euclidean space which modifies Perelman's entropy introduced for closed Riemannian manifolds. We'll discuss basic properties of this quantity and explain how it is related to control of local volume ratios for the domains under consideration and how it may prevent local volume collapse for families of evolving domains. If time permits we will also show a natural connection between the entropy and Harnack inequalities for the backward heat equation. We introduce the MAP-kinase cascade, a pattern of chemical reactions which is an important element of many signaling pathways in cells. This process may be modelled in different ways. A common feature of models/numerical simulations/experiments is the existence of periodic orbits due to relaxation oscillation. We will present a rigorous proof of this phenomenon for a model with feedback control, due to Gedeon and Sontag; express some critique of their model and results; and finally discuss the model we propose to study. I will report on an example by Ninomiya et al. 
where an ode system with a global attractor combined with a diffusion term surprisingly gives rise to blow-up solutions. This phenomenon is reminiscent of the Turing instability, where the diffusion destabilizes a stable ode equilibrium. I will discuss the blow-up behaviour of the one-dimensional heat equation with quadratic nonlinearity in complex time. The first talk will give an introduction to the problems of: 1. Existence and uniqueness of solutions 2. Global behaviour 3. Analytic continuation beyond the blow-up time. We define the Ricci flow on R^n and show that certain warped product solutions of Ricci flow are equivalent to solutions of a system of PDEs. Next we explain how one can get estimates for geometric quantities by means of a maximum principle. We sketch what collapsing of such a warped product solution means. Finally we indicate how one can obtain Gaussian estimates for Ricci flow by considering the example of the heat equation on R^n. We consider elliptic differential-difference equations with several nonnegative difference operators. The interest in such operators is due to their fundamentally new properties compared with even strongly elliptic differential-difference operators as well as due to applications of the obtained results to certain nonlocal problems arising in the plasma theory. We obtain a priori estimates of solutions. In addition, using these estimates, we can show that the considered operator is sectorial, construct its Friedrichs extension, and prove a theorem on the smoothness of solutions. I will give an introduction to the theory of Optimal Transport. This notion defines a natural metric - the Wasserstein distance - in the space of probability measures. I will show that certain types of PDEs can be viewed as gradient flows with respect to that metric and how this interpretation can be used to establish estimates for convergence rates in these equations. 
We consider one of the possible generalizations of the notion of shadowing property to actions of nonabelian groups. We consider how the classical shadowing lemma can be generalized for this case. An important example will be the Baumslag-Solitar group. The BKL-Conjecture states that the approach to the initial singularity is vacuum-dominated, local and oscillatory. The highly symmetric Bianchi cosmologies play an important role in this BKL-picture, as they are believed to capture the essential dynamics of more general solutions. A detailed study of Takens Linearization Theorem and the Non-Resonance-Conditions leads us to a new result in Bianchi class A: We are able to show, for the first time, that for admissible periodic heteroclinic chains in Bianchi IX there exist C1-stable manifolds of orbits that follow these chains towards the big bang. We also study Bianchi models of class B, where no rigorous results exist to date. We find an example for a periodic heteroclinic chain that allows Takens Linearization at all base points and give some arguments why it qualifies as a candidate for proving the first rigorous convergence theorem in class B. We conclude with an outlook on future research on the chaotic dynamics of the Einstein Equations towards the big bang - in order to shed a little more light on our "tumbling universe" at birth. Chimera states are coherence-incoherence patterns observed in homogeneous discrete oscillatory media with non-local coupling. Despite their nontrivial dynamical nature, such patterns can be effectively analyzed in terms of the thermodynamic limit formalism. In particular, using the statistical physics concept of local mean field and the Ott-Antonsen invariant manifold reduction, one can explain typical bifurcation scenarios leading to the appearance of chimera states and provide their reasonable classification. 
In Bianchi 8 and 9 cosmologies, estimates for the transit near the Kasner circle of equilibria are essential for questions regarding long-time dynamics. In the fall of 2012, I presented new estimates on the transit in a complexified version of the Bianchi differential equations (thus allowing one to obtain estimates on derivatives by Cauchy integrals). This talk will focus on the technical details of the proof of these estimates, using a perturbative ansatz, elementary but lengthy estimates on various integral operators and a variant of Schauder's fixed point theorem. In this talk, I will briefly explain phenomena such as the Belousov-Zhabotinsky reaction to show how spiral patterns arise and why they are important and interesting. Then we may ask: What are the corresponding mathematical models for spiral patterns? The most intuitive way is perhaps the reaction-diffusion systems. I will show how to derive the systems from the special Euclidean symmetry SE(2). Another approach is the kinematic model. It regards a spiral as a curvature flow along the normal direction of a given planar curve. At last, I will focus on the kinematic model and show how to prove the existence of rotating spirals. Many processes in living cells are controlled by biochemical substances regulating active stresses. The cytoplasm is an active material with both viscoelastic and liquid properties. First, we incorporate the active stress into a two-phase model of the cytoplasm which accounts for the spatiotemporal dynamics of the cytoskeleton and the cytosol. The cytoskeleton is described as a solid matrix that together with the cytosol as interstitial fluid constitutes a poroelastic material. We find different forms of mechanochemical waves including traveling, standing and rotating waves by employing linear stability analysis and numerical simulations in one and two spatial dimensions. 
In a second step, we expand the chemo-mechanical model in order to model the manifold contraction patterns observed experimentally in protoplasmic droplets of Physarum polycephalum. To achieve this, we combine a biophysically realistic model of a calcium oscillator with the poroelastic model derived in the first part of the talk and assume that the active tension is regulated by calcium. With the help of two-dimensional simulations the model is shown to reproduce the contraction patterns observed in protoplasmic droplets as well as a number of other traveling and standing wave patterns. Joint work with Markus Radszuweit (PTB, TU Berlin), S. Alonso (PTB), H. Engel (TU Berlin).
# Boundary value problems and their solution using finite difference methods Finite difference methods are a class of numerical methods used to approximate the solution of differential equations. They are based on the idea of discretizing the domain and approximating the derivatives using finite differences. Consider the following boundary value problem: $$\frac{d^2y}{dx^2} = f(x, y), \quad a \le x \le b$$ $$y(a) = y_a, \quad y(b) = y_b$$ We can discretize the domain into $N$ intervals and approximate the second derivative using finite differences: $$\frac{y_{i+1} - 2y_i + y_{i-1}}{(\Delta x)^2} \approx f(x_i, y_i)$$ ## Exercise Solve the following boundary value problem using finite difference methods: $$\frac{d^2y}{dx^2} = -y, \quad 0 \le x \le 1$$ $$y(0) = 1, \quad y(1) = 0$$ To solve the boundary value problem, we first need to discretize the domain into $N$ intervals and approximate the second derivative using finite differences. Then, we can use an iterative method, such as the Jacobi method, to solve the resulting system of linear equations. 
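A minimal sketch of this procedure for the exercise above ($y'' = -y$, $y(0)=1$, $y(1)=0$) might look as follows. The central-difference equations $(y_{i-1} - 2y_i + y_{i+1})/h^2 = -y_i$ rearrange to $y_i = (y_{i-1} + y_{i+1})/(2 - h^2)$, which is exactly the Jacobi update used below; the function name and grid choices are ours.

```cpp
#include <cmath>
#include <vector>

// Solve y'' = -y on [0,1] with y(0) = 1, y(1) = 0 using central differences
// and Jacobi iteration on the resulting linear system.
std::vector<double> solve_bvp_jacobi(int n, int iterations) {
    double h = 1.0 / n;                 // n intervals, n + 1 grid points
    std::vector<double> y(n + 1, 0.0);
    y[0] = 1.0;                         // boundary conditions
    y[n] = 0.0;
    for (int it = 0; it < iterations; ++it) {
        std::vector<double> y_new = y;  // Jacobi: update from the old iterate
        for (int i = 1; i < n; ++i)
            y_new[i] = (y[i - 1] + y[i + 1]) / (2.0 - h * h);
        y = y_new;
    }
    return y;
}
```

The exact solution is $y(x) = \sin(1-x)/\sin(1)$, so the converged grid values can be checked directly against it; the remaining discrepancy is the $O(h^2)$ discretization error.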
# Euler method for solving ordinary differential equations The Euler method can be implemented in C++ as follows: ```cpp #include <iostream> #include <cmath> double euler_method(double y0, double t0, double tf, double dt, double (*f)(double, double)) { double t = t0; double y = y0; while (t < tf) { y += f(t, y) * dt; t += dt; } return y; } ``` ## Exercise Solve the following ordinary differential equation using the Euler method: $$\frac{dy}{dt} = -y, \quad y(0) = 1$$ # Finite difference methods for solving systems of ordinary differential equations Finite difference methods can be implemented in C++ as follows: ```cpp #include <iostream> #include <cmath> #include <vector> std::vector<double> finite_difference_method(std::vector<double> y0, double t0, double tf, double dt, std::vector<double> (*f)(double, std::vector<double>)) { double t = t0; std::vector<double> y = y0; while (t < tf) { std::vector<double> dy = f(t, y); for (int i = 0; i < y.size(); i++) { y[i] += dy[i] * dt; } t += dt; } return y; } ``` ## Exercise Solve the following system of ordinary differential equations using finite difference methods: $$\frac{dy_1}{dt} = -y_1, \quad y_1(0) = 1$$ $$\frac{dy_2}{dt} = -y_2, \quad y_2(0) = 0$$ # Introduction to the Runge-Kutta method The classic Runge-Kutta method can be implemented in C++ as follows: ```cpp #include <iostream> #include <cmath> double runge_kutta_method(double y0, double t0, double tf, double dt, double (*f)(double, double)) { double t = t0; double y = y0; while (t < tf) { double k1 = f(t, y); double k2 = f(t + dt/2, y + k1 * dt/2); double k3 = f(t + dt/2, y + k2 * dt/2); double k4 = f(t + dt, y + k3 * dt); y += (k1 + 2*k2 + 2*k3 + k4) * dt / 6; t += dt; } return y; } ``` ## Exercise Solve the following ordinary differential equation using the Runge-Kutta method: $$\frac{dy}{dt} = -y, \quad y(0) = 1$$ # The classic Runge-Kutta method and its implementation in C++ The classic Runge-Kutta method can be implemented in C++ as follows: ```cpp 
#include <iostream> #include <cmath> double runge_kutta_method(double y0, double t0, double tf, double dt, double (*f)(double, double)) { double t = t0; double y = y0; while (t < tf) { double k1 = f(t, y); double k2 = f(t + dt/2, y + k1 * dt/2); double k3 = f(t + dt/2, y + k2 * dt/2); double k4 = f(t + dt, y + k3 * dt); y += (k1 + 2*k2 + 2*k3 + k4) * dt / 6; t += dt; } return y; } ``` ## Exercise Solve the following ordinary differential equation using the classic Runge-Kutta method: $$\frac{dy}{dt} = -y, \quad y(0) = 1$$ # Improved Runge-Kutta methods and their implementation in C++ An example of an improved Runge-Kutta method is the fifth-order formula of the Runge-Kutta-Fehlberg (RKF45) method. Note that a genuine fifth-order scheme requires six stages with carefully chosen coefficients. It can be implemented in C++ as follows: ```cpp #include <iostream> #include <cmath> // Fifth-order formula of the Runge-Kutta-Fehlberg (RKF45) method double runge_kutta_fehlberg_method(double y0, double t0, double tf, double dt, double (*f)(double, double)) { double t = t0; double y = y0; while (t < tf) { double k1 = f(t, y); double k2 = f(t + dt/4, y + dt*k1/4); double k3 = f(t + 3*dt/8, y + dt*(3*k1 + 9*k2)/32); double k4 = f(t + 12*dt/13, y + dt*(1932*k1 - 7200*k2 + 7296*k3)/2197); double k5 = f(t + dt, y + dt*(439*k1/216 - 8*k2 + 3680*k3/513 - 845*k4/4104)); double k6 = f(t + dt/2, y + dt*(-8*k1/27 + 2*k2 - 3544*k3/2565 + 1859*k4/4104 - 11*k5/40)); y += dt*(16*k1/135 + 6656*k3/12825 + 28561*k4/56430 - 9*k5/50 + 2*k6/55); t += dt; } return y; } ``` ## Exercise Solve the following ordinary differential equation using the fifth-order Runge-Kutta-Fehlberg method: $$\frac{dy}{dt} = -y, \quad y(0) = 1$$ # Solving nonlinear ordinary differential equations using finite differences Finite difference methods can be implemented in C++ as follows: ```cpp #include <iostream> #include <cmath> #include <vector> std::vector<double> finite_difference_method(std::vector<double> y0, double t0, double tf, double dt, std::vector<double> (*f)(double, std::vector<double>)) { double t = t0; std::vector<double> y = y0; while (t < tf) { std::vector<double> dy = f(t, y); for (int i = 0; i < y.size(); i++) { y[i] += dy[i] * dt; } t += dt; } return y; } ``` ## Exercise Solve the following nonlinear ordinary differential equation using finite difference methods: $$\frac{dy}{dt} = -y^2, \quad y(0) = 
1$$ # Solving partial differential equations using finite difference methods For a partial differential equation, finite differences are first used to discretize the spatial derivatives (the method of lines), which turns the PDE into a system of ordinary differential equations; that system can then be advanced in time with the driver below: ```cpp #include <iostream> #include <cmath> #include <vector> std::vector<double> finite_difference_method(std::vector<double> y0, double t0, double tf, double dt, std::vector<double> (*f)(double, std::vector<double>)) { double t = t0; std::vector<double> y = y0; while (t < tf) { std::vector<double> dy = f(t, y); for (int i = 0; i < y.size(); i++) { y[i] += dy[i] * dt; } t += dt; } return y; } ``` ## Exercise Solve the following partial differential equation using finite difference methods, discretizing the spatial derivative on a uniform grid: $$\frac{\partial y}{\partial t} = \frac{\partial^2y}{\partial x^2}, \quad y(x, 0) = \sin(\pi x), \quad y(0, t) = y(1, t) = 0$$ # Implementing numerical methods for ordinary differential equations in C++ Numerical methods can be implemented in C++ as follows: ```cpp #include <iostream> #include <cmath> double numerical_method(double y0, double t0, double tf, double dt, double (*f)(double, double)) { double t = t0; double y = y0; while (t < tf) { // Implement the numerical method here t += dt; } return y; } ``` ## Exercise Implement the Euler method and the Runge-Kutta method in C++ and use them to solve the following ordinary differential equation: $$\frac{dy}{dt} = -y, \quad y(0) = 1$$ # Advanced numerical methods for solving differential equations in C++ Advanced numerical methods can be implemented in C++ as follows: ```cpp #include <iostream> #include <cmath> double advanced_numerical_method(double y0, double t0, double tf, double dt, double (*f)(double, double)) { double t = t0; double y = y0; while (t < tf) { // Implement the advanced numerical method here t += dt; } return y; } ``` ## Exercise Implement the fifth-order Runge-Kutta method in C++ and use it to solve the following ordinary differential equation: $$\frac{dy}{dt} = -y, \quad y(0) = 1$$ # Applications of numerical methods for solving differential equations Applications of numerical methods for solving differential equations can be found 
in many areas, such as: - Physics: solving equations of motion, fluid dynamics, and heat transfer. - Engineering: designing structures, analyzing stress, and simulating systems. - Economics: modeling growth, inflation, and financial markets. ## Exercise Discuss the applications of numerical methods for solving differential equations in the field of physics. Numerical methods for solving differential equations have numerous applications in physics. Some examples include: - Solving the equation of motion for a projectile: $$\frac{d^2y}{dt^2} = -g, \quad y(0) = 0, \quad y'(0) = v$$ - Modeling fluid flow: $$\frac{\partial^2u}{\partial x^2} + \frac{\partial^2u}{\partial y^2} = 0$$ - Simulating heat transfer: $$\frac{\partial T}{\partial t} = \alpha \left(\frac{\partial^2T}{\partial x^2} + \frac{\partial^2T}{\partial y^2}\right)$$
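As a minimal sketch of the first physics application above, the projectile equation $\frac{d^2y}{dt^2} = -g$ can be reduced to the first-order system $y' = u$, $u' = -g$ and integrated with the explicit Euler scheme used throughout this text. The helper name below is ours; the exact solution $y(t) = vt - \frac{1}{2}gt^2$ serves as a check.

```cpp
#include <cmath>

// Projectile height under constant gravity, integrated with explicit Euler
// applied to the system y' = u, u' = -g (y(0) = 0, y'(0) = v).
double projectile_height_euler(double v, double g, double tf, double dt) {
    double t = 0.0, y = 0.0, u = v;
    while (t < tf) {
        y += u * dt;    // advance position with the current velocity
        u += -g * dt;   // advance velocity with the constant acceleration
        t += dt;
    }
    return y;
}
```

With $v = 10$, $g = 9.81$ and a small step, the result approaches the exact value $y(1) = 10 - 4.905 = 5.095$, with the $O(\Delta t)$ error typical of the Euler method.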
\begin{document} \title{Nontautological Bielliptic Cycles} \begin{abstract} Let $[\overline{\mathcal{B}}_{2,0,20}]$ and $[\mathcal{B}_{2,0,20}]$ be the classes of the loci of stable resp. smooth bielliptic curves with~$20$ marked points where the bielliptic involution acts on the marked points as the permutation $(1\; 2)...(19\; 20)$. Graber and Pandharipande proved in \cite{Graber2001} that these classes are nontautological. In this note we show that their result can be extended to prove that $[\overline{\mathcal{B}}_{g}]$ is nontautological for $g\geq 12$ and that $[\mathcal{B}_{12}]$ is nontautological. \end{abstract} \section{Introduction} The system of \emph{tautological rings} $\{ R^\bullet(\overline{\mathcal{M}}_{g,n})\}$ is defined to be the minimal system of $\mathbb{Q}$-subalgebras of the Chow rings $A^\bullet(\overline{\mathcal{M}}_{g,n})$ closed under pushforward (and hence pullback) along the natural gluing and forgetful morphisms \begin{align*} \overline{\mathcal{M}}_{g_1,n_1+1}\times \overline{\mathcal{M}}_{g_2,n_2+1}& \longrightarrow \overline{\mathcal{M}}_{g_1+g_2,n_1+n_2},\\ \overline{\mathcal{M}}_{g,n+2}& \longrightarrow \overline{\mathcal{M}}_{g+1,n},\\ \overline{\mathcal{M}}_{g,n+1}&\longrightarrow \overline{\mathcal{M}}_{g,n}. \end{align*} The tautological ring $R^\bullet(\mathcal{M}_{g,n})$ of the moduli space of smooth curves is the image of~$R^\bullet(\overline{\mathcal{M}}_{g,n})$ under the localization morphism $A^\bullet(\overline{\mathcal{M}}_{g,n})\rightarrow A^\bullet(\mathcal{M}_{g,n})$. We will denote by $RH^{2\bullet}(\overline{\mathcal{M}}_{g,n})$ the image of $R^\bullet(\overline{\mathcal{M}}_{g,n})$ under the cycle map $A^\bullet(\overline{\mathcal{M}}_{g,n})\rightarrow H^{2\bullet}(\overline{\mathcal{M}}_{g,n})$ and define $RH^{2\bullet}(\mathcal{M}_{g,n})$ accordingly. We say a cohomology class is \emph{tautological} if it lies in the tautological subring of its cohomology ring, otherwise we say it is \emph{nontautological}. 
In this note we will work over $\mathbb{C}$ and all Chow and cohomology rings are assumed to be taken with rational coefficients. These tautological rings are relatively well understood. An additive set of generators for the groups $R^\bullet(\overline{\mathcal{M}}_{g,n})$ is given by decorated boundary strata and there exists an algorithm for computing the intersection product (see \cite{Graber2001}). The class of many ``geometrically defined'' loci can be shown to be tautological, for example this is the case for the class of the locus $\overline{\mathcal{H}}_g$ of hyperelliptic curves in $\overline{\mathcal{M}}_g$ (see \cite[Theorem 1]{Faber2005}). Any odd cohomology class of $\overline{\mathcal{M}}_{g,n}$ is nontautological by definition. Deligne proved that $H^{11}(\overline{\mathcal{M}}_{1,11})\neq 0$, thus providing a first example of the existence of nontautological classes. In fact it is known that $H^\bullet(\overline{\mathcal{M}}_{0,n})=RH^\bullet(\overline{\mathcal{M}}_{0,n})$ (see \cite{Keel}) and that $H^{2\bullet}(\overline{\mathcal{M}}_{1,n})=RH^{2\bullet}(\overline{\mathcal{M}}_{1,n})$ for all $n$ (see \cite[Corollary 1.2]{petersen2014}). Examples of geometrically defined loci which can be proven to be nontautological are still relatively scarce. In \cite{Graber2001} Graber and Pandharipande hunt for algebraic classes in $H^{2\bullet}(\overline{\mathcal{M}}_{g,n})$ and $H^{2\bullet}(\mathcal{M}_{g,n})$ which are nontautological. In particular they show that the classes of the loci~$\overline{\mathcal{B}}_{2,0,20}$ and $\mathcal{B}_{2,0,20}$ of stable resp. smooth bielliptic curves of genus 2 with 20 marked points where the bielliptic involution acts on the set of marked points as the permutation $(1\; 2)...(19\; 20)$ are nontautological. 
They also show that for sufficiently high odd genus $h$ the class of the locus of stable curves of genus $2h$ admitting a map to a curve of genus $h$ is nontautological in $H^{2\bullet}(\overline{\mathcal{M}}_{2h})$. Their result relies on the existence of odd cohomology in $H^\bullet(\overline{\mathcal{M}}_{h,1})$ which has been proven to exist in \cite{pikaart1994} for all $h\geq 8069$. A recent survey of different methods of obtaining nontautological classes can be found in \cite{faber2013}. In this note we prove the following two new results. \begin{theorem}\label{thmain} The cohomology class $[\overline{\mathcal{B}}_{g,n,2m}]$ is nontautological for all $g+m\geq 12$, $0\leq n \leq 2g-2$ and $g\geq 2$. \end{theorem} \begin{theorem}\label{thmainop} The cohomology class $[\mathcal{B}_{g,0,2m}]$ is nontautological when $g+m=12$ and $g\geq 2$. \end{theorem} With Theorem \ref{thmain} we improve the genus for which algebraic nontautological classes on $\overline{\mathcal{M}}_{g}$ are known to exist from 16138 to 12. As far as the author is aware, Theorem \ref{thmainop} provides the first example of a nontautological algebraic class on $\mathcal{M}_g$. \section{Preliminaries} Admissible double covers were introduced to compactify moduli spaces of double covers of smooth curves, let us recall the definition: \begin{definition} Let $(S,x_1,...,x_k,y_1,...,y_{2m})$ be a stable pointed curve of arithmetic genus $g$. 
An \emph{admissible double cover} is the data of a stable pointed curve $(T,x'_1,...,x'_{k},y'_1,...,y'_m)$ of arithmetic genus $g'$ and a 2-to-1 map $f\colon S\rightarrow T$ satisfying the following conditions: \begin{itemize} \item the restriction to the smooth locus $f^{\text{sm}}\colon S^{\text{sm}}\rightarrow T^{\text{sm}}$ is branched exactly at the points $x'_1,...,x'_k$ and the inverse image of $x'_i$ is $x_i$ for all $i=1,...,k$, \item the inverse image of $y'_i$ under $f$ is $\{y_{2i-1},y_{2i}\}$, \item the image under $f$ of each node is a node. \end{itemize} We call $S$ the \emph{source} curve and $T$ the \emph{target} curve of the admissible cover. An \emph{admissible hyperelliptic structure} on $S$ is an admissible cover where $g'=0$ and an \emph{admissible bielliptic structure} on $S$ is an admissible cover with $g'=1$. Note that the admissible double cover $S\rightarrow T$ induces an involution on $S$ fixing the points $x_1,...,x_k$ and permuting the points $y_1,...,y_{2m}$ pairwise. \end{definition} \noindent One can define families of admissible double covers and isomorphisms between them (see \cite[Section 4]{acv}). By using the Riemann-Hurwitz formula and by induction on the number of nodes we can deduce that the number $k$ in the above definition equals $2g+2-4g'$. We denote the moduli stack of admissible bielliptic covers with $2m$ marked points switched by the involution by $\B^{\textup{Adm}}_{g,2m}$. When $m=0$ we simply write $\B^{\textup{Adm}}_g$. A natural target map and source map from each moduli space of admissible double covers can be defined as follows. The target map is a finite surjective map which sends each admissible cover to the target stable pointed curve $(T,x'_1,...,x'_{k},y'_1,...,y'_m)\in \overline{\mathcal{M}}_{g',k+m}$. From the properness of $\overline{\mathcal{M}}_{g',k+m}$ we deduce that the space of such admissible covers is proper. The dimension of the space of such admissible double covers equals $2g-g'-1+m$. 
In the bielliptic case we get \begin{align*} \dim \B^{\textup{Adm}}_{g,2m}&= 2g-2+m. \end{align*} The source map forgets all the structure of an admissible double cover except for \[(S,x_1,...,x_{k},y_1,...,y_{2m})\in \overline{\mathcal{M}}_{g,k+2m}.\] In the bielliptic case this gives a map $\B^{\textup{Adm}}_{g,2m}\rightarrow \overline{\mathcal{M}}_{g,2g-2+2m}$. We can compose this map with a composition of forgetful maps $\overline{\mathcal{M}}_{g,2g-2+2m} \rightarrow \overline{\mathcal{M}}_{g,n+2m}$ which forgets the first $2g-2-n$ points (which therefore correspond to the first $2g-2-n$ ramification points of the admissible bielliptic covers) and stabilizes. We denote by $\overline{\mathcal{B}}_{g,n,2m}$ the image substack of $\B^{\textup{Adm}}_{g,2m}$ in $\overline{\mathcal{M}}_{g,n+2m}$. The above discussion can be summarized in the following diagram: \begin{center} \begin{tikzcd} \B^{\textup{Adm}}_{g,2m}\arrow{d}\arrow{r} & \overline{\mathcal{B}}_{g,n,2m} \arrow[hook]{r} &\overline{\mathcal{M}}_{g,n+2m}\\ \overline{\mathcal{M}}_{1,2g-2+m} \end{tikzcd}. \end{center} The moduli stack $\mathcal{B}^{\text{Adm}}_{g,2m}$ is the open dense substack of $\B^{\textup{Adm}}_{g,2m}$ of admissible bielliptic covers of smooth curves and we denote its image stack in $\mathcal{M}_{g,n+2m}$ by $\mathcal{B}_{g,n,2m}$. We have well defined Chow classes \begin{align*} [\overline{\mathcal{B}}_{g,n,2m}]\in A^{g-1+n+m}(\overline{\mathcal{M}}_{g,n+2m})\\ [\mathcal{B}_{g,n,2m}]\in A^{g-1+n+m}(\mathcal{M}_{g,n+2m}). \end{align*} We will abuse notation and also denote the image of these classes in the respective cohomology rings by $[\overline{\mathcal{B}}_{g,n,2m}]$ and $[\mathcal{B}_{g,n,2m}]$. 
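As a sanity check (our addition, following the dimension counts above), the codimension of these classes can be verified directly: since the source map is generically finite onto its image,
\begin{align*}
\operatorname{codim} \overline{\mathcal{B}}_{g,n,2m} &= \dim \overline{\mathcal{M}}_{g,n+2m} - \dim \B^{\textup{Adm}}_{g,2m}\\
&= (3g-3+n+2m)-(2g-2+m) = g-1+n+m,
\end{align*}
in agreement with the degree $g-1+n+m$ of the Chow classes above.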
In a completely analogous way, we can define spaces of admissible hyperelliptic covers $\overline{\mathcal{H}}^{\textup{Adm}}_{g,2m}$ and the loci $\overline{\mathcal{H}}_{g,n,2m}$ and $\mathcal{H}_{g,n,2m}$ in $\overline{\mathcal{M}}_{g,n+2m}$ and $\mathcal{M}_{g,n+2m}$ for all~$0\leq n \leq 2g+2$. \begin{notation} We will denote by $\overline{\mathcal{M}}_{g,n}^D$ (resp. $\mathcal{M}^D_{g,n}$) the moduli stack parameterizing trivial \'{e}tale double covers \[ f\colon (C_1;y_{1,1},...,y_{n,1})\cup (C_2; y_{1,2},...,y_{n,2}) \rightarrow (C;y_{1},...,y_n) \] mapping two isomorphic stable (resp. smooth) curves $(C_1;y_{1,1},...,y_{n,1})\simeq (C_2; y_{1,2},...,y_{n,2})$ to a curve $(C;y_{1},...,y_n) \simeq (C_1;y_{1,1},...,y_{n,1})$ such that $f^{-1}(y_i)=(y_{i,1},y_{i,2})$. \end{notation} \noindent Our proof of Theorem \ref{thmain} relies on the following result for pullbacks along gluing morphisms. \begin{proposition}[{\cite[Proposition 1]{Graber2001}}] \label{prop:Kunneth} Let $\xi\colon \overline{\mathcal{M}}_{g_1,n_1+1}\times \overline{\mathcal{M}}_{g_2,n_2+1} \rightarrow \overline{\mathcal{M}}_{g_1+g_2,n_1+n_2}$ be the gluing morphism and let $\gamma \in RH^{\bullet}(\overline{\mathcal{M}}_{g_1+g_2,n_1+n_2})$, then \[ \xi^*(\gamma)\in RH^{\bullet}(\overline{\mathcal{M}}_{g_1,n_1+1})\otimes RH^{\bullet}(\overline{\mathcal{M}}_{g_2,n_2+1}). \] \end{proposition} We say that a cycle $\lambda\in H^\bullet (\overline{\mathcal{M}}_{g_1,n_1})\otimes H^\bullet(\overline{\mathcal{M}}_{g_2,n_2})$ \emph{admits a tautological K\"{u}nneth decomposition} if $\lambda \in RH^{\bullet}(\overline{\mathcal{M}}_{g_1,n_1})\otimes RH^{\bullet}(\overline{\mathcal{M}}_{g_2,n_2})$. \section{Proof of Theorem \ref{thmain} and \ref{thmainop}} We are now ready to prove Theorem \ref{thmain}. We start by proving the following weaker result. 
\begin{proposition}\label{easyprop} We have \[ [\overline{\mathcal{B}}_{g,0,2m}]\not\in RH^\bullet(\overline{\mathcal{M}}_{g,2m}) \] for $g+m=12$ and $g\geq 2$. \end{proposition} \begin{proof} Let $\iota_1\colon\mathcal{M}_{1,11}\times \mathcal{M}_{1,11} \rightarrow \overline{\mathcal{M}}_{1,11}\times \overline{\mathcal{M}}_{1,11}$ be the inclusion and $\iota_2 \colon\overline{\mathcal{M}}_{1,11}\times \overline{\mathcal{M}}_{1,11}\rightarrow \overline{\mathcal{M}}_{g,2m}$ the gluing morphism which glues the corresponding first $g-1$ points of the two factors and orders the remaining points by sending the $k$'th marked point of the first curve to $2k-1$ and the $k$'th marked point of the second curve to $2k$. Let $\iota$ be the composition $\iota_2 \circ \iota_1$ and let $\Delta$ resp. $\Delta_o$ be the diagonal of $\overline{\mathcal{M}}_{1,11}\times \overline{\mathcal{M}}_{1,11}$ resp. $\mathcal{M}_{1,11}\times \mathcal{M}_{1,11}$ so that $\iota_1^*([\Delta])=[\Delta_o]$. In Lemma \ref{lem:pullback} we will prove that $\iota^*([\overline{\mathcal{B}}_{g,0,2m}])= \alpha [\Delta_o]$ for some $\alpha\in \mathbb{Q}_{> 0}$. Let $\partial(\overline{\mathcal{M}}_{1,11}\times \overline{\mathcal{M}}_{1,11}):=((\partial \overline{\mathcal{M}}_{1,11})\times \overline{\mathcal{M}}_{1,11}) \cup (\overline{\mathcal{M}}_{1,11} \times (\partial \overline{\mathcal{M}}_{1,11}))$. Since the sequence \begin{center} \begin{tikzcd} A^{10}(\partial (\overline{\mathcal{M}}_{1,11}\times \overline{\mathcal{M}}_{1,11})) \arrow{r} & A^{11} (\overline{\mathcal{M}}_{1,11}\times \overline{\mathcal{M}}_{1,11}) \arrow{r}{\iota_1^*}& A^{11}(\mathcal{M}_{1,11}\times \mathcal{M}_{1,11}) \arrow{r} & 0 \end{tikzcd} \end{center} is exact there exists a class $B\in A^{10}(\partial (\overline{\mathcal{M}}_{1,11}\times \overline{\mathcal{M}}_{1,11}))$ such that $\iota_2^*([\overline{\mathcal{B}}_{g,0,2m}]) = \alpha [\Delta] + B$. 
The class $B$ admits a tautological K\"{u}nneth decomposition by Lemma \ref{lem:CohM11}.\ref{point2}. Given a basis $\{ e_i\}_{i\in I}$ for $H^\bullet(\overline{\mathcal{M}}_{1,11})$ with dual basis $\{\hat{e}_i\}_{i\in I}$ the cohomology class of the diagonal can be written as \[ [\Delta]=\sum_{i\in I} (-1)^{\deg e_i} e_i \otimes \hat{e_i}. \] In particular since $H^{11}(\overline{\mathcal{M}}_{1,11})\neq 0$ the diagonal $[\Delta]$ does not admit a tautological K\"{u}nneth decomposition. Since the pullback of a tautological class along a (composition of) gluing morphisms admits a tautological K\"{u}nneth decomposition by Proposition \ref{prop:Kunneth}, this shows that~$[\overline{\mathcal{B}}_{g,0,2m}]$ is nontautological. \end{proof} \begin{lemma}\label{lem:pullback} Consider the composition of gluing morphisms $\iota\colon \mathcal{M}_{1,11}\times \mathcal{M}_{1,11} \rightarrow \overline{\mathcal{M}}_{g,2m}$ defined above. We have $\iota^*(\overline{\mathcal{B}}_{g,2m})= \alpha [\Delta_o]$ for some $\alpha \in \mathbb{Q}_{>0}$. \end{lemma} \begin{proof} Consider the fiber diagram \begin{equation*} \begin{tikzcd} F \arrow{r} \arrow{d} &\B^{\textup{Adm}}_{g,2m}\arrow{d}{\phi}\\ \mathcal{M}_{1,11}\times \mathcal{M}_{1,11} \arrow{r}{\iota}& \overline{\mathcal{M}}_{g,2m} \end{tikzcd} \end{equation*} We will describe the fiber product $F$, or rather the push forward of its class to $\mathcal{M}_{1,11}\times \mathcal{M}_{1,11}$. Consider the moduli stack $\mathcal{M}_{1,11}^D$, there is a closed embedding $\mathcal{M}_{1,11}^D\rightarrow \mathcal{M}_{1,11}\times \mathcal{M}_{1,11}$, $(C_1\cup C_2\rightarrow C) \mapsto (C_1,C_2)$ with image the diagonal $\Delta_o$. 
We define a map $\eta\colon\mathcal{M}_{1,11}^D \rightarrow \B^{\textup{Adm}}_{g,2m}$ as follows: on the source curves~$\eta$ attaches rational bridges $R_i$ between the corresponding marked points $y_{i,1}$ of $C_1$ and~$y_{i,2}$ of $C_2$ for all $1\leq i \leq g-1$, and on the target curve it attaches a rational curve $R_i'$ with two marked points to the corresponding marked point~$y_i$ of~$C$. The trivial double cover $C_1\cup C_2\rightarrow C$ then induces an admissible double cover \[ \left(C_1\cup C_2 \cup \bigcup_{i=1}^{g-1}R_i\, ;\, y_{g,1},y_{g,2},...,y_{11,1},y_{11,2}\right) \longrightarrow \left(C \cup \bigcup_{i=1}^{g-1}R'_i\, ; \, y_g,...,y_{11}\right), \] branched at the marked points of each $R'_i$, which maps each pair of marked points $y_{i,1}$, $y_{i,2}$ of $C_1\cup C_2 \cup \bigcup_{i=1}^{g-1}R_i$ to the corresponding marked point $y_i$ of $C \cup \bigcup_{i=1}^{g-1}R'_i$. By the universal property of fiber products we get a map $\mathcal{M}_{1,11}^D\rightarrow F$. We claim that the composition $\mathcal{M}_{1,11}^D\rightarrow F\rightarrow F^{\text{red}}$ is a finite\footnote{As in \cite[Definition 1.8]{Vistoli1989}.} surjective morphism. The map $F\rightarrow \mathcal{M}_{1,11} \times \mathcal{M}_{1,11}$ is proper since properness is stable under base extension, and the map~$\mathcal{M}_{1,11}^D\rightarrow \mathcal{M}_{1,11}\times \mathcal{M}_{1,11}$ is proper because $\overline{\mathcal{M}}_{1,11}^D\rightarrow \overline{\mathcal{M}}_{1,11}\times \overline{\mathcal{M}}_{1,11}$ is proper. It follows that~$\mathcal{M}_{1,11}^D\rightarrow F$ is proper. Since the map $\mathcal{M}_{1,11}^D\rightarrow \mathcal{M}_{1,11}\times \mathcal{M}_{1,11}$ is quasifinite, so is $\mathcal{M}_{1,11}^D\rightarrow F$. Since $\mathcal{M}_{1,11}^D\rightarrow F^{\text{red}}$ is proper and quasifinite and $F^{\text{red}}$ is of finite type (and reduced), it remains to check that this map induces a surjection on closed points.
By definition an object of $F$ over $\spec \mathbb{C}$ consists of a curve $\tilde{C}:=(\tilde{C}_1,\tilde{C}_2)\in \mathcal{M}_{1,11}\times \mathcal{M}_{1,11}( \mathbb{C})$, an object $(S\rightarrow T)\in \B^{\textup{Adm}}_{g,2m}( \mathbb{C})$ and an isomorphism $\gamma\colon \iota(\tilde{C})\xrightarrow{\sim} \phi(S\rightarrow T)$. To prove the claim we will show that $(\tilde{C},(S\rightarrow T),\gamma)$ is isomorphic to an object in the image of $\mathcal{M}_{1,11}^D( \mathbb{C})$. Let~$f\colon\tilde{C}_1\cup \tilde{C}_2 \rightarrow \iota(\tilde{C})$ be the map of curves induced by $\iota$, set $C:=\iota(\tilde{C})$, $C_1:=f(\tilde{C}_1)$ and~$C_2:=f(\tilde{C}_2)$, let $\tau$ be the involution on $C$ induced by the bielliptic involution of~$S\rightarrow T$ and let~$Q_i$ be the node of $C$ corresponding to the $i$-th marking of~$\tilde{C}_1$ and $\tilde{C}_2$. Since $C_1$ and $C_2$ are smooth there are two possibilities for the action of $\tau$ on~$C$: either it fixes $C_1$ and $C_2$, or it switches the whole of $C_1$ with the whole of $C_2$. Suppose $\tau$ fixes~$C_1$ and~$C_2$. By construction the involution $\tau$ maps marked points lying on $C_1$ to marked points lying on $C_2$, so this is only possible if $C$ has no marked points at all. In this case $\tau$ must fix the different strands of $C$ at each~$Q_i$. If the inverse image of $Q_i$ in $S$ were to be a rational bridge $R_i$, then this rational bridge would have two marked ramification points which are not nodes, but this would imply that~$\tau$ switches the nodes on the rational bridge and therefore switches the strands of~$C$ at~$Q_i$. It follows that the inverse image of each $Q_i$ in $S$ is a single node $\hat{Q}_i$. Since $C_1$ and $C_2$ are smooth, $\tau$ induces an involution on the set of nodes $\{\hat{Q}_1,...,\hat{Q}_{11}\}$. We can thus find distinct $\hat{Q}_i$, $\hat{Q}_j\neq \tau(\hat{Q}_i)$ such that~$S- \{\hat{Q}_i,\tau(\hat{Q}_i),\hat{Q}_j,\tau(\hat{Q}_j)\}$ is connected.
But this means that there are at least two nodes~$P_i$ and $P_j$ of $T$ such that $T-\{P_i,P_j\}$ is connected. This would imply that the arithmetic genus of~$T$ is at least 2, which is a contradiction. We can therefore assume that $\tau$ maps $C_1$ to $C_2$. Let us first suppose that $\tau$ does not fix all nodes, so there exist distinct $i$, $j$ such that $\tau(Q_i)=Q_j$. Let $P$ be the image of $\{ Q_i,Q_j\}$ under the bielliptic map. As before we see that $T\backslash \{P\}$ is connected and therefore has arithmetic genus 0 (since by assumption the arithmetic genus of $T$ is $1$). However, the arithmetic genus of $C_1\backslash \{Q_i,Q_j\}$ is 1 and the bielliptic map restricts to an isomorphism $C_1\backslash \{Q_i,Q_j\}\rightarrow T\backslash \{P\}$, which is a contradiction. We have thus proven that $\tau$ switches the components $C_1$ and $C_2$ and fixes the nodes $Q_i$, which implies that $((\tilde{C}_1,\tilde{C}_2), (S\rightarrow T),\gamma)$ is isomorphic to an object in the image of $\mathcal{M}_{1,11}^D( \mathbb{C})$. This proves that the map $\mathcal{M}_{1,11}^D\rightarrow F^{\text{red}}$ is surjective. It follows that the pushforward of $\Delta_o$ to ${\mathcal{M}_{1,11}\times \mathcal{M}_{1,11}}$ equals the pushforward of $F$ to $\mathcal{M}_{1,11}\times\mathcal{M}_{1,11}$ up to a scalar. Since \[\codim_{\mathcal{M}_{1,11}\times \mathcal{M}_{1,11}}\Delta_o=11 = \codim_{\overline{\mathcal{M}}_{g,2m}}\B^{\textup{Adm}}_{g,2m}\] we see that $\iota^*([\overline{\mathcal{B}}_{g,0,2m}])=\alpha [\Delta_o]$ for some $\alpha\in \mathbb{Q}_{>0}$ (using $g+m=12$).
\end{proof} \begin{lemma}\label{lem:CohM11} \begin{enumerate}[i] \item Every algebraic class of codimension $11$ in $\overline{\mathcal{M}}_{1,11} \times \overline{\mathcal{M}}_{1,11}$ supported on $\partial (\overline{\mathcal{M}}_{1,11}\times \overline{\mathcal{M}}_{1,11})$ admits a tautological K\"{u}nneth decomposition.\label{point2} \item Every algebraic class on $\overline{\mathcal{M}}_{1,11}\times \overline{\mathcal{M}}_{1,11}$ of complex codimension less than 11 admits a tautological K\"{u}nneth decomposition.\label{point1} \end{enumerate} \end{lemma} \begin{proof} This is a slightly weaker version of \cite[Lemma 3]{Graber2001}; the proof given there required that $RH^{2\bullet} (\overline{\mathcal{M}}_{1,n})=H^{2\bullet}(\overline{\mathcal{M}}_{1,n})$ and that $H^k(\overline{\mathcal{M}}_{1,n})=0$ for odd $k$ and $n<11$, for which there was no reference at the time of \cite{Graber2001}. The first condition is \cite[Corollary 1.2]{petersen2014}. The second condition follows from Getzler's computations for $n<11$ in \cite{Getzler1998}. \end{proof} \begin{prgrph} We have now concluded the proof of Proposition \ref{easyprop}. To prove Theorem \ref{thmain} it remains to show that $[\overline{\mathcal{B}}_{g,n,2m}]$ is nontautological for all $n$, $g$, $m$ with $g+m\geq 12$. \end{prgrph} \begin{proof}[Proof of Theorem \ref{thmain}.] We will show in Lemmas \ref{lem:add1} and \ref{lem:add2} that if $[\overline{\mathcal{B}}_{g,n,2m}]$ is nontautological then so are $[\overline{\mathcal{B}}_{g,n+1,2m}]$ for $n\leq 2g-3$, and $[\overline{\mathcal{B}}_{g,n,2m+2}]$. In Lemma \ref{lemg+1} we will show that if $[\overline{\mathcal{B}}_{g,1,0}]$ is nontautological then so is $[\overline{\mathcal{B}}_{g+1}]$. Using these statements inductively, with base case the statement of Proposition \ref{easyprop}, we conclude that $[\overline{\mathcal{B}}_{g,n,2m}]$ is nontautological for all $g+m\geq 12$.
\end{proof} \begin{lemma}\label{lem:add1} If $[\overline{\mathcal{B}}_{g,n,2m}]$ is nontautological and $n\leq 2g-3$ then so is $[\overline{\mathcal{B}}_{g,n+1,2m}]$. \end{lemma} \begin{proof} Let $\pi\colon \overline{\mathcal{M}}_{g,n+1+2m}\rightarrow{\overline{\mathcal{M}}}_{g,n+2m}$ be the morphism which forgets the first point and stabilizes. Since $\pi(\overline{\mathcal{B}}_{g,n+1,2m})=\overline{\mathcal{B}}_{g,n,2m}$ and $\dim \overline{\mathcal{B}}_{g,n+1,2m}=\dim \overline{\mathcal{B}}_{g,n,2m}$ we have $\pi_*[\overline{\mathcal{B}}_{g,n+1,2m}]=\alpha [\overline{\mathcal{B}}_{g,n,2m}]$ for some $\alpha\in \mathbb{Q}_{>0}$. Because the pushforward of a tautological class by the forgetful morphism is tautological, the result follows. \end{proof} \begin{lemma}\label{lem:add2} If $[\overline{\mathcal{B}}_{g,n,2m}]$ is nontautological then so is $[\overline{\mathcal{B}}_{g,n,2m+2}]$. \end{lemma} \begin{proof} Suppose $n< 2g-2$. Then, by Lemma \ref{lem:add1}, $[\overline{\mathcal{B}}_{g,n+1,2m}]$ is nontautological. Consider the gluing morphism \[ \sigma\colon \overline{\mathcal{M}}_{g,n+2m+1}\times\overline{\mathcal{M}}_{0,3}\rightarrow \overline{\mathcal{M}}_{g,n+2m+2} \] which glues the first points of both curves together. Then $\sigma^{-1}(\overline{\mathcal{B}}_{g,n,2m+2})=\overline{\mathcal{B}}_{g,n+1,2m}$. Since $\codim_{\overline{\mathcal{M}}_{g,n+2m+2}}\overline{\mathcal{B}}_{g,n,2m+2}=\codim_{\overline{\mathcal{M}}_{g,n+2m+1}}\overline{\mathcal{B}}_{g,n+1,2m}$ it follows that $\sigma^*[\overline{\mathcal{B}}_{g,n,2m+2}]=\alpha [\overline{\mathcal{B}}_{g,n+1,2m}]$ for some $\alpha\in \mathbb{Q}_{>0}$. Since $\sigma$ is a gluing morphism, the pullback of a tautological class along $\sigma$ admits a tautological K\"{u}nneth decomposition, so $[\overline{\mathcal{B}}_{g,n,2m+2}]$ is nontautological.
If $n=2g-2$ we can first prove that $[\overline{\mathcal{B}}_{g,n-1,2m+2}]$ is nontautological in the same way, by pulling back through the map $\overline{\mathcal{M}}_{g,n+2m}\times\overline{\mathcal{M}}_{0,3}\rightarrow \overline{\mathcal{M}}_{g,n+2m+1}$, and then use Lemma \ref{lem:add1}. \end{proof} \begin{lemma}\label{lemg+1} If $[\overline{\mathcal{B}}_{g,1,0}]$ is nontautological then so is $[\overline{\mathcal{B}}_{g+1}]$. \end{lemma} \begin{proof} Let $\epsilon\colon \overline{\mathcal{M}}_{g,1}\times \overline{\mathcal{M}}_{1,1}\rightarrow \overline{\mathcal{M}}_{g+1}$ be the gluing morphism. From the description of the boundary divisors of $\B^{\textup{Adm}}_{g+1}$ (see \cite[pp.~1275--1276]{Pagani2016}) it follows that there exist $\alpha, \beta\in \mathbb{Q}_{> 0}$ such that \[ \epsilon^*[\overline{\mathcal{B}}_{g+1}] = \alpha [\overline{\mathcal{B}}_{g,1,0}\times \overline{\mathcal{M}}_{1,1}] +\beta [\overline{\mathcal{H}}_{g-1,0,2}\times \overline{\mathcal{M}}_{1,1}^D] \in H^\bullet(\overline{\mathcal{M}}_{g,1}\times \overline{\mathcal{M}}_{1,1}). \] The class $[\overline{\mathcal{H}}_{g-1,0,2}\times \overline{\mathcal{M}}_{1,1}^D]$ admits a tautological K\"{u}nneth decomposition (since the class of the hyperelliptic locus is tautological by \cite[Theorem 1]{Faber2005}, and therefore so is its pushforward under a gluing morphism with a tautological class). The class $[\overline{\mathcal{B}}_{g,1,0}\times \overline{\mathcal{M}}_{1,1}]$ does not admit a tautological K\"{u}nneth decomposition by assumption. It follows by Proposition \ref{prop:Kunneth} that $[\overline{\mathcal{B}}_{g+1}]$ is nontautological. \end{proof} \begin{prgrph} We will now prove a similar result for the open locus of $\overline{\mathcal{M}}_{g,2m}$ where $g+m=12$. \end{prgrph} \begin{proof}[Proof of Theorem \ref{thmainop}] The case where $g=2$ is treated in \cite[Section 3]{Graber2001}. We use a similar argument to prove the remaining cases. The proof runs by contradiction.
Suppose $[\mathcal{B}_{g,0,2m}] \in RH^\bullet (\mathcal{M}_{g,2m})$. Then there is a collection of cycles $Z_i$ in $\overline{\mathcal{M}}_{g,2m}$, of complex codimension 11 and supported on $\partial \overline{\mathcal{M}}_{g,2m}$, such that $\sum [Z_i] + [\overline{\mathcal{B}}_{g,0,2m}]$ is a tautological class. Consider again the gluing morphism $\iota_2\colon\overline{\mathcal{M}}_{1,11} \times \overline{\mathcal{M}}_{1,11}\rightarrow \overline{\mathcal{M}}_{g,2m}$ as above. By assumption the pullback of $\sum [Z_i] + [\overline{\mathcal{B}}_{g,0,2m}]$ to $\overline{\mathcal{M}}_{1,11} \times \overline{\mathcal{M}}_{1,11}$ admits a tautological K\"{u}nneth decomposition; since $\iota_2^*([\overline{\mathcal{B}}_{g,0,2m}])$ does not admit one (by the proof of Proposition \ref{easyprop}), the pullback of $\sum [Z_i]$ must not admit a tautological K\"{u}nneth decomposition. We shall use the usual notation that $\Delta_j$ is the locus of curves in $\overline{\mathcal{M}}_{g,2m}$ consisting of two curves, one of which has genus $j$, glued together in a single node, and $\Delta_{\text{irr}}$ is the locus that generically parametrizes irreducible singular curves. Since the curves in $\iota_2(\overline{\mathcal{M}}_{1,11} \times \overline{\mathcal{M}}_{1,11})$ do not have a separating node we see that $\iota_2(\overline{\mathcal{M}}_{1,11}\times \overline{\mathcal{M}}_{1,11})\not\subset \Delta_j$. The preimage \[ \iota_2^{-1}(\Delta_j) \] therefore lies in $\partial (\overline{\mathcal{M}}_{1,11}\times \overline{\mathcal{M}}_{1,11})$. It follows by Lemma \ref{lem:CohM11}.\ref{point2} that $\iota_2^*[Z_i]$ admits a tautological K\"{u}nneth decomposition if $\supp Z_i\subset \Delta_j$. Consider now the $Z_i$ with support inside $\Delta_{\text{irr}}$.
We can decompose the map $\iota_2$ as \begin{center} \begin{tikzcd} \overline{\mathcal{M}}_{1,11}\times \overline{\mathcal{M}}_{1,11} \arrow{r}{\iota_2''} & \overline{\mathcal{M}}_{g-1,2m+2} \arrow{r}{\iota_2'} &\overline{\mathcal{M}}_{g,2m} \end{tikzcd} \end{center} Then there exist cycles $Y_i$ in $\overline{\mathcal{M}}_{g-1,2m+2}$ such that $\iota'_{2*}[Y_i]=[Z_i]$. Now \begin{align*} \iota_2^*[Z_i] &= \iota''^*_2\iota'^*_2[Z_i]\\ &=\iota''^*_2 (c_1(N_{\overline{\mathcal{M}}_{g-1,2m+2}}\overline{\mathcal{M}}_{g,2m})\cap [Y_i]). \end{align*} We see that $\iota^*_2[Z_i]$ decomposes as a product of algebraic classes of codimension less than $11$, which admit tautological K\"{u}nneth decompositions by Lemma \ref{lem:CohM11}.\ref{point1}. We conclude that all the $[Z_i]$ admit tautological K\"{u}nneth decompositions when pulled back to $\overline{\mathcal{M}}_{1,11}\times \overline{\mathcal{M}}_{1,11}$. Therefore $\iota_2^*(\sum [Z_i] + [\overline{\mathcal{B}}_{g,0,2m}])$ does not admit a tautological K\"{u}nneth decomposition, contradicting Proposition \ref{prop:Kunneth}. It follows that $[\mathcal{B}_{g,0,2m}]\not\in RH^\bullet(\mathcal{M}_{g,2m})$. \end{proof} {} J.~van~Zelm, \textsc{Department of Mathematical Sciences, University of Liverpool, Liverpool, L69 7ZL, United Kingdom} \par\nopagebreak \textit{E-mail address}: \texttt{[email protected]} \end{document}
Lone divider The lone divider procedure is a procedure for proportional cake-cutting. It involves a heterogeneous and divisible resource, such as a birthday cake, and n partners with different preferences over different parts of the cake. It allows the n people to divide the cake among them such that each person receives a piece with a value of at least 1/n of the total value according to his own subjective valuation. The procedure was developed by Hugo Steinhaus for n = 3 people.[1] It was later extended by Harold W. Kuhn to n > 3, using the Frobenius–König theorem.[2] A description of the cases n = 3, n = 4 appears in [3]: 31–35 and the general case is described in [4]: 83–87. Description For convenience we normalize the valuations such that the value of the entire cake is n for all agents. The goal is to give each agent a piece with a value of at least 1. Step 1. One player, chosen arbitrarily and called the divider, cuts the cake into n pieces whose value in his/her eyes is exactly 1. Step 2. Each of the other n − 1 partners evaluates the resulting n pieces and says which of these pieces he considers "acceptable", i.e., worth at least 1. The game now proceeds according to these replies. We present first the case n = 3 and then the general case. Steinhaus' procedure for the case n = 3 There are two cases. • Case A: At least one of the non-dividers marks two or more pieces as acceptable. Then, the third partner picks an acceptable piece (by the pigeonhole principle he must have at least one); the second partner picks an acceptable piece (he had at least two before, so at least one remains); and finally the divider picks the last piece (for the divider, all pieces are acceptable). • Case B: Both other partners mark only one piece as acceptable. Then, there is at least one piece that is acceptable only for the divider. The divider takes this piece and goes home.
This piece is worth less than 1 for the remaining two partners, so the remaining two pieces together are worth more than 2 for them. They divide the remainder between them using divide and choose. The procedure for any n There are several ways to describe the general case; the shorter description appears in [5] and is based on the concept of envy-free matching – a matching in which no unmatched agent is adjacent to a matched piece. Step 3. Construct a bipartite graph G = (X + Y, E) in which each vertex in X is an agent, each vertex in Y is a piece, and there is an edge between an agent x and a piece y iff x values y at least 1. Step 4. Find a maximum-cardinality envy-free matching in G. Note that the divider is adjacent to all n pieces, so |NG(X)| = n ≥ |X| (where NG(X) is the set of neighbors of X in Y). Hence, a non-empty envy-free matching exists. Step 5. Give each matched piece to its matched agent. Note that each matched agent receives a piece he values at least 1, and thus goes home happily. Step 6. Recursively divide the remaining cake among the remaining agents. Note that each remaining agent values each piece given away at less than 1, so he values the remaining cake at more than the number of remaining agents, so the precondition for the recursion is satisfied. Query complexity At each iteration, the algorithm asks the lone divider at most n mark queries, and each of the other agents at most n eval queries. There are at most n iterations. Therefore, the total number of queries in the Robertson–Webb query model is O(n²) per agent, and O(n³) overall. This is much more than required for last diminisher (O(n) per agent) and for Even–Paz (O(log n) per agent). See also • For other procedures for solving the same problem, see proportional cake-cutting. • One advantage of lone divider is that it can be modified to yield a symmetric fair cake-cutting procedure. • Fair Division: Method of Lone Divider at Cut-the-Knot. References 1. Steinhaus, Hugo (1948). "The problem of fair division". Econometrica.
16 (1): 101–4. JSTOR 1914289. 2. Kuhn, Harold (1967), "On games of fair division", Essays in Mathematical Economics in Honour of Oskar Morgenstern, Princeton University Press, pp. 29–37, archived from the original on 2019-01-16, retrieved 2019-01-15 3. Brams, Steven J.; Taylor, Alan D. (1996). Fair division: from cake-cutting to dispute resolution. Cambridge University Press. ISBN 0-521-55644-9. 4. Robertson, Jack; Webb, William (1998). Cake-Cutting Algorithms: Be Fair If You Can. Natick, Massachusetts: A. K. Peters. ISBN 978-1-56881-076-8. LCCN 97041258. OL 2730675W. 5. Segal-Halevi, Erel; Aigner-Horev, Elad (2022). "Envy-free matchings in bipartite graphs and their applications to fair division". Information Sciences. 587: 164–187. arXiv:1901.09527. doi:10.1016/j.ins.2021.11.059. S2CID 170079201.
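As an illustration of the three-partner (Steinhaus) version of the procedure, here is a Python sketch. It is ours, not from the sources above: the cake is modeled as the interval [0, 1], each partner's valuation as a piecewise-constant density integrating to 1 (so a piece is "acceptable" when worth at least 1/3), and all function names are our own. The bisection in cut_at stands in for the continuous cuts.

```python
def value(density, intervals):
    """Value of a union of intervals under a piecewise-constant density.

    `density` is a list of (left, right, height) segments covering [0, 1];
    `intervals` is a list of disjoint (a, b) pairs.
    """
    total = 0.0
    for a, b in intervals:
        for left, right, h in density:
            lo, hi = max(a, left), min(b, right)
            if hi > lo:
                total += h * (hi - lo)
    return total

def cut_at(density, intervals, target):
    """Find t so that the part of `intervals` left of t is worth `target`."""
    lo, hi = 0.0, 1.0
    for _ in range(60):  # bisection; value-left-of-t is monotone in t
        mid = (lo + hi) / 2
        left = [(a, min(b, mid)) for a, b in intervals if a < mid]
        if value(density, left) < target:
            lo = mid
        else:
            hi = mid
    return hi

def split_at(intervals, t):
    left = [(a, min(b, t)) for a, b in intervals if a < t]
    right = [(max(a, t), b) for a, b in intervals if b > t]
    return left, right

def divide_and_choose(d_cut, d_choose, intervals):
    """The d_cut agent halves `intervals` by own value; the other chooses."""
    t = cut_at(d_cut, intervals, value(d_cut, intervals) / 2)
    left, right = split_at(intervals, t)
    if value(d_choose, left) >= value(d_choose, right):
        return right, left  # (cutter's share, chooser's share)
    return left, right

def lone_divider3(densities):
    """Steinhaus' lone divider procedure for three agents.

    Returns alloc[i] = list of intervals for agent i; agent 0 is the
    divider.  Each agent gets a share worth at least 1/3 to him.
    """
    EPS = 1e-9
    d0 = densities[0]
    # Step 1: the divider cuts three pieces, each worth exactly 1/3 to him.
    t1 = cut_at(d0, [(0.0, 1.0)], 1 / 3)
    t2 = cut_at(d0, [(0.0, 1.0)], 2 / 3)
    pieces = [[(0.0, t1)], [(t1, t2)], [(t2, 1.0)]]
    # Step 2: the other two agents mark the pieces worth >= 1/3 to them.
    acc = [{j for j in range(3) if value(densities[i], pieces[j]) >= 1 / 3 - EPS}
           for i in (1, 2)]
    alloc = [None, None, None]
    if len(acc[0]) >= 2 or len(acc[1]) >= 2:
        # Case A: the agent with fewer acceptable pieces picks first, the
        # other picks second, and the divider takes the last piece.
        first, second = (1, 2) if len(acc[0]) <= len(acc[1]) else (2, 1)
        p1 = min(acc[first - 1])
        p2 = min(acc[second - 1] - {p1})
        alloc[first], alloc[second] = pieces[p1], pieces[p2]
        alloc[0] = pieces[({0, 1, 2} - {p1, p2}).pop()]
    else:
        # Case B: both mark a single piece, so some piece is rejected by
        # both; the divider takes it, and the remainder is split between
        # the other two by divide and choose.
        j = min(({0, 1, 2} - acc[0]) & ({0, 1, 2} - acc[1]))
        alloc[0] = pieces[j]
        rest = [iv for k in range(3) if k != j for iv in pieces[k]]
        alloc[1], alloc[2] = divide_and_choose(densities[1], densities[2], rest)
    return alloc
```

Running this with a uniform divider and two partners whose densities are concentrated on different thirds of the cake exercises Case B; making partner 1 agree with the divider exercises Case A. In both cases every agent ends up with a share worth at least 1/3 by his own valuation.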
Human Development Index

World map by quartiles of Human Development Index in 2011.

The Human Development Index (HDI) is a composite statistic used to rank countries by level of "human development" and to separate "very high human development", "high human development", "medium human development", and "low human development" countries. The HDI is a comparative measure of life expectancy, literacy, education and standards of living for countries worldwide. It is a standard means of measuring well-being, especially child welfare. It is used to distinguish whether a country is a developed, a developing or an under-developed country, and also to measure the impact of economic policies on quality of life. HDIs for states, cities, villages, etc. are also produced by local organizations or companies.

Origins

The origins of the HDI are found in the annual Human Development Reports of the United Nations Development Programme (UNDP). These were devised and launched by Pakistani economist Mahbub ul Haq in 1990 and had the explicit purpose "to shift the focus of development economics from national income accounting to people centered policies". To produce the Human Development Reports, Mahbub ul Haq brought together a group of well-known development economists including Paul Streeten, Frances Stewart, Gustav Ranis, Keith Griffin, Sudhir Anand and Meghnad Desai. But it was Nobel laureate Amartya Sen's work on capabilities and functionings that provided the underlying conceptual framework.
Haq was sure that a simple composite measure of human development was needed in order to convince the public, academics, and policy-makers that they can and should evaluate development not only by economic advances but also by improvements in human well-being. Sen initially opposed this idea, but he went on to help Haq develop the Human Development Index (HDI). Sen was worried that it was difficult to capture the full complexity of human capabilities in a single index, but Haq persuaded him that only a single number would shift the attention of policy-makers from a concentration on economic measures to human well-being.[1][2] Other organizations and companies also make HD indices with differing formulae and results (see below).

Dimensions and calculation

Starting with the 2010 Human Development Report (published on 4 November 2010 and updated on 10 June 2011), the HDI combines three dimensions:

A long and healthy life: Life expectancy at birth
Access to knowledge: Mean years of schooling and Expected years of schooling
A decent standard of living: GNI per capita (PPP US$)

The HDI combined three dimensions up until its 2010 report:

Life expectancy at birth, as an index of population health and longevity
Knowledge and education, as measured by the adult literacy rate (with two-thirds weighting) and the combined primary, secondary, and tertiary gross enrollment ratio (with one-third weighting).
Standard of living, as indicated by the natural logarithm of gross domestic product per capita at purchasing power parity.

New methodology for 2010 data onwards

2010 Very High HDI nations, by population size

In its 2010 Human Development Report the UNDP began using a new method of calculating the HDI. The following three indices are used:

1. Life Expectancy Index (LEI) <math>= \frac{\textrm{LE} - 20}{63.2}</math>

2.
Education Index (EI) <math>= \frac{\sqrt{\textrm{MYSI} \cdot \textrm{EYSI}}} {0.951}</math>

2.1 Mean Years of Schooling Index (MYSI) <math>= \frac{\textrm{MYS}}{13.2}</math>[3]

2.2 Expected Years of Schooling Index (EYSI) <math>= \frac{\textrm{EYS}}{20.6}</math>[4]

3. Income Index (II) <math>= \frac{\ln(\textrm{GNIpc}) - \ln(163)}{\ln(108,211) - \ln(163)}</math>

Finally, the HDI is the geometric mean of the previous three normalized indices:

<math>\textrm{HDI} = \sqrt[3]{\textrm{LEI}\cdot \textrm{EI} \cdot \textrm{II}}.</math>

LE: Life expectancy at birth
MYS: Mean years of schooling (years that a person aged 25 or older has spent in school)
EYS: Expected years of schooling (years of schooling that a 5-year-old child can expect to receive over his whole life)
GNIpc: Gross national income at purchasing power parity per capita

Methodology used until 2010

HDI trends between 1975 and 2004.

This is the methodology used by the UNDP up until its 2010 report. The formula defining the HDI is promulgated by the United Nations Development Programme (UNDP).[5] In general, to transform a raw variable, say <math>x</math>, into a unit-free index between 0 and 1 (which allows different indices to be added together), the following formula is used:

<math>x\text{-index} = \frac{x - \min\left(x\right)}{\max\left(x\right)-\min\left(x\right)}</math>

where <math>\min\left(x\right)</math> and <math>\max\left(x\right)</math> are the lowest and highest values the variable <math>x</math> can attain, respectively.
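The formulas above can be collected into a short Python sketch (an illustration with our own function names; the constants are the goalposts quoted above, and the example inputs are hypothetical):

```python
from math import log, sqrt

def normalize(x, x_min, x_max):
    """Transform a raw variable into a unit-free index between 0 and 1."""
    return (x - x_min) / (x_max - x_min)

def hdi_2010(le, mys, eys, gni_pc):
    """HDI under the post-2010 methodology: geometric mean of three indices."""
    lei = (le - 20) / 63.2                                     # Life Expectancy Index
    ei = sqrt((mys / 13.2) * (eys / 20.6)) / 0.951             # Education Index
    ii = (log(gni_pc) - log(163)) / (log(108211) - log(163))   # Income Index
    return (lei * ei * ii) ** (1 / 3)

def hdi_pre_2010(le, alr, cger, gdp_pc):
    """HDI under the pre-2010 methodology: uniformly weighted sum of indices."""
    lei = normalize(le, 25, 85)
    ei = (2 / 3) * normalize(alr, 0, 100) + (1 / 3) * normalize(cger, 0, 100)
    gdpi = normalize(log(gdp_pc), log(100), log(40000))  # log base cancels in the ratio
    return (lei + ei + gdpi) / 3
```

For a hypothetical country with a life expectancy of 81.1 years, 12.6 mean and 17.3 expected years of schooling, and a GNI per capita of $47,557, hdi_2010 returns roughly 0.93, i.e. a value in the "very high human development" band.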
The Human Development Index (HDI) then represents the uniformly weighted sum with ⅓ contributed by each of the following factor indices:

Life Expectancy Index = <math>\frac{LE - 25} {85-25}</math>
Education Index = <math>\frac{2} {3} \times ALI + \frac{1} {3} \times GEI</math>
Adult Literacy Index (ALI) = <math>\frac{ALR - 0} {100 - 0}</math>
Gross Enrollment Index (GEI) = <math>\frac{CGER - 0} {100 - 0}</math>
GDP Index = <math>\frac{\log\left(GDPpc\right) - \log\left(100\right)} {\log\left(40000\right) - \log\left(100\right)}</math>

Other organizations and companies may include further indicators, such as a democracy index or population, which produces different HDI values.

2011 report

Main article: List of countries by Human Development Index

The 2011 Human Development Report was released on 2 November 2011, and calculated HDI values based on estimates for 2011. Below is the list of the "Very High Human Development" countries (equal to the top quartile):[6]

Note: The green arrows ( ), red arrows ( ), and blue dashes ( ) represent changes in rank when compared to the recalculated HDI for 2010, published in the 2011 report (p. 131).
Norway 0.943 ( ) Australia 0.929 ( ) Netherlands 0.910 ( ) United States 0.910 ( ) New Zealand 0.908 ( ) Canada 0.908 ( ) Ireland 0.908 ( ) Liechtenstein 0.905 ( ) Germany 0.905 ( ) Sweden 0.904 ( ) Switzerland 0.903 ( ) Japan 0.901 ( ) Hong Kong 0.898 ( 1) Iceland 0.898 ( -1) South Korea 0.897 ( ) Denmark 0.895 ( ) Israel 0.888 ( ) Belgium 0.886 ( ) Austria 0.885 ( ) France 0.884 ( ) Slovenia 0.884 ( ) Finland 0.882 ( ) Spain 0.878 ( ) Italy 0.874 ( ) Luxembourg 0.867 ( ) Singapore 0.866 ( ) Czech Republic 0.865 ( ) United Kingdom 0.863 ( ) Greece 0.861 ( ) United Arab Emirates 0.846 ( ) Cyprus 0.840 ( ) Andorra 0.838 ( ) Brunei 0.838 ( ) Estonia 0.835 ( ) Slovakia 0.834 ( ) Malta 0.832 ( ) Qatar 0.831 ( ) Hungary 0.816 ( ) Poland 0.813 ( ) Lithuania 0.810 ( 1) Portugal 0.809 ( -1) Bahrain 0.806 ( ) Latvia 0.805 ( ) Chile 0.805 ( ) Argentina 0.797 ( 1) Croatia 0.796 ( -1) Barbados 0.793 ( ) Inequality-adjusted HDI Main article: List of countries by inequality-adjusted HDI Below is a list of countries in the top quartile by Inequality-adjusted Human Development Index (IHDI).[7] Note: The green arrows ( ), red arrows ( ), and blue dashes ( ) represent changes in rank when compared to the 2011 HDI list, for countries listed in both rankings. 
Sweden 0.851 ( 5) Netherlands 0.846 ( 1) Iceland 0.845 ( 5) Denmark 0.842 ( 4) Slovenia 0.837 ( 7) Finland 0.833 ( 7) Canada 0.829 ( 7) Czech Republic 0.821 ( 9) Austria 0.820 ( 1) Belgium 0.819 ( 1) Spain 0.799 ( 2) Luxembourg 0.799 ( 3) United Kingdom 0.791 ( 4) Slovakia 0.787 ( 7) Israel 0.779 ( 8) Italy 0.779 ( 2) United States 0.771 ( 19) Estonia 0.769 ( 2) Hungary 0.759 ( 3) Greece 0.756 ( 2) Cyprus 0.755 ( 2) South Korea 0.749 ( 17) Lithuania 0.730 ( ) Portugal 0.726 ( ) Montenegro 0.718 ( 7) Latvia 0.717 ( 1) Serbia 0.694 ( 9) Countries in the top quartile of HDI ("Very high human development" group) with a missing IHDI include: New Zealand, Liechtenstein, Japan, Hong Kong, Singapore, United Arab Emirates, Andorra, Brunei, Malta, Qatar, Bahrain and Barbados. Countries not included Some countries were not included for various reasons, mainly the unavailability of certain crucial data. The following United Nations Member States were not included in the 2011 report:[8] North Korea, Marshall Islands, Monaco, Nauru, San Marino, Somalia and Tuvalu. The 2010 Human Development Report by the United Nations Development Program was released on November 4, 2010, and calculates HDI values based on estimates for 2010. Below is the list of the "Very High Human Development" countries:[9] Note: The green arrows ( ), red arrows ( ), and blue dashes ( ) represent changes in rank when compared to the 2007 HDI published in the 2009 report. 
New Zealand 0.907 ( 17) United States 0.902 ( 9) Liechtenstein 0.891 ( 13) Germany 0.885 ( 12) Japan 0.884 ( 1) Switzerland 0.874 ( 4) France 0.872 ( 6) Israel 0.872 ( 12) Iceland 0.869 ( 14) Luxembourg 0.852 ( 13) Austria 0.851 ( 11) Singapore 0.846 ( 5) Andorra 0.824 ( 2) Slovakia 0.818 ( 11) United Arab Emirates 0.815 ( 3) Malta 0.815 ( 5) Brunei 0.805 ( 7) Qatar 0.803 ( 5) Portugal 0.795 ( 6) Barbados 0.788 ( 5) The 2010 Human Development Report was the first to calculate an Inequality-adjusted Human Development Index (IHDI), which factors in inequalities in the three basic dimensions of human development (income, life expectancy, and education). Below is a list of countries in the top quartile by IHDI:[10] Germany 0.814 ( 3) Ireland 0.813 ( 3) Poland 0.709 ( 1) Romania 0.675 ( 3) The Bahamas 0.671 ( 4) Countries in the top quartile of HDI ("Very high human development" group) with a missing IHDI include: New Zealand, Liechtenstein, Japan, Hong Kong, Singapore, Andorra, United Arab Emirates, Malta, Brunei, Qatar, Bahrain and Barbados. Some countries were not included for various reasons, mainly the unavailability of certain crucial data. The following United Nations Member States were not included in the 2010 report.[11] Cuba lodged a formal protest at its lack of inclusion. The UNDP explained that Cuba had been excluded due to the lack of an "internationally reported figure for Cuba's Gross National Income adjusted for Purchasing Power Parity". All other indicators for Cuba were available, and reported by the UNDP, but the lack of one indicator meant that no ranking could be attributed to the country.[12][13] Non-UN members (not calculated by UNDP) Taiwan 0.868 (Ranked 18th among countries if included).[14] The 2009 Human Development Report by UNDP was released on October 5, 2009, and covers the period up to 2007. It was titled "Overcoming barriers: Human mobility and development". 
The top countries by HDI were grouped in a new category called "Very High Human Development". The report refers to these countries as developed countries.[15] They are: Norway 0.971 ( 1) Australia 0.970 ( 2) Liechtenstein 0.951 ( 1) Kuwait 0.916 ( ) Some countries were not included for various reasons, such as being a non-UN member or unable or unwilling to provide the necessary data at the time of publication. Besides the states with limited recognition, the following states were also not included.

2008 statistical update

A new index was released on December 18, 2008. This so-called "statistical update" covered the period up to 2006 and was published without an accompanying Human Development Report. The update is relevant due to newly released estimates of purchasing power parities (PPP), implying substantial adjustments for many countries, resulting in changes in HDI values and, in many cases, HDI ranks.[16] Iceland 0.968 ( ) New Zealand 0.944 ( 1) South Korea 0.928 ( 1) Kuwait 0.912 ( 4) Bahrain 0.902 ( 9) Some countries were not included for various reasons, such as being a non-UN member, unable, or unwilling to provide the necessary data at the time of publication. Besides the states with limited recognition, the following states were also not included.

2007/2008 report

The Human Development Report for 2007/2008 was launched in Brasilia, Brazil, on November 27, 2007. Its focus was on "Fighting climate change: Human solidarity in a divided world."[17] Most of the data used for the report are derived largely from 2005 or earlier, thus indicating an HDI for 2005. Not all UN member states choose to or are able to provide the necessary statistics. The report showed a small increase in world HDI in comparison with last year's report. This rise was fueled by a general improvement in the developing world, especially in the least developed countries group. This marked improvement at the bottom was offset by a decrease in HDI of high income countries.
An HDI below 0.5 is considered to represent "low development". All 22 countries in that category are located in Africa. The highest-scoring Sub-Saharan countries, Gabon and South Africa, are ranked 119th and 121st, respectively. Nine countries departed from this category this year and joined the "medium development" group. An HDI of 0.8 or more is considered to represent "high development". This includes all developed countries, such as those in North America, Western Europe, Oceania, and Eastern Asia, as well as some developing countries in Eastern Europe, Central and South America, Southeast Asia, the Caribbean, and the oil-rich Arabian Peninsula. Seven countries were promoted to this category this year, leaving the "medium development" group: Albania, Belarus, Brazil, Libya, Macedonia, Russia and Saudi Arabia. In the following table, green arrows (↑) represent an increase in ranking over the previous study, while red arrows (↓) represent a decrease in ranking. They are followed by the number of places moved. Blue dashes (–) represent a nation that did not move in the rankings since the previous study.

Past top countries

The list below displays the top-ranked country from each year of the Human Development Index. Norway has been ranked highest nine times and Canada eight times, followed by Japan, which has been ranked highest three times, and Iceland, which has been ranked highest twice. In each original report, the year represents when the report was published; in parentheses is the year for which the index was calculated. 
2011 (2011) – Norway
2008 (2006) – Iceland
2000 (1998) – Canada
1994 (????) – Canada
1993 (????) – Japan
1991 (1990) – Japan

Future HDI projections

Further information: List of countries by future Human Development Index projections of the United Nations

In April 2010, the Human Development Report Office provided[18] the 2010-2030 HDI projections (quoted in September 2010, by the United Nations Development Programme, in the Human Development Research paper 2010/40, pp. 40–42). These projections were reached by re-calculating the HDI, using (for components of the HDI) projections of the components conducted by agencies that provide the UNDP with data for the HDI. HDI for a sample of 150 countries shows a very high correlation with the logarithm of GDP per capita. The Human Development Index has been criticised on a number of grounds, including: failure to include any ecological considerations; focusing exclusively on national performance and ranking (although many national Human Development Reports, looking at subnational performance, have been published by UNDP and others, so this last claim is untrue); not paying much attention to development from a global perspective; and measurement error in the underlying statistics, together with formula changes by the UNDP, which can lead to severe misclassifications of countries in the categories of 'low', 'medium', 'high' or 'very high' human development.[19] Other authors claimed that the Human Development Reports "have lost touch with their original vision and the index fails to capture the essence of the world it seeks to portray".[20] The index has also been criticized as "redundant" and a "reinvention of the wheel", measuring aspects of development that have already been exhaustively studied.[21][22] The index has further been criticised for having an inappropriate treatment of income, lacking year-to-year comparability, and assessing development differently in different groups of countries.[23] Economist 
Bryan Caplan has criticised the way HDI scores are produced; each of the three components is bounded between zero and one. As a result, rich countries effectively cannot improve their rating (and thus their ranking relative to other countries) in certain categories, even though there is a lot of scope for economic growth and longevity left. "This effectively means that a country of immortals with infinite per-capita GDP would get a score of .666 (lower than South Africa and Tajikistan) if its population were illiterate and never went to school."[24] He argues, "Scandinavia comes out on top according to the HDI because the HDI is basically a measure of how Scandinavian your country is."[24] Economists Hendrik Wolff, Howard Chong and Maximilian Auffhammer discuss the HDI from the perspective of data error in the underlying health, education and income statistics used to construct the HDI.[19] They identify three sources of data error which are due to (i) data updating, (ii) formula revisions and (iii) thresholds to classify a country's development status and find that 11%, 21% and 34% of all countries can be interpreted as currently misclassified in the development bins due to the three sources of data error, respectively. The authors suggest that the United Nations should discontinue the practice of classifying countries into development bins because the cut-off values seem arbitrary, can provide incentives for strategic behavior in reporting official statistics, and have the potential to misguide politicians, investors, charity donors and the public at large who use the HDI. In 2010 the UNDP reacted to the criticism and updated the thresholds to classify nations as low, medium and high human development countries. In a comment to The Economist in early January 2011, the Human Development Report Office responded[25] to a January 6, 2011 article in The Economist[26] which discusses the Wolff et al. paper. 
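Caplan's .666 figure can be reproduced from the pre-2010 formula, in which the HDI was the arithmetic mean of three component indices, each bounded in [0, 1]. A minimal sketch (the function name is illustrative, not an official implementation):

```python
def hdi_pre_2010(health, education, income):
    """Pre-2010 HDI: arithmetic mean of three component indices in [0, 1]."""
    components = (health, education, income)
    if not all(0.0 <= c <= 1.0 for c in components):
        raise ValueError("each component index must lie in [0, 1]")
    return sum(components) / 3.0

# Immortal, infinitely rich, but entirely unschooled: health and income
# indices saturate at 1, while the education index stays at 0.
print(round(hdi_pre_2010(1.0, 0.0, 1.0), 3))  # 0.667
```

Since 2010 the report has used a geometric mean of the components instead, but the boundedness Caplan criticises is unchanged.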
The Human Development Report Office states that they undertook a systematic revision of the methods used for the calculation of the HDI and that the new methodology directly addresses the critique by Wolff et al. in that it generates a system for continuous updating of the human development categories whenever formula or data revisions take place. The following are common criticisms directed at the HDI: that it is a redundant measure that adds little to the value of the individual measures composing it; that it is a means to provide legitimacy to arbitrary weightings of a few aspects of social development; and that it is a number producing a relative ranking which is useless for inter-temporal comparisons and makes it difficult to assess a country's progress or regression, since the HDI for a country in a given year depends on the levels of, say, life expectancy or GDP per capita of other countries in that year.[27][28][29][30] However, each year, UN member states are listed and ranked according to the computed HDI. If high, the rank in the list can be easily used as a means of national aggrandizement; alternatively, if low, it can be used to highlight national insufficiencies. Using the HDI as an absolute index of social welfare, some authors have used panel HDI data to measure the impact of economic policies on quality of life.[31] Ratan Lal Basu criticises the HDI concept from a completely different angle. According to him, the Amartya Sen-Mahbub ul Haq concept of HDI considers that provision of material amenities alone would bring about Human Development, but Basu opines that Human Development in the true sense should embrace both material and moral development. According to him, human development based on the HDI alone is similar to dairy farm economics aimed only at improving dairy farm output. 
To quote: 'So human development effort should not end up in amelioration of material deprivations alone: it must undertake to bring about spiritual and moral development to assist the biped to become truly human.'[32] For example, a high suicide rate would bring the index down. A few authors have proposed alternative indices to address some of the index's shortcomings.[33] However, few of those proposed alternatives to the HDI cover so many countries, and no development index (other than, perhaps, Gross Domestic Product per capita) has been used so extensively, or effectively, in discussions and developmental planning as the HDI. However, there has been one lament about the HDI that has resulted in an alternative index: David Hastings, of the United Nations Economic and Social Commission for Asia and the Pacific, published a report geographically extending the HDI to 230+ economies, whereas the UNDP HDI for 2009 enumerates 182 economies and coverage for the 2010 HDI dropped to 169 countries.[34][35]

See also:

Democracy Index
Gini coefficient
Gender Parity Index
Gender-related Development Index
Gender Empowerment Measure
Genuine Progress Indicator
Legatum Prosperity Index
Living Planet Index
Happy Planet Index
Physical quality-of-life index
Human development (humanity)
American Human Development Report
Child Development Index
Satisfaction with Life Index
Multidimensional Poverty Index
List of countries by Human Development Index
List of countries by inequality-adjusted HDI
List of African countries by Human Development Index
List of Australian states and territories by HDI
List of Argentine provinces by Human Development Index
List of Brazilian states by Human Development Index
List of Chilean regions by Human Development Index
List of Chinese administrative divisions by Human Development Index
List of European countries by Human Development Index
List of Indian states by Human Development Index
List of Latin American countries by Human Development Index
List of Mexican states by Human Development Index
List of Pakistani Districts by Human Development Index
List of Philippine provinces by Human Development Index
List of Russian federal subjects by HDI
List of South African provinces by HDI
List of US states by HDI
List of Venezuelan states by Human Development Index

↑ Fukuda-Parr, Sakiko (2003). "The Human Development Paradigm: operationalizing Sen's ideas on capabilities". Feminist Economics 9 (2–3): 301–317. doi:10.1080/1354570022000077980. ↑ United Nations Development Programme (1999). Human Development Report 1999. New York: Oxford University Press. ↑ Mean years of schooling (of adults) (years) is a calculation of the average number of years of education received by people ages 25 and older in their lifetime based on education attainment levels of the population converted into years of schooling based on theoretical durations of each level of education attended. Source: Barro, R. J.; Lee, J.-W. (2010). "A New Data Set of Educational Attainment in the World, 1950-2010". NBER Working Paper No. 15902. ↑ Expected years of schooling is a calculation of the number of years a child of school entrance age is expected to spend at school, or university, including years spent on repetition. It is the sum of the age-specific enrolment ratios for primary, secondary, post-secondary non-tertiary and tertiary education and is calculated assuming the prevailing patterns of age-specific enrolment rates were to stay the same throughout the child's life. (Source: UNESCO Institute for Statistics (2010). Correspondence on education indicators. March. Montreal.) ↑ Definition, Calculator, etc. 
at UNDP site ↑ 2011 Human Development Index ↑ 2011 Human Development Complete Report ↑ International Human Rights Development Indicators, UNDP ↑ "Samoa left out of UNDP index", Samoa Observer, January 22, 2010 ↑ Cuba country profile, UNDP ↑ Report of Directorate General of Budget, Accounting and Statistics, Executive Yuan, R.O.C.(Taiwan) ↑ http://hdr.undp.org/en/media/HDR_2009_EN_Complete.pdf Human Development Report 2009[ (p. 171, 204) ↑ News – Human Development Reports (UNDP) ↑ HDR 2007/2008 – Human Development Reports (UNDP) ↑ In: Daponte Beth Osborne, and Hu difei: "Technical Note on Re-Calculating the HDI, Using Projections of Components of the HDI", April 2010, United Nations Development Programme, Human Development Report Office. ↑ 19.0 19.1 Wolff, Hendrik; Chong, Howard; Auffhammer, Maximilian (2011). "Classification, Detection and Consequences of Data Error: Evidence from the Human Development Index". Economic Journal 121 (553): 843–870. doi:10.1111/j.1468-0297.2010.02408.x. ↑ Sagara, Ambuj D.; Najam, Adil (1998). "The human development index: a critical review". Ecological Economics 25 (3): 249–264. doi:10.1016/S0921-8009(97)00168-7. ↑ McGillivray, Mark (1991). "The human development index: yet another redundant composite development indicator?". World Development 19 (10): 1461–1468. doi:10.1016/0305-750X(91)90088-Y. ↑ Srinivasan, T. N. (1994). "Human Development: A New Paradigm or Reinvention of the Wheel?". American Economic Review 84 (2): 238–243. JSTOR 2117836. ↑ McGillivray, Mark; White, Howard (2006). "Measuring development? The UNDP's human development index". Journal of International Development 5 (2): 183–192. doi:10.1002/jid.3380050210. ↑ 24.0 24.1 Caplan, Bryan (May 22, 2009). "Against the Human Development Index". Library of Economics and Liberty. ↑ "UNDP Human Development Report Office's comments". The Economist. January 2011. [dead link] ↑ "The Economist (pages 60-61 in the issue of Jan 8, 2011)". January 6, 2011. ↑ Rao, V. V. B. (1991). 
"Human development report 1990: review and assessment". World Development 19 (10): 1451–1460. doi:10.1016/0305-750X(91)90087-X. ↑ McGillivray, M. (1991). "The Human Development Index: Yet Another Redundant Composite Development Indicator?". World Development 18 (10): 1461–1468. doi:10.1016/0305-750X(91)90088-Y. ↑ Hopkins, M. (1991). "Human development revisited: A new UNDP report". World Development 19 (10): 1461–1468. doi:10.1016/0305-750X(91)90089-Z. ↑ Tapia Granados, J. A. (1995). "Algunas ideas críticas sobre el índice de desarrollo humano". Boletín de la Oficina Sanitaria Panamericana 119 (1): 74–87. ↑ Davies, A.; Quinlivan, G. (2006). "A Panel Data Analysis of the Impact of Trade on Human Development". Journal of Socio-Economics 35 (5): 868–876. doi:10.1016/j.socec.2005.11.048. ↑ HDI-2 ↑ Noorbakhsh, Farhad (1998). "The human development index: some technical issues and alternative indices". Journal of International Development 10 (5): 589–605. doi:10.1002/(SICI)1099-1328(199807/08)10:5<589::AID-JID484>3.0.CO;2-S. ↑ Hastings, David A. (2009). "Filling Gaps in the Human Development Index". United Nations Economic and Social Commission for Asia and the Pacific, Working Paper WP/09/02. ↑ Hastings, David A. (2011). "A "Classic" Human Development Index with 232 Countries". HumanSecurityIndex.org. Information Note linked to data Human Development Report 2011 Human Development Index Update Human Development Interactive Map Human Development Tools and Rankings Technical note explaining the definition of the HDI PDF (5.54 MB) An independent HDI covering 232 countries, formulated along lines of the traditional (pre-2010) approach. List of countries by HDI at NationMaster.com America Is # ... 15? 
by Dalton Conley, The Nation, March 4, 2009
Associated bundle In mathematics, the theory of fiber bundles with a structure group $G$ (a topological group) allows an operation of creating an associated bundle, in which the typical fiber of a bundle changes from $F_{1}$ to $F_{2}$, which are both topological spaces with a group action of $G$. For a fiber bundle F with structure group G, the transition functions of the fiber (i.e., the cocycle) in an overlap of two coordinate systems Uα and Uβ are given as a G-valued function gαβ on Uα∩Uβ. One may then construct a fiber bundle F′ as a new fiber bundle having the same transition functions, but possibly a different fiber. An example A simple case comes with the Möbius strip, for which $G$ is the cyclic group of order 2, $\mathbb {Z} _{2}$. We can take as $F$ any of: the real number line $\mathbb {R} $, the interval $[-1,\ 1]$, the real number line less the point 0, or the two-point set $\{-1,\ 1\}$. The action of $G$ on these (the non-identity element acting as $x\ \rightarrow \ -x$ in each case) is comparable, in an intuitive sense. We could say that more formally in terms of gluing two rectangles $[-1,\ 1]\times I$ and $[-1,\ 1]\times J$ together: what we really need is the data to identify $[-1,\ 1]$ to itself directly at one end, and with the twist over at the other end. This data can be written down as a patching function, with values in G. The associated bundle construction is just the observation that this data does just as well for $\{-1,\ 1\}$ as for $[-1,\ 1]$. Construction In general it is enough to explain the transition from a bundle with fiber $F$, on which $G$ acts, to the associated principal bundle (namely the bundle where the fiber is $G$, considered to act by translation on itself). For then we can go from $F_{1}$ to $F_{2}$, via the principal bundle. Details in terms of data for an open covering are given as a case of descent. This section is organized as follows. 
We first introduce the general procedure for producing an associated bundle, with specified fibre, from a given fibre bundle. This then specializes to the case when the specified fibre is a principal homogeneous space for the left action of the group on itself, yielding the associated principal bundle. If, in addition, a right action is given on the fibre of the principal bundle, we describe how to construct any associated bundle by means of a fibre product construction.[1] Associated bundles in general Let $ \pi :E\to X$ be a fiber bundle over a topological space X with structure group G and typical fibre F. By definition, there is a left action of G (as a transformation group) on the fibre F. Suppose furthermore that this action is effective.[2] There is a local trivialization of the bundle E consisting of an open cover Ui of X, and a collection of fibre maps $\varphi _{i}:\pi ^{-1}(U_{i})\to U_{i}\times F$ such that the transition maps are given by elements of G. More precisely, there are continuous functions gij : (Ui ∩ Uj) → G such that $\psi _{ij}(u,f):=\varphi _{i}\circ \varphi _{j}^{-1}(u,f)={\big (}u,g_{ij}(u)f{\big )},\quad {\text{for each }}(u,f)\in (U_{i}\cap U_{j})\times F\,.$ Now let F′ be a specified topological space, equipped with a continuous left action of G. Then the bundle associated with E with fibre F′ is a bundle E′ with a local trivialization subordinate to the cover Ui whose transition functions are given by $\psi '_{ij}(u,f')={\big (}u,g_{ij}(u)f'{\big )},\quad {\text{for each }}(u,f')\in (U_{i}\cap U_{j})\times F'\,,$ where the G-valued functions gij(u) are the same as those obtained from the local trivialization of the original bundle E. This definition clearly respects the cocycle condition on the transition functions, since in each case they are given by the same system of G-valued functions. (Using another local trivialization, and passing to a common refinement if necessary, the gij transform via the same coboundary.) 
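For reference, the compatibility invoked above is the cocycle condition: on each triple overlap the transition functions must compose consistently. In the notation above,

```latex
g_{ii}(u) = e, \qquad
g_{ik}(u) = g_{ij}(u)\, g_{jk}(u)
\quad \text{for all } u \in U_i \cap U_j \cap U_k .
```

Because the associated bundle $E'$ reuses the same functions $g_{ij}$, it inherits this condition from $E$ verbatim.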
Hence, by the fiber bundle construction theorem, this produces a fibre bundle E′ with fibre F′ as claimed. Principal bundle associated with a fibre bundle As before, suppose that E is a fibre bundle with structure group G. In the special case when G has a free and transitive left action on F′, so that F′ is a principal homogeneous space for the left action of G on itself, then the associated bundle E′ is called the principal G-bundle associated with the fibre bundle E. If, moreover, the new fibre F′ is identified with G (so that F′ inherits a right action of G as well as a left action), then the right action of G on F′ induces a right action of G on E′. With this choice of identification, E′ becomes a principal bundle in the usual sense. Note that, although there is no canonical way to specify a right action on a principal homogeneous space for G, any two such actions will yield principal bundles which have the same underlying fibre bundle with structure group G (since this comes from the left action of G), and isomorphic as G-spaces in the sense that there is a G-equivariant isomorphism of bundles relating the two. In this way, a principal G-bundle equipped with a right action is often thought of as part of the data specifying a fibre bundle with structure group G, since to a fibre bundle one may construct the principal bundle via the associated bundle construction. One may then, as in the next section, go the other way around and derive any fibre bundle by using a fibre product. Fiber bundle associated with a principal bundle Let π : P → X be a principal G-bundle and let ρ : G → Homeo(F) be a continuous left action of G on a space F (in the smooth category, we should have a smooth action on a smooth manifold). Without loss of generality, we can take this action to be effective. Define a right action of G on P × F via[3][4] $(p,f)\cdot g=(p\cdot g,\rho (g^{-1})f)\,.$ We then identify by this action to obtain the space E = P ×ρ F = (P × F) /G. 
Denote the equivalence class of (p,f) by [p,f]. Note that $[p\cdot g,f]=[p,\rho (g)f]{\mbox{ for all }}g\in G.$ Define a projection map πρ : E → X by πρ([p,f]) = π(p). Note that this is well-defined. Then πρ : E → X is a fiber bundle with fiber F and structure group G. The transition functions are given by ρ(tij) where tij are the transition functions of the principal bundle P. This construction can also be seen categorically. More precisely, there are two continuous maps $P\times G\times F\to P\times F$, given by acting with G on the right on P and on the left on F. The associated vector bundle $P\times _{\rho }F$ is then the coequalizer of these maps. Reduction of the structure group Further information: reduction of the structure group The companion concept to associated bundles is the reduction of the structure group of a $G$-bundle $B$. We ask whether there is an $H$-bundle $C$, such that the associated $G$-bundle is $B$, up to isomorphism. More concretely, this asks whether the transition data for $B$ can consistently be written with values in $H$. In other words, we ask to identify the image of the associated bundle mapping (which is actually a functor). Examples of reduction Examples for vector bundles include: the introduction of a metric resulting in reduction of the structure group from a general linear group GL(n) to an orthogonal group O(n); and the existence of complex structure on a real bundle resulting in reduction of the structure group from real general linear group GL(2n,R) to complex general linear group GL(n,C). Another important case is finding a decomposition of a vector bundle V of rank n as a Whitney sum (direct sum) of sub-bundles of rank k and n-k, resulting in reduction of the structure group from GL(n,R) to GL(k,R) × GL(n-k,R). 
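A classical illustration of the fibre product construction (not discussed in the text above): the tangent bundle of a smooth $n$-manifold $M$ is the bundle associated with the frame bundle $F(M)$ via the defining representation of $GL(n,\mathbb{R})$ on $\mathbb{R}^n$,

```latex
TM \;\cong\; F(M) \times_{\rho} \mathbb{R}^{n}, \qquad
\rho(g)v = gv, \qquad [p, v] \;\longmapsto\; p(v),
```

where a frame $p$ over $x \in M$ is regarded as a linear isomorphism $p : \mathbb{R}^n \to T_xM$; the map is well defined precisely because $[p \cdot g, v] = [p, \rho(g)v]$.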
One can also express the condition for a foliation to be defined as a reduction of the tangent bundle to a block matrix subgroup - but here the reduction is only a necessary condition, there being an integrability condition so that the Frobenius theorem applies. See also • Spinor bundle References 1. All of these constructions are due to Ehresmann (1941-3). Attributed by Steenrod (1951) page 36 2. Effectiveness is a common requirement for fibre bundles; see Steenrod (1951). In particular, this condition is necessary to ensure the existence and uniqueness of the principal bundle associated with E. 3. Husemoller, Dale (1994), p. 45. 4. Sharpe, R. W. (1997), p. 37. Books • Steenrod, Norman (1951). The Topology of Fibre Bundles. Princeton: Princeton University Press. ISBN 0-691-00548-6. • Husemoller, Dale (1994). Fibre Bundles (Third ed.). New York: Springer. ISBN 978-0-387-94087-8. • Sharpe, R. W. (1997). Differential Geometry: Cartan's Generalization of Klein's Erlangen Program. New York: Springer. ISBN 0-387-94732-9. 
\begin{document} \title{Computing Minimum Time Paths With Bounded Acceleration} \begin{abstract} Solving for the minimum time bounded acceleration trajectory with prescribed position and velocity at endpoints is a highly nonlinear problem. The methods and bounds developed in this paper distinguish when there is a continuous acceleration solution and reduce the problem of computing the optimal trajectory to a search over two parameters, planar rotation $-\pi/2<\theta<\pi/2$ and spatiotemporal dilation $0<\alpha<\Lambda(\theta)$. \end{abstract} \begin{keywords} minimal time paths, bounded acceleration, bilinear tangent law \end{keywords} \begin{AMS} 49M05, 49M37, 34K35, 34K28, 65K05 \end{AMS} \section{Introduction} We seek better understanding of and numerical methods for computing time-minimizing planar trajectories $(x(t),y(t))$ that have bounded acceleration $|(x'',y'')|\le 1$. A variety of boundary conditions can be considered. In this work we assume position and velocity are fully specified at initial and terminal points of the trajectory. The problem of actually computing time minimizing trajectories has many difficulties. Continuous, bang-bang, and constant acceleration minimizers are all possible and can occur in close proximity to one another. Minimum time may depend discontinuously on boundary conditions, although the dependence is lower semi-continuous with constant acceleration solutions at points of discontinuity \cite{morgan}. Optimizing using Pontryagin's principle yields a well-known stationarity condition (often called the bilinear tangent law). The stationarity condition is not sufficient, and multiple non-optimal solutions may exist. It is also possible to have local, not global, minimizers. In \cite{morgan} there is an example of boundary conditions with a continuum of stationary solutions containing a non-optimal local minimizer. 
Even if it is known that the stationarity condition has a unique continuous acceleration solution, numerically approximating the solution is multidimensional and highly nonlinear \cite{feng}. This work addresses these difficulties with the following contributions. First, we give necessary and sufficient conditions to determine when we have constant, bang-bang, or continuously varying acceleration solutions. Second, in the case of continuously varying acceleration, we reduce the numerical problem of computing the trajectory to a search of two continuous parameters over a semi-bounded region. Minimizing time under constraints on acceleration is an interesting problem on its own \cite{bhat,aneesh,feng}, but also shows up in kinodynamic motion planning when acceleration is the dominant constraint, as in cases of limited traction \cite{marko}. The techniques in this work were applied to calculate the fastest path around the bases on a baseball diamond in \cite{morgan}, which garnered popular attention via National Public Radio, Huffington Post, Live Science, Science News, and Math Goes Pop. \section{Time Minimizing Paths} A time minimizing planar trajectory $(x(t),y(t))$ with bounded acceleration $|(x'',y'')|\le 1$ must satisfy \cite{morgan}: \begin{equation} (x'',y'') = {At+B \over |At+B|} \ \ \ \ A,B\in \mathbb{R}^2 \label{eqn:morgan} \end{equation} on any open segment of the trajectory that is not restricted by boundary conditions. This form subsumes the classic bilinear tangent law \cite{bryson,lewis}, and also contains bang-bang and constant acceleration solutions by setting $B=(0,0)$. Note that by rescaling space we can assume any positive bound on the magnitude of acceleration. Assuming $B\not= (0,0)$ in (\ref{eqn:morgan}), then a rotation, spatiotemporal dilation, reflection, and time shift can transform acceleration to the form $(1,t)/\sqrt{1+t^2}$. 
Specifically, let \begin{equation} \begin{array}{lcl} f''(t) = 1/\sqrt{1+t^2} &\;& g''(t) = t/\sqrt{1+t^2}\cr f'(t)=\mathop{\mathrm{arcsinh}}(t) &\;& g'(t)= \sqrt{1+t^2} \cr f(t)=t \mathop{\mathrm{arcsinh}}(t)-\sqrt{1+t^2} &\;& g(t)= {1\over 2}\( t \sqrt{1+t^2}+\mathop{\mathrm{arcsinh}}(t)\) \cr \end{array} \label{eqn:fg} \end{equation} Then it follows: \begin{proposition} If $(x(t),y(t))$ is a minimal time path with unit magnitude continuous acceleration for $-\epsilon<t<\epsilon$, then there exist unique values for $\alpha>0$, $\theta\in(-\pi/2,\pi/2)$, $\sigma,\eta=\pm 1$, $u_0$, $v_0$, and $t_0$ such that $$ \(\matrix{ x(t) \cr y(t)}\) = {1\over\alpha^2}\mathbf{R}_\theta \mathbf{S} \(\matrix{ f(\alpha (t-t_0))+u_0 \alpha (t-t_0) \cr g(\alpha (t-t_0))+v_0 \alpha (t-t_0) }\)$$ for $-\epsilon<t<\epsilon$, with rotation and reflection matrices $$\mathbf{R}_\theta = \left[\matrix{ \cos(\theta) & -\sin(\theta) \cr \sin(\theta) & \cos(\theta)}\right] \qquad \mathbf{S} = \left[\matrix{ \sigma & 0 \cr 0 & \eta }\right] $$ \label{prop:form} \end{proposition} This formulation of the solution preserves time direction and uniquely covers all possibilities by a $180^\circ$ sweep of space together with vertical and horizontal flips. A variety of boundary conditions can be considered.
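The functions in (\ref{eqn:fg}) transcribe directly into code; the finite-difference check below (names ours) confirms the antiderivative relations and that $(f'')^2+(g'')^2=1$ for every $t$, as required of an extremal acceleration.

```python
import math

# The basis functions of the display above: f'' = 1/sqrt(1+t^2),
# g'' = t/sqrt(1+t^2), together with their antiderivatives.
def f2(t): return 1.0 / math.sqrt(1.0 + t * t)
def g2(t): return t / math.sqrt(1.0 + t * t)
def f1(t): return math.asinh(t)
def g1(t): return math.sqrt(1.0 + t * t)
def f(t):  return t * math.asinh(t) - math.sqrt(1.0 + t * t)
def g(t):  return 0.5 * (t * math.sqrt(1.0 + t * t) + math.asinh(t))
```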
This work focuses on taking position and velocity specified at initial and terminal locations: \begin{equation} \begin{array}{rcl} \(\matrix{ u_1 \\ v_1 }\)&=&\(\matrix{ x'(0) \\ y'(0)}\) \\[16pt] \(\matrix{ u_2 \\ v_2 }\)&=&\(\matrix{ x'(T) \\ y'(T)}\) \\[16pt] \(\matrix{ \delta_{ x} \\ \delta_{ y} }\)&=& \(\matrix{ x(T)- x(0) \\ y(T)- y(0)}\), \end{array} \label{eqn:bndry} \end{equation} where $T$ is free and minimized subject to $|(x'',y'')|\le 1$. There is a unique time-minimizing trajectory: existence is established by bounded Lipschitz convergence, and uniqueness follows because the average of two distinct solutions would again be a solution but could not have acceleration of maximal magnitude throughout (see \cite{morgan} for details). Bang-bang or constant solutions occur when $B=(0,0)$ in (\ref{eqn:morgan}) and do not fit the formulation in proposition \ref{prop:form}. A complete characterization of the boundary values for which bang-bang and constant solutions exist is given in the following proposition, due to Frank Morgan. Fitting (\ref{eqn:morgan}) to any bang-bang or constant acceleration solution requires $B=(0,0)$ and $A$ parallel to the difference in endpoint velocities. The problem is computationally simplified by rotating the plane so the difference in endpoint velocities is horizontal. \begin{proposition}[F. Morgan] Assuming $v_1=v_2=v$ in (\ref{eqn:bndry}), the minimum time problem has bang-bang or constant acceleration solutions in precisely the following three cases: \begin{enumerate} \item $u_2=u_1$ and $(\delta_x, \delta_y)$ is nonzero, \item $\delta_y = v =0$ and $\delta_x \not = |u_2^2-u_1^2|/2$, \item $v\not = 0$, $u_1\not=u_2$, and $\delta_x =\delta_y (u_1+u_2)/2v \,\pm\, ((\delta_y/v)^2 - (u_2-u_1)^2)/4$. \end{enumerate} \label{prop:morgan} \end{proposition} \begin{proof} If the initial and final velocities are equal, $u_2=u_1$, and $\delta_x=\delta_y=0$, then total time is zero.
If $(\delta_x,\delta_y)$ is non-zero, it is straightforward to construct a bang-bang solution with acceleration reversing direction at the halfway point $(\delta_x/2,\delta_y/2)$. If $v=0$ it is again a straightforward exercise to construct a bang-bang or constant acceleration solution. Henceforth we assume initial and final velocities are different and $v\not=0$. Assume that the solution is bang-bang, with $(x'',y'')=(1,0)$ for time $T_1\ge0$ and then $(x'',y'')=(-1,0)$ for time $T_2\ge 0$. Then compute $$ \begin{array}{rcl} \delta_x &=& T_1(u_1+T_1/2) + T_2(u_2+T_2/2), \\ \delta_y/v &=& T_1 + T_2, \\ u_2-u_1 &=& T_1 - T_2. \\ \end{array} $$ Solving this system of equations and allowing for the reversed order of acceleration yields (3). Note that $((\delta_y/v)^2 - (u_2-u_1)^2)=0$ iff $T_1$ or $T_2$ is zero, yielding a constant acceleration solution. Conversely, if a solution has $\delta_x =\delta_y (u_1+u_2)/2v \pm ((\delta_y/v)^2 - (u_2-u_1)^2)/4$, then there is a bang-bang or constant solution with acceleration of the form $(\pm 1,0)$. This is the unique minimizer for the one-dimensional horizontal problem: $x'(0)=u_1$, $x'(T)=u_2$, $x(T)-x(0)=\delta_x$. Allowing any vertical component to the acceleration would reduce the magnitude of the horizontal acceleration, and so would take more time. \end{proof} The bang-bang and constant acceleration solutions are thus completely characterized and straightforward to calculate. However, computing solutions in the continuous acceleration case is a highly nonlinear problem. Using the above formulation, the problem can be reduced to a search over two continuous parameters (rotation and dilation) and one discrete parameter (vertical flip). This is a significant improvement over other proposed methods \cite{feng,aneesh,marko}. The method is outlined here, and detailed in the remainder of the paper.
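The linear system in the proof above makes the bang-bang case directly computable. A sketch (function names ours), valid when $v\neq 0$ and the phase order is acceleration $(1,0)$ followed by $(-1,0)$:

```python
def bang_bang_times(u1, u2, v, delta_y):
    """Phase durations T1, T2 from the linear system: T1 + T2 = delta_y/v
    and T1 - T2 = u2 - u1.  Assumes v != 0; raises if either duration
    would be negative (then try the reversed phase order)."""
    S = delta_y / v          # T1 + T2
    D = u2 - u1              # T1 - T2
    T1, T2 = (S + D) / 2.0, (S - D) / 2.0
    if T1 < 0.0 or T2 < 0.0:
        raise ValueError("no bang-bang solution with this phase order")
    return T1, T2

def bang_bang_delta_x(u1, u2, T1, T2):
    # Horizontal displacement accumulated over the two phases.
    return T1 * (u1 + T1 / 2.0) + T2 * (u2 + T2 / 2.0)
```

The computed displacement agrees with the closed form of case (3) in the proposition.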
Continuing with the assumption $v_1=v_2=v$, we can reflect about the $y$-axis to assume $u_2-u_1>0$ and rescale space and time (see section \ref{sec:norm}) so that $u_2-u_1=1$. This produces the normalized boundary values: \begin{equation} \begin{array}{ccc} \(\matrix{ u \\ v }\)&=&\(\matrix{ x'(t_1) \\ y'(t_1)}\) \\[16pt] \(\matrix{ u+1 \\ v }\)&=&\(\matrix{ x'(t_2) \\ y'(t_2)}\) \\[16pt] \(\matrix{ \delta_{ x} \\ \delta_{ y} }\)&=& \(\matrix{ x(t_2)- x(t_1) \\ y(t_2)- y(t_1)}\), \end{array} \label{eqn:normbndry} \end{equation} where $t_1<t_2$ are free and $t_2-t_1$ is minimized under unit magnitude acceleration. Proposition \ref{prop:form} implies the existence of a solution of the form \begin{equation} \(\matrix{ x(t) \cr y(t)}\) = {1\over\alpha^2}\mathbf{R}_\theta \mathbf{S} \(\matrix{ f(\alpha t)+u_0 \alpha t \cr g(\alpha t)+v_0 \alpha t }\) \label{eq:difvel} \end{equation} With $x'(t_2)-x'(t_1)=1$ and $y'(t_2)-y'(t_1)=0$ we get \begin{equation} \(\matrix{ \alpha\cos(\theta) \cr \alpha\sin(\theta)} \) = \left[\matrix{\sigma & 0 \cr 0 & \eta}\right] \(\matrix{ f'(\alpha t_2)-f'(\alpha t_1) \cr g'(\alpha t_2)-g'(\alpha t_1) }\) \label{eqn:difval} \end{equation} Since $-\pi/2<\theta<\pi/2$ and $f'$ is monotone increasing, we must have horizontal orientation $\sigma=+1$. Given any $\alpha>0$, $\theta\in(-\pi/2,\pi/2)$, and vertical orientation $\eta=\pm1$, equations (\ref{eqn:difval}) can be rapidly solved to any precision. This is shown in section \ref{sec:time} where we first solve for $\tau_1=\alpha t_1$ as a root of a monotone function with initial bounds $T_{\scriptscriptstyle{LO}}<\tau_1<T_{\scriptscriptstyle{HI}}$, after which we get $t_2$, $u_0$, and $v_0$ by direct computation. Solving (\ref{eqn:normbndry}) is thus reduced to a search over $\theta\in(-\pi/2,\pi/2)$, $\alpha>0$, and $\eta=\pm1$ to match the displacements $\delta_x$, $\delta_y$. We can also get an upper bound for $\alpha$.
\begin{definition} For $T>1$ and $-\pi/2<\theta<\pi/2$, let \begin{equation} \Lambda(T,\theta)= \sup\{\alpha>0\; \big|\; \alpha T > \exp(\alpha\cos\theta/2)-1\} \label{eqn:alphamaxUP} \end{equation} \end{definition} Values for $\Lambda$ can be rapidly calculated. The following is established in section \ref{sec:dilate}. \begin{proposition} If $t_1,t_2,u_0,v_0,\alpha,\theta$ solve the boundary conditions (\ref{eqn:normbndry}), then $\alpha<\Lambda(t_2-t_1,\theta)$. \label{prop:lamb} \end{proposition} An upper bound for $\alpha$ results from using an a priori upper bound $T_{\scriptscriptstyle{MAX}}$ on the total time $t_2-t_1$. Such a bound can be constructed from a zigzag trajectory with two zero velocity points (see section \ref{sec:dilate}). Normalization is carefully defined in the next section, and the above bounds and propositions are developed in section \ref{sec:num}. \section{Normalization}\label{sec:norm} \subsection{Transformations} Suppose $(\tilde x (\tilde t), \tilde y (\tilde t))$ is a minimal time curve with \begin{equation} \begin{array}{rcl} \(\matrix{ \tilde x'(\tilde t_1) \\ \tilde y'(\tilde t_1)}\) = \(\matrix{ \tilde u_1 \\ \tilde v_1 }\) \\[16pt] \(\matrix{ \tilde x'(\tilde t_2) \\ \tilde y'(\tilde t_2)}\) = \(\matrix{ \tilde u_2 \\ \tilde v_2 }\) \\[16pt] \(\matrix{ \tilde x(\tilde t_2)-\tilde x(\tilde t_1) \\ \tilde y(\tilde t_2)-\tilde y(\tilde t_1)}\) = \(\matrix{ \delta_{\tilde x} \\ \delta_{\tilde y} }\) \end{array} \end{equation} and acceleration $$\sqrt{\tilde x''(\tilde t)^2+\tilde y''(\tilde t)^2}=1$$ Then, given any rotation angle $\phi$, dilation $\beta>0$, and reflections $\tilde\sigma,\tilde\eta=\pm 1$, the transformation \begin{equation} \(\matrix{ x(t) \cr y(t) }\) = {1\over\beta^2} \mathbf{R}_\phi \tilde\mathbf{S} \(\matrix{ \tilde x(\beta t) \cr \tilde y(\beta t) }\) \end{equation} yields a minimal time path $(x(t),y(t))$ that satisfies boundary conditions \begin{equation} \begin{array}{rcl} t_1 &=& \tilde t_1/\beta\\ t_2 &=& \tilde
t_2/\beta\\[10pt] \(\matrix{ x'(t_1) \\ y'(t_1)}\) &=& {1\over\beta}\mathbf{R}_\phi \tilde\mathbf{S}\(\matrix{ \tilde u_1 \\ \tilde v_1 }\) \\[16pt] \(\matrix{ x'(t_2) \\ y'(t_2)}\) &=& {1\over\beta}\mathbf{R}_\phi \tilde\mathbf{S}\(\matrix{ \tilde u_2 \\ \tilde v_2 }\) \\[16pt] \(\matrix{ x(t_2)- x(t_1) \\ y(t_2)- y(t_1)}\) &=& {1\over\beta^2} \mathbf{R}_\phi\tilde\mathbf{S}\(\matrix{ \delta_{\tilde x} \\ \delta_{\tilde y} }\) \end{array} \end{equation} and acceleration $$\sqrt{ x''(t)^2+ y''(t)^2}=1$$ \subsection{The Normalized Problem} Given real-world boundary values $\tilde u_1$, $\tilde v_1$, $\tilde u_2$, $\tilde v_2$, $\delta_{\tilde x}$, $\delta_{\tilde y}$, let $$ \begin{array}{rcl} \beta &=& \((\tilde u_2-\tilde u_1)^2+(\tilde v_2-\tilde v_1)^2\)^{1/2}\\ \phi &=& \arctan(\sigma {\tilde v_2- \tilde v_1 \over \tilde u_2 - \tilde u_1})\in(-{\pi\over 2},{\pi\over 2})\\ \sigma &=& \mathop{\mathrm{sgn}}(\tilde u_2-\tilde u_1)\\ \eta &=& 1 \end{array} $$ Applying the linear transformation ${1\over\beta^2} \mathbf{R}_\phi \tilde\mathbf{S}$ to the $\tilde x, \tilde y$ system and scaling time $\tilde t=\beta t$ yields boundary values $$ \begin{array}{rclcl} {1\over\beta} \mathbf{R}_\phi \tilde\mathbf{S} \(\matrix{ \tilde u_1 \\ \tilde v_1 }\) &=& \(\matrix{ u_1 \\ v_1 }\) \\[16pt] {1\over\beta} \mathbf{R}_\phi \tilde\mathbf{S} \(\matrix{ \tilde u_2 \\ \tilde v_2 }\) &=& \(\matrix{ u_2 \\ v_2 }\) &=& \(\matrix{ u_1+1 \\ v_1 }\) \\[16pt] {1\over\beta^2} \mathbf{R}_\phi \tilde\mathbf{S} \(\matrix{ \delta_{\tilde x} \\ \delta_{\tilde y} }\) &=& \(\matrix{ \delta_{x} \\ \delta_{y} }\) \\[10pt] \end{array} $$ with unit acceleration. Note that a problem with $(u_1,v_1)$ equal to $(u_2,v_2)$ cannot be normalized. In this case the solution is bang-bang (proposition \ref{prop:morgan}). 
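The normalization of this section amounts to a few arithmetic steps. The sketch below uses our own names and fixes the sign of $\phi$ so that the rotated, reflected velocity difference lands exactly on $(\beta,0)$; up to sign conventions this matches the formulas above.

```python
import math

def rot(phi, x, y):
    """Apply the rotation R_phi to the vector (x, y)."""
    c, s = math.cos(phi), math.sin(phi)
    return c * x - s * y, s * x + c * y

def normalize(u1t, v1t, u2t, v2t):
    """Normalizing parameters (beta, phi, sigma) and the normalized
    initial velocity (u1, v1).  Here phi is chosen so that
    R_phi * diag(sigma, 1) sends the velocity difference to (beta, 0);
    requires distinct endpoint velocities."""
    du, dv = u2t - u1t, v2t - v1t
    beta = math.hypot(du, dv)
    if beta == 0.0:
        raise ValueError("equal endpoint velocities: not normalizable")
    sigma = 1.0 if du >= 0.0 else -1.0   # horizontal reflection
    phi = -math.atan2(dv, sigma * du)    # in (-pi/2, pi/2)
    u1, v1 = rot(phi, sigma * u1t, v1t)
    return beta, phi, sigma, (u1 / beta, v1 / beta)
```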
The normalized problem is thus to estimate the minimal time path $(x(t),y(t))$ with unit acceleration and boundary conditions determined by $u_1,v_1,\delta_x,\delta_y$: \begin{equation} \begin{array}{rcl} \(\matrix{ x'(t_1) \\ y'(t_1)}\) &=& \(\matrix{ u_1 \\ v_1 }\) \\[16pt] \(\matrix{ x'(t_2) \\ y'(t_2)}\) &=& \(\matrix{ u_1+1 \\ v_1 }\) \\[16pt] \(\matrix{ x(t_2)- x(t_1) \\ y(t_2)- y(t_1)}\) &=& \(\matrix{ \delta_{ x} \\ \delta_{ y} }\) \end{array} \end{equation} Given a solution $(x(t),y(t))$ for $t_1<t<t_2$ to the normalized problem, we transform back to original coordinates $(\tilde x (\tilde t),\tilde y(\tilde t))$ as \begin{equation} \(\matrix{ \tilde x(\tilde t) \cr \tilde y(\tilde t)}\) = \beta^2\tilde\mathbf{S}\mathbf{R}_\phi^{-1}\(\matrix{ x(\tilde t/\beta) \cr y(\tilde t/\beta) }\) \end{equation} for $$\beta t_1=\tilde t_1<\tilde t<\tilde t_2=\beta t_2$$ \section{Numerics}\label{sec:num} \subsection{Solving the Normalized Problem} To solve the normalized problem, numerical methods are developed to calculate values for six continuous parameters and one discrete reflection $$ \begin{array}{ll} \hbox{Normalized Time Interval:} & [\tau_1,\tau_2]\\ \hbox{Velocity Translation:} & (u_0,v_0)\\ \hbox{Time/Space Dilation:} & \alpha>0 \\ \hbox{Vertical Reflection:} & \eta = \pm 1\\ \hbox{Rotation Angle:} & -{\pi\over 2}<\theta<{\pi\over 2}.
\end{array} $$ to satisfy six constraint equations \begin{eqnarray} {\alpha}\mathbf{R}_\theta^{-1}\( \matrix{ u_1 \cr v_1 }\) &=& \(\matrix{ f'(\tau_1)+u_0 \cr \eta g'(\tau_1)+ v_0}\)\label{eqn:s1}\\[10pt] {\alpha}\mathbf{R}_\theta^{-1}\( \matrix{ u_2 \cr v_2 }\) &=& \(\matrix{ f'(\tau_2)+u_0 \cr \eta g'(\tau_2)+v_0}\)\label{eqn:s2}\\[10pt] {\alpha^2}\mathbf{R}_\theta^{-1}\( \matrix{ \delta_x \cr \delta_y }\) &=& \(\matrix{ f(\tau_2)-f(\tau_1) +u_0 (\tau_2-\tau_1) \cr \eta g(\tau_2)-\eta g(\tau_1) +v_0 (\tau_2 - \tau_1) }\)\label{eqn:s3} \end{eqnarray} for given boundary conditions $u_1,v_1,u_2,v_2,\delta_x,\delta_y$, with \begin{equation} \begin{array}{rcl} u_2-u_1&=&1\\ v_2-v_1&=&0 \end{array} \label{eqn:uvnorm} \end{equation} Subtracting equation (\ref{eqn:s1}) from (\ref{eqn:s2}) and using (\ref{eqn:uvnorm}) yields \begin{equation} \(\matrix{f'(\tau_2)-f'(\tau_1)\cr g'(\tau_2)-g'(\tau_1)}\) = \alpha \mathbf{R}_\theta^{-1} \(\matrix{ 1\cr 0}\) = \(\matrix{ \alpha\,\cos\theta \cr -\alpha\,\sin\theta}\)\label{eqn:deluv} \end{equation} which, taking $\eta=+1$ (by oddness of $\sin$, the case $\eta=-1$ amounts to replacing $\theta$ by $-\theta$), defines a map $(\alpha,\theta)\mapsto(\tau_1,\tau_2)$ that is independent of all boundary conditions. Given $\alpha,\theta$, this map can be quickly solved to arbitrary precision as shown in section \ref{sec:time}. For multiple calculations of the same precision, an interpolated hash table may be used. Using boundary velocity conditions, integration constants $u_0, v_0$ follow from (\ref{eqn:s1}) (or (\ref{eqn:s2})), and we thus get a map $(\alpha,\theta)\mapsto (\mu_x,\mu_y)$ as $$ \( \matrix{ \mu_x \cr \mu_y }\) = \(\matrix{ f(\tau_2)-f(\tau_1) +u_0 (\tau_2-\tau_1) \cr g(\tau_2)-g(\tau_1) +v_0 (\tau_2 - \tau_1) }\)$$ Then the minimal time solution is specified by finding the correct $(\theta, \alpha)$ to match the computed displacement $(\mu_x, \mu_y)$ to the target displacement $(\delta_x,\delta_y)$.
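For a given $(\alpha,\theta)$ the map to $(\tau_1,\tau_2)$ can be inverted by bisection on the monotone function analyzed in section \ref{sec:time}; the sketch below (names ours) grows its bracket by doubling rather than seeding it with the $T_{\scriptscriptstyle{LO}},T_{\scriptscriptstyle{HI}}$ bounds derived there.

```python
import math

def solve_taus(alpha, theta, tol=1e-12):
    """Given alpha > 0 and |theta| < pi/2, find (tau1, tau2) with
        asinh(tau2) - asinh(tau1)       = alpha*cos(theta)
        sqrt(1+tau2^2) - sqrt(1+tau1^2) = -alpha*sin(theta).
    The left side, as a function of tau1 alone, is monotone
    increasing, so bisection applies."""
    mu_u = alpha * math.cos(theta)       # > 0 on the stated domain
    mu_v = -alpha * math.sin(theta)
    G = lambda t: math.sqrt(1.0 + t * t)
    F = lambda t: G(math.sinh(math.asinh(t) + mu_u)) - G(t) - mu_v
    lo, hi = -1.0, 1.0
    while F(lo) > 0.0:                   # expand bracket downward
        lo *= 2.0
    while F(hi) < 0.0:                   # expand bracket upward
        hi *= 2.0
    while hi - lo > tol * max(1.0, abs(lo), abs(hi)):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if F(mid) < 0.0 else (lo, mid)
    tau1 = 0.5 * (lo + hi)
    tau2 = math.sinh(math.asinh(tau1) + mu_u)
    return tau1, tau2
```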
\subsection{Bounds on Dilation}\label{sec:dilate} Fix $\theta\in(-\pi/2,\pi/2)$, and let $$\( \matrix{ \rho_u \cr \rho_v }\)=\mathbf{R}_\theta^{-1}\( \matrix{ 1 \cr 0 }\) = \( \matrix{ \cos \theta \cr -\sin \theta }\)$$ so that $\rho_u>0$. Then (\ref{eqn:deluv}) yields \begin{eqnarray} \alpha\rho_u &=& \mathop{\mathrm{arcsinh}}(\tau_2)-\mathop{\mathrm{arcsinh}}(\tau_1)\label{eqn:asnh}\\[10pt] \alpha\rho_v &=& \sqrt{1+\tau_2^{\,2}} - \sqrt{1+\tau_1^{\,2}}\label{eqn:vee} \end{eqnarray} Note that (\ref{eqn:asnh}) and $\alpha\rho_u>0$ imply $\tau_2-\tau_1>0$. Recall that \begin{equation} \mathop{\mathrm{arcsinh}}(z)=\ln\(z+\sqrt{1+z^{\,2}}\)\label{eqn:lnsnh} \end{equation} Three readily verifiable bounds will be useful: \begin{equation} |z|<\sqrt{1+z^2}<|z|+1 \label{eqn:veebnd} \end{equation} \begin{equation} z+|z| < z+\sqrt{1+z^2} \label{eqn:hbnd} \end{equation} and if $z>0$ then \begin{equation} z+\sqrt{1+z^2}<1+2z \label{eqn:zposhbnd} \end{equation} The next lemma follows from (\ref{eqn:vee}) and (\ref{eqn:veebnd}). \begin{lemma} $$\alpha\rho_v - 1 < |\tau_2| - |\tau_1| < \alpha\rho_v + 1$$ \label{clm:diffbound} \end{lemma} Proposition \ref{prop:lamb} is a corollary of the following lemma: \begin{lemma} $$\tau_2-\tau_1 > \exp(\alpha\rho_u/2)-1$$ \label{clm:sumbound} \end{lemma} \begin{proof} From (\ref{eqn:asnh}), $$ \alpha\rho_u = \mathop{\mathrm{arcsinh}}(\tau_2)-\mathop{\mathrm{arcsinh}}(\tau_1) $$ A calculus exercise shows that for fixed $\delta>0$ the maximum of $\mathop{\mathrm{arcsinh}}(\tau+\delta)-\mathop{\mathrm{arcsinh}}(\tau-\delta)$ is realized at $\tau=0$, hence \begin{eqnarray*} \alpha\rho_u &=& \mathop{\mathrm{arcsinh}}(\tau_2)-\mathop{\mathrm{arcsinh}}(\tau_1)\\ &\le& \mathop{\mathrm{arcsinh}}\((\tau_2-\tau_1)/ 2\)-\mathop{\mathrm{arcsinh}}\(-(\tau_2-\tau_1)/ 2\)\\ &=& 2\mathop{\mathrm{arcsinh}}\((\tau_2-\tau_1)/ 2\)\\ &<& 2\ln(1+\tau_2-\tau_1) \end{eqnarray*} with the last step following from (\ref{eqn:lnsnh}) and (\ref{eqn:zposhbnd}).
\end{proof} An a priori upper bound for the total time $t_2-t_1$ is obtained from a trajectory comprised of three straight line segments joined at points of zero velocity. It takes a minimum of $|(u_1,v_1)|$ time units to bring the initial velocity down to zero, and a minimum of $|(u_2,v_2)|$ to build up to the final velocity from zero velocity. Connecting the two points of zero velocity with a straight bang-bang trajectory produces: $$ \begin{array}{c} \mu_1 = |(u_1,v_1)| \qquad \mu_2 = |(u_2,v_2)| \\ T_{\scriptscriptstyle{MAX}} = \mu_1 + \mu_2 +\sqrt{2}\((2\delta_x-\mu_1 u_1-\mu_2 u_2)^2+(2\delta_y-\mu_1 v_1-\mu_2 v_2)^2\)^{1/4} \end{array} $$ Thus the desired $\theta,\alpha$ solution will satisfy $\alpha<\Lambda(T_{\scriptscriptstyle{MAX}},\theta)$. \subsection{Solving for Time} \label{sec:time} Fix $\theta\in(-\pi/2,\pi/2),\alpha>0$, and let $$\( \matrix{ \mu_u \cr \mu_v }\)=\alpha \mathbf{R}_\theta^{-1}\( \matrix{ 1 \cr 0 }\)= \(\matrix{ \alpha\,\cos\theta \cr -\alpha\,\sin\theta}\) $$ Then constraint (\ref{eqn:deluv}) becomes: \begin{equation} \( \matrix{ \mu_u \cr \mu_v }\)=\(\matrix{\mathop{\mathrm{arcsinh}}(\tau_2)-\mathop{\mathrm{arcsinh}}(\tau_1)\cr \sqrt{1+\tau_2^{\,2}}-\sqrt{1+\tau_1^{\,2}}}\)\label{eqn:mu} \end{equation} Note that $\mu_u>0$ implies $\tau_2>\tau_1$. For simplicity, take $G(\tau)=g'(\tau)=\sqrt{1+\tau^2}$. With \begin{equation} \tau_2=\sinh(\mathop{\mathrm{arcsinh}}(\tau_1)+\mu_u), \label{eqn:tau2} \end{equation} equation (\ref{eqn:mu}) reduces to \begin{equation} \mu_v= G(\sinh(\mathop{\mathrm{arcsinh}}(\tau_1)+\mu_u))-G(\tau_1) \label{eqn:tau1} \end{equation} \begin{lemma} Fix $\mu_u>0$, then \begin{equation} \mu_v=G(\sinh(\mathop{\mathrm{arcsinh}}(\tau)+\mu_u))-G(\tau)\label{eqn:mono} \end{equation} is monotone increasing in $\tau$, with $\mu_v\to\infty$ as $\tau\to\infty$ and $\mu_v\to-\infty$ as $\tau\to-\infty$.
\end{lemma} \begin{proof} Computing: \begin{eqnarray*} \lefteqn{{\partial^2\over \partial \delta^2} G(\sinh(\mathop{\mathrm{arcsinh}}(\tau)+\delta))}\\ &=&{4\cosh(2\delta+2\mathop{\mathrm{arcsinh}} \tau)+\cosh(4\delta+4\mathop{\mathrm{arcsinh}} \tau)+3\over (2\cosh(2\delta+2\mathop{\mathrm{arcsinh}} \tau)+2)^{3/2} }\\ &>&0 \end{eqnarray*} hence for $\mu_u>0$, $$ \left.{\partial\over\partial\delta} G(\sinh(\mathop{\mathrm{arcsinh}}(\tau)+\delta))\right|_{\delta=0}^{\mu_u}>0$$ making $$ \begin{array}{c} G'(\sinh(\mathop{\mathrm{arcsinh}}(\tau)+\mu_u))\sinh'(\mathop{\mathrm{arcsinh}}(\tau)+\mu_u)\qquad\qquad \\ \qquad\qquad -G'(\sinh(\mathop{\mathrm{arcsinh}}(\tau)))\sinh'(\mathop{\mathrm{arcsinh}}(\tau))>0 \end{array} $$ and $$ G'(\sinh(\mathop{\mathrm{arcsinh}}(\tau)+\mu_u)){\sinh'(\mathop{\mathrm{arcsinh}}(\tau)+\mu_u)\over \sinh'(\mathop{\mathrm{arcsinh}}(\tau))} -G'(\tau)>0$$ thus $${\partial\mu_v\over\partial \tau} = G'(\sinh(\mathop{\mathrm{arcsinh}}(\tau)+\mu_u))\sinh'(\mathop{\mathrm{arcsinh}}(\tau)+\mu_u){\mathop{\mathrm{arcsinh}}}'(\tau)-G'(\tau)>0$$ \end{proof} As a corollary, we have: \begin{lemma} For any given $(\mu_u,\mu_v)$ with $\mu_u > 0$ there is a unique solution $(\tau_1,\tau_2)$ to (\ref{eqn:mu}). \end{lemma} The unique solution to (\ref{eqn:mu}) is estimated using bisection with the following bounds to initiate the algorithm. \begin{lemma} Given $\mu_u>0$, $\mu_v$, define \begin{equation} \begin{array}{lcl} T_{\scriptscriptstyle{LO}} &=& -e^{\mu_u}\max\{{1\over 2},{1-\mu_v\over\mu_u}\}\\ T_{\scriptscriptstyle{HI}} &=& \max\{0,{1+\mu_v\over\mu_u}\}, \end{array} \label{eqn:tlothi} \end{equation} then the unique solution $\tau$ to equation (\ref{eqn:mono}) satisfies $T_{\scriptscriptstyle{LO}}<\tau<T_{\scriptscriptstyle{HI}}$. \label{clm:tlothi} \end{lemma} The proof consists of analyzing the three cases $\tau_1<0<\tau_2$, $\tau_1<\tau_2<0$, and $0<\tau_1<\tau_2$, as contained in the following three lemmas.
Given $\mu_u>0$, $\mu_v$, let $\tau_1$ be the solution to equation (\ref{eqn:mono}) and $\tau_2=\sinh(\mathop{\mathrm{arcsinh}}(\tau_1)+\mu_u)$. \begin{lemma} If $\tau_1<0<\tau_2$ then $-{1\over 2} e^{\mu_u}<\tau_1$. \end{lemma} \begin{proof} $$ \begin{array}{rcl} \mu_u &=& \mathop{\mathrm{arcsinh}}(\tau_2)-\mathop{\mathrm{arcsinh}}(\tau_1)\\ &>& -\mathop{\mathrm{arcsinh}}(\tau_1) \end{array} $$ Hence using $\sinh(z) > -{1\over 2} e^{-z}$, $$-{1\over 2} e^{\mu_u}<\sinh(-\mu_u)<\tau_1 $$ \end{proof} \begin{lemma} If $\tau_1<\tau_2<0$ then $\mu_v<0$ and $\tau_1>-e^{\mu_u}{1-\mu_v\over \mu_u}$. \label{clm:belowzero} \end{lemma} \begin{proof} With $\tau_1 = \sinh(\mathop{\mathrm{arcsinh}}(\tau_1))$ and $\tau_2 = \sinh(\mathop{\mathrm{arcsinh}}(\tau_1)+\mu_u)$, $$\tau_2-\tau_1>\mu_u \left.{d\over dz} \sinh(z)\right|_{\mathop{\mathrm{arcsinh}}(\tau_2)} = \mu_u \cosh(\mathop{\mathrm{arcsinh}}(\tau_1)+\mu_u) $$ Lemma \ref{clm:diffbound} gives $\mu_v-1<|\tau_2|-|\tau_1|$, and with $\tau_1<\tau_2<0$, $$-\mu_v+1>-|\tau_2|+|\tau_1|=\tau_2-\tau_1$$ hence using ${1\over 2}e^{-z}<\cosh(z)$, $$-\mu_v+1 > \mu_u \cosh\(\mathop{\mathrm{arcsinh}}(\tau_1)+\mu_u\) > {\textstyle{1\over 2}}\mu_u\exp(-\mathop{\mathrm{arcsinh}}(\tau_1)-\mu_u)$$ and $$\mathop{\mathrm{arcsinh}}(\tau_1)>-\ln\(2e^{\mu_u}{1-\mu_v\over\mu_u}\).$$ Using $\sinh(u)>-{1\over2}e^{-u}$ completes the proof. \end{proof} \begin{lemma} If $0<\tau_1<\tau_2$ then $\tau_1<{\mu_v+1\over\mu_u}$. \end{lemma} \begin{proof} As in the proof of lemma \ref{clm:belowzero}, $$\tau_2-\tau_1>\mu_u \left.{d\over dz} \sinh(z)\right|_{\mathop{\mathrm{arcsinh}}(\tau_1)} = \mu_u \cosh(\mathop{\mathrm{arcsinh}}(\tau_1))>\mu_u \tau_1 $$ With $0<\tau_1<\tau_2$, lemma \ref{clm:diffbound} implies $\mu_v+1>\tau_2-\tau_1$, and the result follows. \end{proof} \section{Conclusion} Bounds and methods for computing minimum time bounded acceleration paths in the plane subject to endpoint velocity and position conditions are presented in this paper.
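For reference, the a priori time bound $T_{\scriptscriptstyle{MAX}}$ of section \ref{sec:dilate} and the resulting dilation bound $\Lambda(T_{\scriptscriptstyle{MAX}},\theta)$ are cheap to evaluate; a sketch with our own function names:

```python
import math

def t_max(u1, v1, u2, v2, dx, dy):
    """A priori upper bound on total time: brake to rest, traverse a
    straight bang-bang segment, then accelerate to the final velocity."""
    mu1 = math.hypot(u1, v1)              # time to brake to rest
    mu2 = math.hypot(u2, v2)              # time to reach final velocity
    q = (2.0*dx - mu1*u1 - mu2*u2)**2 + (2.0*dy - mu1*v1 - mu2*v2)**2
    return mu1 + mu2 + math.sqrt(2.0) * q**0.25

def capital_lambda(T, theta, tol=1e-12):
    """Lambda(T, theta): the positive root of exp(alpha*cos(theta)/2) - 1
    = alpha*T, found by bracketing and bisection.  Requires T > 1."""
    c = math.cos(theta)
    h = lambda a: math.exp(a * c / 2.0) - 1.0 - a * T
    hi = 1.0
    while h(hi) < 0.0:                    # double until the exponential wins
        hi *= 2.0
    lo = 0.0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if h(mid) < 0.0 else (lo, mid)
    return 0.5 * (lo + hi)
```

A rest-to-rest straight move of length $d$ gives $T_{\scriptscriptstyle{MAX}}=2\sqrt d$, the exact bang-bang time for that case.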
An example implementation in Python is available from the author. The methods will apply to other boundary restrictions, such as zero initial velocity, or free endpoint location, and the author would appreciate being informed of any adaptations. \end{document}
arXiv
For how many positive integers $n$ does $1+2+\cdots+n$ evenly divide $6n$? Because \[ 1 + 2 + \cdots + n = \frac{n(n+1)}{2}, \]$1+2+ \cdots + n$ divides the positive integer $6n$ if and only if \[ \frac{6n}{n(n+1)/2} = \frac{12}{n+1}\ \text{is an integer.} \]There are $\boxed{5}$ such positive values of $n$, namely, 1, 2, 3, 5, and 11.
Math Dataset
A generalized method for encoding and decoding run-length-limited binary sequences IEEE Transactions on Information Theory 29(5):751 - 754 DOI:10.1109/TIT.1983.1056728 Project: rll codes G.F.M. Beenker Kees Schouhamer Immink Turing Machines Inc Many modulation systems used in magnetic and optical recording are based on binary run-length-limited codes. We generalize the concept of dk -limited sequences of length n introduced by Tang and Bald by imposing constraints on the maximum number of consecutive zeros at the beginning and the end of the sequences. It is shown that the encoding and decoding procedures are similar to those of Tang and Bald. The additional constraints allow a more efficient merging of the sequences. We demonstrate two constructions of run-length-limited codes with merging rules of increasing complexity and efficiency and compare them to Tang and Bahl's method. Content uploaded by Kees Schouhamer Immink All content in this area was uploaded by Kees Schouhamer Immink on Apr 11, 2019 IEEE TRANSACTIONS ON INFORMATION THEORY,VOL. IT-29, NO. 5, SEPTEMBER 1983 751 TABLE I THE(~~,&)= (~,~)FI~ED(T= I)NONCATASTROPHICCOWOLUTIONAL ENCODERWITHMEMORY = ~ANDMAXIMUMFREEDISTANCE TOGETHERWITHSOMEBETTERTIME-VARYINGENCODERS' T G$ GT G2* G3 G4* d, Nm L 3 2 2 1 3 I 2 4 2 E A C 7 3/2 3 3 B 7 3 3A 1c 00 7 4/3 8/3 OE 3B 00 03 2D 30 4 E9 CO 7 3/2 13/4 3B 70 OD EC 5 3A7 000 7 e/5 13/5 OED 300 037 1co OOD 3B0 003 2DC 'd, The fixed encoder has maximum free distance and is optimum in the sense of smallest number of weight d, paths per time instant N, and in the sense of smallest average number of information bit errors along these paths I,, The encoders are specified by the (n,,T, k,,T) matrices G,* of (7); the rows of G,* are written in hexadecimal form; e.g., the upper row of G,f for = 2 is specified as "E" and hence is the row [ 1 1 1 01. where G,? for 0 < j 4 M* is the (Tk,) X (Tn 0) matrix GjT (0) GjT+l(l) ... G,T+T-I(T- 1) GjT- I (0) G,,(l) "' G,T+T-#- l> G;= . 
__ G,,-T+,(O) G,;-T+z(~) ... Gj,(T- 1) where by way of convention G,(u) = 0 for j > it4 and for j < 0 and where M* is the smallest integer equal to or greater than M/T, i.e., M* = [M/T]. Thus, every (n,, k,) periodic convolutional encoder with period T and memory M can be considered to be an (n*,, ki) fixed convolutional encoder with memory M* given by (8) and with k,* = Tk, (94 n; = Tn,. (9b) This equivalence permits upper bounds on code distance, as well as software, developed for FCE's to be applied to PCE(T)'s. NEW CODES We now report some positive results from a computer-aided search for noncatastrophic PCE( T)'s with (n,, k,) = (2,l) that are superior to the best noncatastrophic FCE with the same parameters n,, k,, and M. The search was concentrated on the case M = 4 as this is the smallest M such that Heller's upper bound [4] on d,, namely d, = 8, is not achieved by any FCE. The results of the search are given in the Table I. The T = entry is the fixed convolutional encoder found by Oldenwalder [5] that has maximum d,, namely dm = 7, and is optimum both in the sense of minimum ypo and m the sense of minimum 1,. For T > 1, the codes are time-varying but are specified by the corresponding fixed encoding matrices Cl*, 0 < j < M*, defined by (7). In Table I, the encoders with period T = 2 and 3 were found by an exhaustive search to be optimal in the sense of minimizing N,; the codes for T = 4 and 5 were the best found in a heuristic nonexhaustive search. The codes given in Table I for T = 2, 3, 4, and 5 are all superior to the best fixed code (T = 1) both in the sense of smaller N, and also in the sense of smaller 1,. It is somewhat disappointing that no time-varying codes with larger d, than the best fixed code were found. It seems likely that no such superiority is possible for M = 4 when (n,, k,) = (2, I); the next M for which such superiority is possible is M = 7 where Heller's bound gives d, < 11 but the best fixed code has d, = 10. 
It is encouraging, however, that periodic codes superior to the best fixed codes could be found at all, as no such instances could be found in- the prior literature. The author is very grateful to Prof. James L. Massey who not only suggested this work but also devoted much time and pa- tience in supervising it. [II [21 J. L. Massey, "Error bounds for tree codes, trellis codes and convolutional codes with encoding and decoding procedures," in Coding and Complexity, G. Longo, Ed., 'C.I.S.M. Courses and Lectures No. 216. New York: Springer-Verlag, 1974, pp. l-57. A. J. Viterbi, "Error bounds for convolutional codes and an asymptoti- cally optimum decoding algorithm," IEEE Trans. Inform. Theory, IT-13, pp. 260-269, Apr. 1967. -, "Convolutional codes and their performance in communication systems," IEEE Trans. Commun. Technol., vol. COM-19, pp. 751-771, Oct. 1971. J. A. Heller, "Sequential decoding: Short constraint length convolutional codes," Jet Propulsion Lab., California Inst. of Technology, Pasadena, Space Program Summary 37-54, vol. 3, pp. 171-174, Dec. 1968. J. P. Oldenwalder, "Optimal decoding of convolutional codes," Ph.D. dissertation, School of Engineering and Applied Science, University of California, Los Angeles, CA, 1970. A Generalized Method for Encoding and Decoding Run-Length-Limited Binary Sequences G. F. M. BEENKER AND K. A. SCHOUHAMER IMMINK Abstract-Many modulation systems used in magnetic and optical re- cording are based on binary run-length-limited codes. We generalize the concept of dk-limited sequences of length n introduced by Tang and Bahl by imposing constraints on the maximum number of consecutive zeros at the beginning and the end of the sequences. It is shown that the encoding and decoding procedures are similar to those of Tang and Bahl. The additional constraints allow a more efficient merging of the sequences. 
We demonstrate two constructions of run-length-limited codes with merging rules of increasing complexity and efficiency and compare them to Tang and Bahl's method. Many baseband modulation systems applied in magnetic and optical recording are based on binary run-length-limited codes [l], [2], [3]. A string of bits is said to be run-length-limited if the Manuscript received April 5, 1982; revised November 30, 1982. This work was partially presented at the IEEE International Symposium on Information Theory, Les Arcs, France, June 21-25, 1982. The authors are with the Philips Research Laboratories, 5600 MD Eindhoven, The Netherlands. 0018-9448/83,'0900-075 l$Ol .OO 01983 IEEE 152 IEEE TRANSACTIONS ON INFORMATION THEORY, VOL. IT - 29, NO. 5, SEPTEMBER 1983 number of consecutive zeros between adjacent ones is bounded between a certain minimum and a certain maximum value. The upper run-length constraint guarantees a transition within a specified time interval needed for clock regeneration at the re- ceiver. The lower run-length constraint is imposed to reduce the intersymbol interference. The latter constraint appears to have a bearing on the spectral properties of the sequences [4]. In this cvrrespondence we consider an encoding and decoding procedure for a special class of run-length-limited codes. The constraints we consider can be defined as followed. Let n, d, k, I, and r be given, where 1 6 k and r < k. A binary sequence of length n is called a dklr-sequence if it satisfies the following a) d-constraint: any two ones are separated by a run of at least d consecutive zeros; b) k-constraint: any run of consecutive zeros has a maximum length k; c) /-constraint: the number of consecutive leading zeros of the sequence is at most 1; d) r-constraint: the number of consecutive zeros at the end of the sequence is at most r. A sequence satisfying the d- and k-constraints is called a dk- sequence. 
Given certain desired run-length constraints, it is not trivial how to map uniquely the input data stream onto the encoded output data stream. A systematic design procedure on the basis of fixed length sequences has been given by Tang and Bahl [l]. Their method is based on mapping dk-sequences of length n onto consecutive integers and vice versa. In this correspondence we intend to generalize their results to dklr-sequences. We are also going to show that the application of dklr-sequences enables the sequences of length n to be merged efficiently without violation of the d- and k-constraints at the boundaries. We assume r > d. Theorems 1 and 2 of this correspondence remain valid for r < d and Theorem 3 can be generalized accord- ingly. The proofs, however, are less elegant, the reason being that for r < d there are no dklr-sequences (x,,- , , . . , x0) having a one at positions j where r < j < d. II. ENCODING AND DECODING In this section we consider a way of mapping the set of all dklr-sequences onto a set of consecutive integers and vice versa. Our results are similar to those obtained by Tang and Bahl for dk-sequences. Let A,, be the set of all dklr-sequences of length n. A, can be embedded in a larger set 6?,, consisting of the all-zero sequence of length n and of all binary sequences of length n satisfying the d- , k- , and r-constraints where the number of consecutive leading zeros is allowed to be greater than k. The set a,, can be ordered lexicographically as follows: if x = ( x!,~ , , . , x,,) and y = (y,,_ , , . ' . , yO) are elements of @n theny is called less than x, y -+ x, if there exists an i, 0 < i < n, such that y, < x, and x, ,= y, for i < j < n. The position of x in the lexicographical ordermg of a,,, is denoted by r(x); i.e., r(x) is the number of ally in @n withy + x. Consequently r(0) = 0. For the sake of convenience we introduce the residual of a vector. Lety = (y,,-,;..,~a) E a,,,, y * 0, and let t be such that yt = 1 and y, = 0 if t < j < n. 
Then the residual of y, res(y), is defined as follows: res(y) := y − Λ_t, where (Λ_t)_i = 1 if i = t and (Λ_t)_i = 0 elsewhere; res(0) := 0. It can easily be seen that y ∈ B_n implies res(y) ∈ B_n. The following observation is basic to the proof of Theorem 1. Let x, u ∈ B_n and assume that x_j = u_j = 0 (t < j < n) and x_t = u_t = 1 (0 ≤ t < n). Then it is not difficult to show that r(u) − r(x) = r(res(u)) − r(res(x)). Let N_r(i), i > 0, be the number of dklr-sequences with l = k of length i, and let N_r(0) = 1.

Theorem 1: Let x = (x_{n−1}, ..., x_0) ∈ B_n. Then

r(x) = Σ_{j=0}^{n−1} x_j N_r(j).

Proof: Let the nonzero coordinates of x be indexed by i_1 < i_2 < ... < i_q, i.e., x_i = 1 if and only if i ∈ {i_1, ..., i_q}. Let u be the smallest element of B_n with the property that u_{i_q} = 1. Then it is not difficult to see that the second 1 of u occurs at position i_q − k − 1, if i_q − k − 1 ≥ 0 (otherwise u has only one 1, if r ≥ i_q, or also u_0 = 1 if r < i_q). Here we recall that the all-zero sequence of length n is a dklr-sequence if and only if n ≤ min{l, r}. Let ε(i_q, r) = 1 if r ≥ i_q, and ε(i_q, r) = 0 if r < i_q. Then we obtain

r(u) − r(res(u)) = [the number of dklr-sequences with l = k of length i_q with their leftmost 1 at position j, where max{0, i_q − k − 1} ≤ j < i_q] + ε(i_q, r) = N_r(i_q).

Furthermore, on the basis of the above-mentioned observation it holds that

r(x) = r(u) + r(x) − r(u) = r(u) + r(res(x)) − r(res(u)) = r(res(x)) + N_r(i_q).

The theorem then follows by induction. Q.E.D.

We have found a simple method for mapping the elements of B_n onto consecutive integers. For practical applications we need a mapping of the elements of A_n onto the set {0, 1, ..., |A_n| − 1}, where |A_n| is the cardinality of A_n. Obviously the set A_n consists of the |A_n| largest elements of B_n. In addition it is clear that the number α of elements of B_n that are smaller than all the elements of A_n is equal to r(a); i.e., α = r(a), where a is the smallest element of A_n. In this way we have proved the following theorem.
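Theorem 1 can be checked numerically for small parameters. The sketch below is our own (all names are ours), with the constraint conventions assumed earlier: B_n relaxes only the leading-zero bound, and N_r(j) counts dklr-sequences of length j with l = k. It compares the brute-force lexicographic rank in B_n with the weighted sum of the theorem.

```python
from itertools import product

def runs_ok(seq, d, k, l, r):
    """dklr membership under the conventions assumed above (our sketch)."""
    n = len(seq)
    ones = [i for i, b in enumerate(seq) if b == 1]
    if not ones:
        return n <= min(l, r)
    return (ones[0] <= l
            and n - 1 - ones[-1] <= r
            and all(d <= b - a - 1 <= k for a, b in zip(ones, ones[1:])))

def B(n, d, k, r):
    """B_n: the all-zero word plus all words meeting the d-, k-, r-constraints
    with an unrestricted leading zero run (l = n acts as 'no bound')."""
    rest = [s for s in product((0, 1), repeat=n)
            if 1 in s and runs_ok(s, d, k, n, r)]
    return sorted(rest + [(0,) * n])        # tuples sort lexicographically

def N(j, d, k, r):
    """N_r(j): number of dklr-sequences of length j with l = k; N_r(0) = 1."""
    if j == 0:
        return 1
    return sum(runs_ok(s, d, k, k, r) for s in product((0, 1), repeat=j))

# Theorem 1: the rank of x in B_n equals sum_j x_j * N_r(j), where
# position j is counted from the right, x = (x_{n-1}, ..., x_0).
for n in (3, 4):
    for rank, x in enumerate(B(n, 1, 2, 1)):
        weighted = sum(bit * N(n - 1 - i, 1, 2, 1) for i, bit in enumerate(x))
        assert rank == weighted
```

For (d, k, r) = (1, 2, 1) and n = 3, for example, B_3 sorted is 000, 001, 010, 101, and the weighted sums are 0, 1, 2, 3.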
Theorem 2: The transformation t: A_n → N ∪ {0} defined by t(x) = r(x) − α for all x ∈ A_n is a one-to-one mapping from A_n onto the set {0, 1, ..., |A_n| − 1} which preserves the ordering of A_n, i.e., x ≺ y if and only if t(x) < t(y).

The number α can also be expressed in another way. In order to do so we define N_r^0(j), j > 0, to be the number of dk-sequences of length j with their leftmost element equal to 1 and satisfying the r-constraint, N_r^0(0) := 1. It is not difficult to show that

α = Σ_{j=1}^{n−l−1} N_r^0(j) + 1 = Σ_{j=0}^{n−l−1} N_r^0(j).

The numbers N_r(j) and N_r^0(j) are easily computed. They can be found by a straightforward computer search. A more sophisticated approach to finding these numbers can be based on the finite-state transition matrix corresponding to the dklr-sequences [5]. The conversion from integers to dklr-sequences of length n is analogous to Tang and Bahl's method and can be carried out as follows. Let T_0, ..., T_{n−1} be integers defined by T_i = the number of elements y in B_n smaller than the smallest element u in B_n with u_i = 1 and u_j = 0 for j > i, i.e.,

T_i = Σ_{j=1}^{i} N_r^0(j) + 1 = Σ_{j=0}^{i} N_r^0(j).

TABLE I
MERGING OF TWO n-SEQUENCES, WHERE 0^j STANDS FOR j CONSECUTIVE ZEROS

s + t + d ≤ k + 1:              merging bits 0^d
s + t + d > k + 1, if s < d:    merging bits 0^{d−s} 1 0^{s−1}
s + t + d > k + 1, if s ≥ d:    merging bits 1 0^{d−1}

From the definition and our assumption r ≥ d it immediately follows that T_0 < T_1 < ... < T_{n−1}. These integers are used for mapping consecutive integers onto dklr-sequences of length n as shown in the following theorem.

Theorem 3: Let x = (x_{n−1}, ..., x_0) be a dklr-sequence of length n. Then

a) x_t = 1 and x_j = 0 for t < j < n ⟺ T_t ≤ r(x) < T_{t+1}.

Furthermore, if x_t = 1 and x_j = 0 for t < j < n, then

b) T_{t−k−1} ≤ r(x) − N_r(t) < T_{t−d}.

Proof: a) This statement follows from the definition of T_i. b) Let x be a dklr-sequence of length n with x_t = 1 and x_j = 0 for t < j < n. Then T_{t−k−1} ≤ r(res(x)) < T_{t−d}. Hence b) follows, since r(x) − r(res(x)) = N_r(t). The conversion from integers to dk-sequences can therefore also be generalized to the conversion from
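The remark about computing these counting numbers can be illustrated with a small dynamic program over the length of the current zero run, in the spirit of the finite-state description of [5]. The code below is our own sketch (the function name and state encoding are ours), under the same assumed constraint conventions as before.

```python
from collections import defaultdict

def count_dklr(n, d, k, l, r):
    """Count dklr-sequences of length n with a run-length DP (our sketch).

    State (seen, z): whether a one has occurred yet, and the length of the
    current zero run.  The leading run grows unchecked and is tested against
    l only when the first one is written; the all-zero word is admitted at
    the end iff n <= min(l, r).
    """
    dp = defaultdict(int)
    dp[(False, 0)] = 1
    for _ in range(n):
        nxt = defaultdict(int)
        for (seen, z), c in dp.items():
            if not seen or z + 1 <= k:              # append a zero
                nxt[(seen, z + 1)] += c
            if (seen and d <= z <= k) or (not seen and z <= l):
                nxt[(True, 0)] += c                 # append a one
        dp = nxt
    total = sum(c for (seen, z), c in dp.items() if seen and z <= r)
    if n <= min(l, r):                              # the all-zero sequence
        total += dp[(False, n)]
    return total

# agrees with a brute-force search for small parameters
assert count_dklr(3, 1, 2, 1, 1) == 2               # 010 and 101
assert count_dklr(4, 1, 2, 1, 1) == 3               # 0101, 1001, 1010
```

The same state space, read as a transition matrix, is the finite-state description alluded to in the text.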
integers to dklr-sequences. The following simple encoding algorithm for dklr-sequences can be derived from this theorem. Given an integer I in the set R = {r(x) | x ∈ A_n} (I = 0 if x = 0), we first locate the largest possible t, 0 ≤ t < n, such that T_t ≤ I < T_{t+1}, and we make x_t = 1. Subtracting the contribution of x_t in I, we get a new integer I − N_r(t). Theorem 3 can be used again to find the next nonzero component of x. The second part of Theorem 3 assures us that x_t will be followed by at least d, but no more than k, zeros.

III. THE EFFICIENCY OF dklr-SEQUENCES

In modulation systems the dklr-sequences of length n cannot in general be cascaded without violating the dk-constraint at the boundaries. Inserting a number β of merging bits between adjacent n-sequences makes it possible to preserve the d- and k-constraints for the cascaded output sequence. The dk-sequences need β = d + 2 merging bits [1], whereas only β = d merging bits are required for dklr-sequences, provided that the parameters l and r are suitably chosen. Hence this method is more efficient, especially for small values of n. We shall now demonstrate two constructions of codes with merging rules of increasing complexity and efficiency.

Construction 1: Choose d, k, r, and n such that r + d ≤ k. Let l = k − d − r and β = d. Then the dklr-sequences of length n can be cascaded without violating the d- and k-constraints if the merging bits are all set to zero.

Construction 2: Choose d, k, and n such that 2d − 1 ≤ k. Let r = l = k − d and β = d. Then the dklr-sequences of length n can be cascaded if the merging bits are determined by the following rules. Let an n-sequence end with a run of s zeros (s ≤ r) while the next n-sequence starts with t (t ≤ l) leading zeros. Table I shows the merging rule for the β = d merging bits.

The number m of data bits that can be represented uniquely by a dklr-sequence of length n is given simply by m = ⌊log2 |A_n|⌋, where ⌊x⌋ is the greatest integer not greater than x.
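The merging idea can be illustrated in code. The rule below is our own realization of the principle behind Construction 2, not necessarily the exact table of the correspondence: the d merging bits are all zeros whenever that keeps the joint zero run at most k, and otherwise a single one is inserted, positioned so that both neighbouring zero runs stay within [d, k].

```python
def merging_bits(s, t, d, k):
    """Choose d merging bits between a block ending in s zeros and a block
    starting with t zeros (our own realization of the idea of Construction 2;
    assumes s, t <= k - d and 2d - 1 <= k)."""
    if s + t + d <= k:
        return [0] * d                      # plain zero padding keeps the run <= k
    if s < d:
        # here s >= 1, since s = 0 with t <= k - d always lands in the case above
        return [0] * (d - s) + [1] + [0] * (s - 1)
    return [1] + [0] * (d - 1)

def gaps_ok(seq, d, k):
    """Every zero run between two ones has length in [d, k]."""
    ones = [i for i, b in enumerate(seq) if b == 1]
    return all(d <= b - a - 1 <= k for a, b in zip(ones, ones[1:]))

d, k = 2, 4                                 # so r = l = k - d = 2
left  = [1, 0, 0, 1, 0, 0]                  # ends with s = 2 zeros
right = [0, 0, 1, 0, 0, 1]                  # starts with t = 2 zeros
merged = left + merging_bits(2, 2, d, k) + right
assert gaps_ok(merged, d, k)
```

Cascading with β = d bits this way is what gives the rate advantage over the β = d + 2 bits needed for plain dk-sequences.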
The ratio R of the number of data bits and the number of needed channel bits is called the information rate of the code. For example, the information rate of the codes based on the two above-mentioned constructions equals R = m/(n + d). The asymptotic information rate is the capacity C of Shannon's discrete noiseless run-length-limited channel [6], [7],

C = lim_{n→∞} (log2 |A_n|) / n.

The efficiency η can be defined as the ratio of the information rate R and the capacity C of the noiseless run-length-limited channel, η = R/C.

TABLE II
BLOCK CODES BASED ON CONSTRUCTION 1

d  k   n   R     C     η = R/C
1  7   12  8/13  0.68  0.91
2  17  14  8/16  0.55  0.91

TABLE III
BLOCK CODES BASED ON CONSTRUCTION 2

d  k   n   R     C     η = R/C
-  -   -   8/16  0.54  0.92
3  10  17  8/20  0.45  0.90

TABLE IV
BLOCK CODES BASED ON TANG AND BAHL'S CONSTRUCTION

d  k   n   R     C     η = R/C
1  5   12  8/15  0.65  0.82
3  8   17  8/22  0.43  0.86
4  10  19  8/25  0.38  0.85

In order to get some insight into the efficiency of the codes based on Constructions 1 and 2 we have considered some examples. For m = 8 and for d = 1, 2, 3, 4 and k = 2d, ..., 20 we have determined n in such a way that the information rate R was maximized. In order to compare our two constructions to Tang and Bahl's method we have calculated the corresponding capacities C and efficiencies η. The capacity of the noiseless run-length-limited channels was calculated by a method given in [1]. Our results can be summarized as follows. For small values of k, i.e., 2d ≤ k < 3d, Construction 2 is only slightly better than Tang and Bahl's method (approximately 5 percent), while the efficiency of Construction 1 was worse (5 to 10 percent). For larger values of k, however, Constructions 1 and 2 are clearly better. For those values of k the gain of Construction 2 compared to Tang and Bahl's method is most significant for d = 1, 2 (12 to 15 percent), while for d = 3, 4 the gain is equal to 9 percent. For large values of k, Constructions 1 and 2 have the same efficiency; for the other values of k, Construction 2 has a better efficiency than Construction 1.
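The capacity C can be estimated directly from the growth rate of the sequence counts. The sketch below is our own illustration (boundary conventions are ignored, since they do not affect the limit); for the classical (d, k) = (1, 3) channel it reproduces Shannon's well-known value C ≈ 0.5515.

```python
import math

def count_dk(n, d, k):
    """Number of (d,k)-sequences of length n that start with a one: every
    zero run between two ones has length in [d, k] (our sketch)."""
    dp = {0: 1}                              # state z = zeros since the last one
    for _ in range(n - 1):
        nxt = {}
        for z, c in dp.items():
            if z < k:                        # append a zero
                nxt[z + 1] = nxt.get(z + 1, 0) + c
            if z >= d:                       # append a one
                nxt[0] = nxt.get(0, 0) + c
        dp = nxt
    return sum(dp.values())

# C = lim (log2 |A_n|)/n, estimated from the ratio of consecutive counts
d, k, n = 1, 3, 400
c = math.log2(count_dk(n + 1, d, k)) - math.log2(count_dk(n, d, k))
print(round(c, 4))                           # about 0.5515 for (d, k) = (1, 3)
```

Equivalently, C = log2 λ, where λ is the largest eigenvalue of the transition matrix of the run-length DP; the ratio of consecutive counts converges to λ.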
Tables II, III, and IV give the results for m = 8 and d = 1, 2, 3, and 4; in order to limit the length of the tables, we have restricted k and n to those values which maximize the information rate R. We note that rates up to 95 percent of the channel capacity can be achieved. On average we observe a slight difference in the rates obtained by Constructions 1 and 2, approximately 5 percent in favor of Construction 2.

IV. CONCLUSION

Methods are described for the construction of run-length-limited codes on the basis of sequences of fixed length. Additional constraints on the maximum number of zeros at the beginning and end of a sequence, a generalization of Tang and Bahl's work, allow a more efficient merging of the sequences. For short lengths in particular, our method yields better efficiencies than those of Tang and Bahl.

IEEE TRANSACTIONS ON INFORMATION THEORY, VOL. IT-29, NO. 5, SEPTEMBER 1983

REFERENCES

[1] D. T. Tang and L. R. Bahl, "Block codes for a class of constrained noiseless channels," Inform. Contr., vol. 17, pp. 436-461, 1970.
[2] H. Kobayashi, "A survey of coding schemes for transmission or recording of digital data," IEEE Trans. Commun. Tech., vol. COM-19, pp. 1087-1100, 1971.
[3] K. A. Immink, "Modulation systems for digital audio disks with optical read out," in Proc. IEEE Int. Conf. on Acoustics, Speech, and Signal Processing, pp. 587-589, 1981.
[4] M. G. Pelchat and J. M. Geist, "Surprising properties of two-level bandwidth compaction codes," IEEE Trans. Commun., vol. COM-23, pp. 878-883, 1975.
[5] P. A. Franaszek, "Sequence-state methods for run-length-limited coding," IBM J. Res. Dev., vol. 14, pp. 376-383, 1970.
[6] C. E. Shannon, "A mathematical theory of communication," Bell Syst. Tech. J., vol. 27, pp. 379-423, 623-656, 1948.
[7] C. V. Freiman and A. D. Wyner, "Optimum block codes for noiseless input restricted channels," Inform. Contr., vol. 7, pp. 398-415, 1964.

On the Source Matching Approach for Markov Sources

DONG H. LEE

Abstract—The source matching approach is a universal noiseless source coding scheme used to approximate the solution of minimax coding. A numeric solution of the source matching approach is presented for the class of binary first-order Markov sources.

Manuscript received July 6, 1982; revised January 11, 1983. This paper is a part of a dissertation submitted to the University of Maryland, College Park, in partial fulfillment of the requirements for the Ph.D. degree. The author is with Hewlett-Packard Laboratories, Palo Alto, CA 94304.

I. INTRODUCTION

Noiseless source coding, as applied to sources whose statistics are either partially or completely unknown, is called universal noiseless source coding. Based on previous results, Davisson [1] formulated game-theoretic definitions for the problems of universal coding. According to him, the problem of minimax coding is to find a code minimizing the maximal redundancy over a given class of sources. In [2], Davisson and Leon-Garcia presented a coding scheme called "source matching" to approximate the solutions of minimax codes over an arbitrary class of stationary sources. A numerical solution for this source matching approach was obtained for the extended class of Bernoulli-like sources. This correspondence presents a numerical solution for the source matching approach for the class of binary first-order Markov sources.

II. SOURCE MATCHING APPROACH

In this section, the results of [2] are briefly reviewed for reference. Let A = {0, 1, ..., t} be a source alphabet set. We consider fixed-length to variable-length binary block encoding on source message blocks x = (x_1, x_2, ..., x_N) on the N-tuple alphabet space A^N. Let P_θ^N(x) be the probability density function (pdf) of x conditioned for each θ taking values in some index set Λ. Let W be the set of all the possible pdf's w on Λ. Let Δ be the set of all the real-valued (t + 1)^N-dimensional pdf's Q_N. The source matching approach finds the minimax redundancy R* over Λ,

R* = min_{Q_N ∈ Δ} max_{θ ∈ Λ} H(P_θ^N : Q_N),

where H(P_θ^N : Q_N) is the relative entropy between P_θ^N and Q_N,

H(P_θ^N : Q_N) = N^{-1} Σ_{x ∈ A^N} P_θ^N(x) log (P_θ^N(x)/Q_N(x)).

Henceforth, all the logarithms are of base 2. The interpretation of the source matching approach, as an approximation to the minimax solution over the set of uniquely decodable codes, is well demonstrated in [2, th. 2]. A numerical approach to the source matching solution is based on the next relationship [2, th. 3],

R* = max_{w ∈ W} ∫_Λ H(P_θ^N : Q*_N) w(dθ),    (1)

where Q*_N(x) is given by

Q*_N(x) = ∫_Λ P_θ^N(x) w(dθ), for all x ∈ A^N.    (2)

Note that the right side of (1) is merely the channel capacity between the source parameter space Λ and the source output space A^N, and hence the source matching problem can be converted to the channel capacity problem. It is also known [3, p. 96] that the number of values of the source parameter θ with nonzero probabilities achieving the channel capacity is no larger than the cardinality of the source output space, which is (t + 1)^N for alphabet A^N. The capacity of a finite-input to finite-output channel can be obtained numerically by applying the Blahut-Arimoto algorithm [4], [5]. Furthermore, if any sufficient statistic of A^N is found, its application will reduce the computational complexity of the numerical solution, since its cardinality is much smaller than (t + 1)^N in most cases.

III. MARKOV SOURCES

In this section we consider the source matching approach for the class of first-order Markov sources with binary alphabet A = {0, 1}. It is assumed that the Markov source is stationary with stochastic matrix

[P_00 P_01; P_10 P_11] = [1 − θ_0, θ_0; θ_1, 1 − θ_1],

where P_ij for i, j = 0, 1 represents the transition probability from the previous state i to the present state j. The stationary pdf π = (π_0, π_1) is uniquely expressed as

π = (θ_1(θ_0 + θ_1)^{-1}, θ_0(θ_0 + θ_1)^{-1}).

The domain Λ of the source parameter θ = (θ_0, θ_1) becomes the Cartesian product [0, 1] × [0, 1] of the closed interval [0, 1].

We now present a sufficient statistic defined on A^N. Let n be the Hamming weight of 1's of the source message block (x_1, ..., x_N). With n fixed, let u_i for i = 1, 2, ..., n be the number of runs of 1's with run length i. As an example, N = 6, n = 4, u_1 = 2, u_2 = 1, and u_3 = u_4 = 0 for x = (101011). The total number of all the runs of 1's is upper-bounded as

Σ_{i=1}^{n} u_i ≤ N − n + 1.

With x_1 and x_N, the first and last digits of x = (x_1, ..., x_N), respectively, (n, u_1, ..., u_n, x_1, x_N) uniquely specifies P_θ^N(x) and is an eligible sufficient statistic. Realizing that the probability of x with (n, u_1, ..., u_n, 0, 1) is identical to that with (n, u_1, ..., u_n, 1, 0), the sufficient

0018-9448/83/0900-0754$01.00 © 1983 IEEE
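The Blahut-Arimoto algorithm invoked above for the channel-capacity computation is easy to sketch. The version below is our own generic illustration (not taken from the correspondence); it is checked on a binary symmetric channel whose capacity 1 − h(0.1) ≈ 0.531 is known in closed form.

```python
import math

def blahut_arimoto(W, iters=500):
    """Capacity (bits per channel use) of a discrete memoryless channel with
    transition matrix W[x][y], via the Blahut-Arimoto iteration (a sketch)."""
    m = len(W)
    p = [1.0 / m] * m                        # input distribution, updated in place
    for _ in range(iters):
        # output distribution induced by p
        q = [sum(p[x] * W[x][y] for x in range(m)) for y in range(len(W[0]))]
        # c_x = 2 ** D( W(.|x) || q ), skipping zero-probability outputs
        c = [2.0 ** sum(W[x][y] * math.log2(W[x][y] / q[y])
                        for y in range(len(W[0])) if W[x][y] > 0)
             for x in range(m)]
        z = sum(p[x] * c[x] for x in range(m))
        p = [p[x] * c[x] / z for x in range(m)]
    return math.log2(z)                      # converges to the capacity

# binary symmetric channel with crossover probability 0.1
eps = 0.1
W = [[1 - eps, eps], [eps, 1 - eps]]
print(round(blahut_arimoto(W), 4))           # about 0.531
```

In the source matching setting, the rows of W would be the conditional pdf's P_θ^N (or, more economically, their restriction to a sufficient statistic), and the maximizing p plays the role of the prior w.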
\begin{document} \title{Nilpotent aspherical Sasakian manifolds} \begin{abstract} We show that every compact aspherical Sasakian manifold with nilpotent fundamental group is diffeomorphic to a Heisenberg nilmanifold. \end{abstract} \section{Introduction} The interaction between topological constraints and geometric structures is a classical topic. If we are interested only in the homotopy type of the underlying manifold, then techniques of algebraic topology are usually sufficient. For example, Benson and Gordon in~\cite{bensongordon} show that a compact nilpotent aspherical manifold that admits a K\"ahler structure is homotopy equivalent to a torus. It is known that Sasakian manifolds play the same role in contact geometry as Kähler manifolds in symplectic geometry. In analogy with the result of Benson and Gordon, a compact nilpotent aspherical manifold that admits a Sasakian structure is homotopy equivalent to a Heisenberg nilmanifold~\cite{imrn,bazzoni}. Recall that an aspherical manifold is a manifold all of whose homotopy groups other than the fundamental group are trivial. An aspherical manifold is nilpotent if and only if its fundamental group is nilpotent. According to the Borel conjecture, any two compact aspherical manifolds with the same fundamental group are homeomorphic. This conjecture was proven for a wide class of groups, including nilpotent groups (cf.~\cite{lueck}). In particular, a compact nilpotent aspherical manifold that admits a Sasakian structure is homeomorphic to a Heisenberg nilmanifold. One of the most surprising results of the 20th century was Milnor's discovery in~\cite{milnor} of exotic spheres, i.e. spheres that are homeomorphic but not diffeomorphic to the standard sphere. Now there are many known examples of topological manifolds that admit non-equivalent smooth structures.
If one fixes a smooth structure on a topological manifold, which is quite natural from a differential geometer's point of view, then the problems of existence of compatible geometric structures on it become much harder to solve and require an \emph{ad hoc} approach. For example, the conjecture of Boyer, Galicki and Kollár \cite{bgk}, which predicts that every parallelizable exotic sphere admits a Sasakian structure, is still open. In this paper we study the existence of Sasakian structures on compact aspherical manifolds with nilpotent fundamental group. Among all aspherical smooth manifolds with nilpotent fundamental group the most studied class is the class of nilmanifolds. A \emph{nilmanifold} is a compact quotient of a $1$-connected nilpotent Lie group by a discrete subgroup, with the smooth structure inherited from the Lie group. One can show that a compact aspherical nilpotent manifold is homotopy equivalent to a nilmanifold (see Section~\ref{topology} for more details). It then follows from the validity of the Borel conjecture in the nilpotent case that the manifold is homeomorphic to a nilmanifold. If it is not diffeomorphic to a nilmanifold, it is called an \emph{exotic nilmanifold}. By~\cite[Lemma~4]{exoticnil} a connected sum of a nilmanifold with an exotic sphere is always an exotic nilmanifold. Thus exotic nilmanifolds do exist. Moreover, it is not difficult to show the existence of contact exotic nilmanifolds. By the main result of Meckert in \cite{contactsum}, the connected sum of two contact manifolds carries a contact structure. Moreover, it is shown in~\cite{bgk} that there are infinitely many Sasakian (hence contact) exotic spheres. Thus the connected sum of a contact nilmanifold and a Sasakian exotic sphere provides an example of a contact exotic nilmanifold. The main result of the article is the following theorem. \begin{theorem} \label{main2} Let $M^{2n+1}$ be a compact aspherical Sasakian manifold with nilpotent fundamental group.
Then $M$ is diffeomorphic to the Heisenberg nilmanifold $\Gamma \backslash H(1,n)$, where $\Gamma$ is a lattice in $H(1,n)$ isomorphic to $\pi_1(M)$. \end{theorem} Equivalently, the above result can be stated as the non-existence of compact Sasakian exotic nilmanifolds. Theorem~\ref{main2} can be seen as an odd-dimensional version of~\cite{cortes,tralle}, where it is shown that every compact aspherical Kähler manifold with nilpotent fundamental group is biholomorphic to a complex torus. As a corollary, Baues and Kamishima derived in~\cite{baues2020} the result of Theorem~\ref{main2} under the stronger assumption that the Sasakian structure is \emph{regular}. Their approach, based on passing to the quotient of the Reeb vector field action, cannot be extended to the non-regular case. Our main result and the similar one for the Kähler case provide evidence that exotic nilmanifolds do not admit geometric structures as rigid as those on nilmanifolds. Another result that points in the same direction was proved in a recent article~\cite{anosov}, where it is shown that compact exotic nilmanifolds admit no Anosov $\mathbb{Z}^r$-action without rank-one factor. Baues and Cortés also showed in~\cite{cortes} that if $X$ is a compact aspherical Kähler manifold with virtually solvable fundamental group, then $X$ is biholomorphic to a finite quotient of a complex torus. In the same vein we obtain the following. \begin{corollary} \label{solvable} If $M^{2n+1}$ is a compact aspherical Sasakian manifold with (virtually) solvable fundamental group, then $M$ is diffeomorphic to a finite quotient of a Heisenberg nilmanifold. \end{corollary} \begin{proof} Let $M^{2n+1}$ be a compact aspherical Sasakian manifold with virtually solvable fundamental group. Denote by $\overline{ M }$ a finite cover of $M$ with solvable fundamental group. The Sasakian structure on $M$ transfers to a Sasakian structure on $\overline{ M }$.
By the result of Bieri in~\cite{bieri} (see also \cite{bieri_book}), every solvable group for which Poincaré duality holds is torsion-free and polycyclic. In~\cite{kasuya}, Kasuya showed that if the fundamental group of a compact Sasakian manifold is polycyclic, then it is virtually nilpotent. Hence there is a finite (compact) cover $\widetilde{M}$ of $\overline{ M }$ such that $\pi_1(\widetilde{M})$ is nilpotent. The Sasakian structure on $\overline{ M }$ transfers to a Sasakian structure on $\widetilde{M}$. As $\widetilde{M}$ is a finite cover of the aspherical manifold $M$, the manifold $\widetilde{M}$ is also aspherical. By~Theorem~\ref{main2}, the manifold $\widetilde{M}$ is diffeomorphic to a Heisenberg nilmanifold of dimension $2n+1$. Hence $M$ is diffeomorphic to a finite quotient of a Heisenberg nilmanifold. \end{proof} Despite the analogy between Sasakian and Kähler geometry, the proof of Theorem~\ref{main2} is significantly different from the proofs in~\cite{cortes,tralle}. The main obstacle to imitating their proofs is that there is no suitable version of the Albanese map for Sasakian manifolds. The main ingredients of the proof are the following. We start by obtaining an integration result for quasi-isomorphisms which is of independent interest, as it holds for all compact nilpotent aspherical manifolds. Namely, in Theorem~\ref{main1} we show that given an exotic nilmanifold $M$ homeomorphic to $N_\Gamma:=\Gamma \backslash G$ and a quasi-isomorphism $\rho\colon \bigwedge T^*_e G \to \Omega^\bullet(M)$, there exists a smooth homotopy equivalence $h\colon M \to N_\Gamma$ that induces $\rho$. Of course, such $h$ is not a diffeomorphism or even a homeomorphism in general.
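For the reader's convenience, we recall one standard matrix presentation of the Heisenberg group $H(1,n)$ appearing in Theorem~\ref{main2} (normalization conventions vary in the literature):
\begin{equation*}
\begin{aligned}
H(1,n) = \Big\{\, \begin{pmatrix} 1 & x^{T} & z \\ 0 & I_n & y \\ 0 & 0 & 1 \end{pmatrix} \,\,\Big|\,\, x, y \in \mathbb{R}^n,\ z \in \mathbb{R} \,\Big\}.
\end{aligned}
\end{equation*}
Its Lie algebra has a basis $X_1,\dots,X_n$, $Y_1,\dots,Y_n$, $Z$ in which the only non-trivial brackets are $[X_i, Y_i] = Z$, and the subgroup of matrices with integer entries is a lattice, so the corresponding quotient is a compact nilmanifold of dimension $2n+1$.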
In Section~\ref{sasakianmanifolds} we prove that for every compact aspherical Sasakian manifold with nilpotent fundamental group $\Gamma$, there exists a quasi-isomorphism $\rho\colon \bigwedge T^*_e G \to \Omega^\bullet(M)$ with some good properties (see Corollaries~\ref{imquasiiso} and~\ref{etaheis}). To achieve this we will construct and study a new real homotopy model for compact Sasakian manifolds. Later on, the obtained properties of $\rho$ are used to show that the corresponding homotopy equivalence $h$ is a diffeomorphism. In the final section we complete the proof of our main result. Starting with a quasi-isomorphism $\rho$ with the required rigidity properties, we consider the corresponding smooth homotopy equivalence $h\colon M \to N_\Gamma$. Write $(\varphi,\xi,\eta)$ for the almost contact structure on $M$. Then we show that there is a left-invariant contact form $\eta_\mathfrak{h}$ on $N_\Gamma$ and a basic function $f$ on $N_\Gamma$ such that $h^*\eta_\mathfrak{h} = \eta + (df)\circ \varphi$. Notice that $M\times \mathbb{R}$ and $N_\Gamma \times \mathbb{R}$ have natural Kähler structures induced from the Sasakian structures on $M$ and $N_\Gamma$. Since $h$ is not a contactomorphism in general, the product map $h\times \mathrm{Id}\colon M \times\mathbb{R} \to N_\Gamma \times \mathbb{R}$ is not a holomorphic map in general. So we cannot apply the machinery of complex analytic geometry directly. To circumvent this problem we follow $h\times \mathrm{Id}$ with the translation map $\tilde{f} \colon N_\Gamma \times \mathbb{R}\to N_\Gamma \times \mathbb{R}$ given by $\tilde{f}(x,t) = (x, t + f(x))$. We denote the resulting map by $h_f$. It is straightforward that $h$ is a diffeomorphism if and only if $h_f$ is a diffeomorphism. It is also not very difficult to show that $h_f$ is surjective, universally closed and proper. Interestingly, $h_f$ turns out to be holomorphic. Proving this fact is the most laborious step in Section~\ref{PROOF}.
Then using the embedded Hironaka resolution of singularities we show that $h_f$ is a finite map. This, combined with several deep results from complex analytic geometry, implies that $h_f$ is a biholomorphism, and thus a diffeomorphism. The paper is organized as follows. Section~\ref{prelim} contains the necessary preliminaries about non-abelian calculus and Sasakian manifolds. In Section~\ref{topology} we prove the integration result discussed above. In Section~\ref{sasakianmanifolds} we construct a new real model for compact Sasakian manifolds, and establish the existence of a quasi-isomorphism $\rho$ with sufficiently rigid properties. In the final section we complete the proof of Theorem~\ref{main2}. \section{Preliminaries} \label{prelim} \subsection{\it Frölicher-Nijenhuis calculus} \label{FNc} For a general treatment of Frölicher-Nijenhuis calculus, we refer to~\cite{kolarbook}. Given a smooth manifold $M$ and a vector bundle $E$ over $M$, for every $\psi \in \Omega^k(M, TM)$ one defines an operator $i_\psi\colon \Omega^\bullet(M, E)\to \Omega^\bullet(M, E) $ of degree $k-1$. If $\xi$ is a vector field and $\phi$ is an endomorphism of $TM$, the general definition specializes to \begin{equation*} \begin{aligned} & i_\xi \omega (X_1,\dots, X_{p-1}) = \omega (\xi, X_1, \dots, X_{p-1})\\ & i_\phi \omega (X_1,\dots, X_p) = \sum_{j=1}^p \omega ( X_1, \dots,\phi X_j, \dots, X_p), \end{aligned} \end{equation*} where $\omega \in \Omega^p(M, E)$. In the particular case when $E$ is the trivial one-dimensional vector bundle over $M$, we get operators $i_\psi$ on $\Omega^\bullet(M)$. Next, we define the operators $\mathcal{L}_\psi$ on $\Omega^\bullet (M)$ by $\mathcal{L}_\psi := i_\psi d + (-1)^{k} d i_\psi $. The Frölicher-Nijenhuis bracket on $\Omega^\bullet(M, TM)$ is defined by the characteristic property $\left[ \mathcal{L}_{\psi_1}, \mathcal{L}_{\psi_2} \right] = \mathcal{L}_{[\psi_1,\psi_2]_{FN}}$.
Here on the left side $[\ ,\ ]$ stands for the graded commutator of operators. Interestingly, $[\xi,\psi]_{FN} = \mathcal{L}_\xi \psi$ for any vector field $\xi$. We will use the following formula due to Frölicher and Nijenhuis \begin{equation*} \begin{aligned} {} [\mathcal{L}_{\psi}, i_\phi] = i_{[\psi,\phi]_{FN}} - (-1)^{pq} \mathcal{L}_{i_\phi \psi}, \end{aligned} \end{equation*} where $p$ and $q$ are the degrees of the operators $\mathcal{L}_\psi$ and $i_\phi$, respectively. We will also use the following ``linearity'' of $\mathcal{L}_{\psi}$ over $\Omega^\bullet(M)$ \begin{equation} \label{liewedge} \begin{aligned} \mathcal{L}_{\beta \wedge \psi}\, \omega = \beta\wedge \mathcal{L}_\psi\,\omega + (-1)^{p+k} {d\beta}\wedge i_\psi\, \omega, \end{aligned} \end{equation} where $\beta \in \Omega^k(M)$, $\psi\in \Omega^p(M,TM)$, and $\omega \in \Omega^\bullet(M)$. \subsection{\it Sasakian manifolds} \emph{An almost contact structure} on a manifold $M$ is a triple $(\varphi, \xi, \eta)$ where $\varphi\in \Omega^1(M, TM)$, $\xi$ is a vector field and $\eta \in \Omega^1(M)$ such that $\varphi^2 = - \mathrm{Id} + \xi \otimes \eta$ and $\eta(\xi)=1$. Given an almost contact structure on $M$, one can define an almost complex structure $J$ on $M \times \mathbb{R}$ by \begin{equation*} \begin{aligned} J\left(X, a\ddt\right) = \left( \varphi X - a \xi , \eta(X) \ddt \right). \end{aligned} \end{equation*} We say that $(\varphi, \xi, \eta)$ is \emph{normal} if $J$ is integrable. It can be verified by a simple computation (cf.~\cite[Sec.~6.1]{blair}) that the above definition is equivalent to the vanishing of four tensors: \begin{equation} \label{sastensors} \begin{aligned} \mathcal{L}_\xi \eta=0, \quad \mathcal{L}_\xi\varphi=0,\quad \mathcal{L}_\varphi \eta=0,\quad [\varphi,\varphi]_{FN} + 2 d\eta\otimes \xi=0.
\end{aligned} \end{equation} Actually, it can be shown that the vanishing of $\left[ \varphi, \varphi \right]_{FN} + 2d\eta \otimes \xi$ implies that $(\varphi, \xi, \eta)$ is normal. On every normal almost contact manifold, one also has \begin{equation} \label{etaphi} \begin{aligned} \eta\circ \varphi =0,\quad \varphi\xi =0,\quad i_\varphi d\eta =0,\quad i_\xi d\eta =0. \end{aligned} \end{equation} A normal almost contact manifold $(M,\varphi, \xi, \eta)$ with Riemannian metric $g$ is called \emph{Sasakian} if \begin{equation*} \begin{aligned} g(\varphi X , \varphi Y) = g(X,Y) - \eta(X)\eta(Y),\quad d\eta (X,Y) = 2\Phi(X , Y), \end{aligned} \end{equation*} where $\Phi(X,Y) = g(X, \varphi Y)$. It can be checked (cf. \cite[Section~6.5]{blair}) that $M$ is Sasakian if and only if $\tilde{g} := e^{2t}( g + dt^2)$ together with $J$ defined above gives a Kähler structure on $M \times \mathbb{R}$. \begin{remark} \label{exact} The corresponding Kähler form $e^{2t}(\Phi + dt \wedge \eta ) $ is exact. Indeed, $d(e^{2t}\eta) = 2e^{2t}\, dt\wedge \eta + e^{2t}\, d\eta = 2e^{2t}( dt\wedge \eta + \Phi)$, so the Kähler form equals one half of $d ( e^{2t} \eta)$. \end{remark} \subsection{\it Sullivan models} A~\emph{differential graded algebra} is a graded algebra $A = \bigoplus_{k\ge 0} A_k$ equipped with a derivation $d$ of degree one such that $d^2=0$. It is \emph{commutative} if $ab = (-1)^{k\ell} ba$, for all $a \in A_k$ and~$b\in A_\ell$. The motivating example of a commutative differential graded algebra (CDGA) is provided by the de Rham algebra $\Omega^\bullet(M)$ of a manifold $M$. For CDGAs $A$ and $B$, a \emph{homomorphism} $f\colon A\to B$ of CDGAs is a degree preserving homomorphism of algebras which commutes with $d$. Each homomorphism of CDGAs $f\colon A\to B$ induces a homomorphism $H^\bullet(f) \colon H^\bullet(A) \to H^\bullet(B)$ of graded algebras by $H^\bullet(f)([a]) = [f(a)]$ for $a\in A$ such that $da =0$. We say that a homomorphism of CDGAs $f\colon A\to B$ is a quasi-isomorphism if $H^\bullet(f)$ is an isomorphism.
Two CDGAs $A$ and $B$ are said to be \emph{quasi-isomorphic} if there are CDGAs \begin{equation*} \begin{aligned} A_0=A, A_1, \dots, A_{2k} = B, \end{aligned} \end{equation*} and quasi-isomorphisms $f_j\colon A_{2j+1} \to A_{2j}$, $h_j\colon A_{2j+1} \to A_{2j+2}$ for $j$ between $0$ and $k-1$. A CDGA $A$ is called a \emph{Sullivan} algebra, if there is a generating set of homogeneous elements $a_i\in A$ indexed by a well ordered set $I$, such that \begin{enumerate}[(i)] \item $da_k$ lies in the subalgebra generated by the elements $a_j$ with $j< k$; \item $A$ has a basis consisting of the elements \begin{equation*} \begin{aligned} a_{j_1}^{r_1} \dots a_{j_n}^{r_n}, \end{aligned} \end{equation*} with $j_1 < \dots < j_n$, $r_k=1$ if the degree of $a_{j_k}$ is odd, and $r_k \in \mathbb{N}$ if the degree of $a_{j_k}$ is even. \end{enumerate} The motivating example of a Sullivan CDGA is the Chevalley-Eilenberg algebra $\bigwedge \mathfrak{g}^*$ of a nilpotent Lie algebra $\mathfrak{g}$. We will need the following lifting property of Sullivan CDGAs. \begin{proposition}[{cf. \cite[Proposition 12.9 ]{felixhalperin}}] \label{lifting} Suppose $q\colon A\to B$ is a quasi-isomorphism of CDGAs and $f \colon D \to B $ a homomorphism of CDGAs. If $D$ is Sullivan, then there is a homomorphism $h\colon D \to A$ of CDGAs, such that $H^\bullet(q)\circ H^\bullet(h) = H^\bullet (f)$. \end{proposition} An algebra $A$ quasi-isomorphic to $\Omega^\bullet(M)$ is called a \emph{real homotopy model} of $M$. Suppose $M$ and $N$ are homotopy equivalent smooth manifolds. Let $F\colon M \to N$ and $G\colon N\to M$ be mutually inverse homotopy equivalences. By the Whitney Approximation Theorem (cf.~\cite[Thm.~6.26]{leebook}), there are smooth maps $\widetilde{F} \colon M \to N$ and $\widetilde{G} \colon N \to M$ homotopic to $F$ and $G$, respectively. Then $\widetilde{F}$ and $\widetilde{G}$ are mutually inverse smooth homotopy equivalences.
This implies that $\widetilde{F}^* \colon \Omega^\bullet(N)\to \Omega^\bullet(M)$ and $\widetilde{G}^* \colon \Omega^\bullet(M) \to \Omega^\bullet(N)$ are quasi-isomorphisms. Hence the following proposition holds. \begin{proposition} \label{realhomotopymodels} Suppose $M$ and $N$ are homotopy equivalent smooth manifolds. Then there are quasi-isomorphisms $\Omega^\bullet(M) \to \Omega^\bullet(N)$ and $\Omega^\bullet(N) \to \Omega^\bullet(M)$. In particular, $M$ and $N$ have the same real homotopy models. \end{proposition} \subsection{\it Elements of non-abelian calculus} In this subsection we give a short overview of non-abelian calculus, i.e. the extension of some classical calculus results to Lie-group-valued functions. A good reference for this topic is \cite[Ch.~3]{sharpe}, where it is attributed to Elie Cartan. Let $G$ be a $1$-connected Lie group and $M$ a manifold. Denote by $\mathfrak{g}$ the Lie algebra of $G$ considered as the tangent space $T_eG$. For every smooth map $f\colon M \to G$, we define the~\emph{(left) Darboux derivative} $\mathrm{D} f \colon T M \to \mathfrak{g}$ by \begin{equation*} \begin{aligned} \mathrm{D} f \colon & T_p M \longrightarrow\mathfrak{g}\\[0ex] & X \mapsto (T_e L_{f(p)})^{-1} (T_p f) X, \end{aligned} \end{equation*} where $L_{g}\colon G\to G$ denotes the left multiplication with $g\in G$. This differential can be expressed using the \emph{Maurer-Cartan form} $\mu_G \in \Omega^1(G, \mathfrak{g})$ on $G$. By definition, $\mu_G$ is given by \begin{equation*} \begin{aligned} \mu_G \colon & T_g G \longrightarrow\mathfrak{g}\\[0ex] & Y \mapsto (T_e L_{g})^{-1} Y, \end{aligned} \end{equation*} i.e. $\mu_G = \mathrm{D}(\mathrm{Id}_G)$. One can show that $\mathrm{D} f = \mu_G\circ Tf = f^* \mu_G$. Since $\mu_G$ is left-invariant we get for any $g\in G$ \begin{equation*} \begin{aligned} \mathrm{D} (L_g \circ f) = f^* L_g^* \mu_G = f^* \mu_G = \mathrm{D} f. 
\end{aligned} \end{equation*} Define $d\omega\in \Omega^2(M,\mathfrak{g})$ for $\omega \in \Omega^1(M, \mathfrak{g})$ as the unique $\mathfrak{g}$-valued $2$-form such that $d(\alpha \circ \omega) = \alpha \circ d \omega$ for all $\alpha \in \mathfrak{g}^*$. For $\mathfrak{g}$-valued $1$-forms $\omega_1$ and $\omega_2$, the commutator $[\omega_1, \omega_2] \in \Omega^2(M, \mathfrak{g})$ is defined by \begin{equation*} \begin{aligned} {} [\omega_1, \omega_2] (X,Y) = [\omega_1(X), \omega_2(Y)] - [ \omega_1(Y), \omega_2(X)]. \end{aligned} \end{equation*} In the case $\omega_1 = \omega_2 = \omega$ one gets \begin{equation*} \begin{aligned} {} [\omega, \omega] (X,Y) = 2[\omega(X), \omega(Y)]. \end{aligned} \end{equation*} It can be checked that $\mathrm{D} f \in \Omega^1(M, \mathfrak{g})$ is \emph{flat} in the sense that \begin{equation*} \begin{aligned} d(\mathrm{D} f) + \frac{1}{2}[\mathrm{D} f, \mathrm{D} f] =0. \end{aligned} \end{equation*} Let \begin{equation*} \begin{aligned} \Omega^1_{flat}(M, \mathfrak{g}) := \Big\{\, \omega \in \Omega^1(M, \mathfrak{g}) \,\,\Big|\,\, d \omega + \frac{1}{2} [\omega, \omega] = 0\,\Big\} \end{aligned} \end{equation*} be the set of flat $\mathfrak{g}$-valued forms on $M$. We say that $f\colon M \to G$ is a \emph{primitive} of $\beta \in \Omega^1_{flat}(M, \mathfrak{g})$ if $\mathrm{D} f = \beta$. The Fundamental Theorem of Calculus extends to the non-abelian setting as follows. \begin{theorem}[{\cite[Section 3.7]{sharpe}}] \label{sharpe} Given a $1$-connected pointed manifold $(X,x_0)$ and $\beta\in \Omega^1_{flat}(X,\mathfrak{g})$, for each $g\in G$ there is a unique primitive $P_{g,\beta}\colon X \to G $ of $\beta$ such that $P_{g,\beta}(x_0) = g$. Moreover, $L_{g'} \circ P_{g,\beta} = P_{g'g,\beta}$ for any $g'\in G$. \end{theorem} Consider an interval $[a,b] \subset \mathbb{R}$ as a pointed manifold with the base point $a$. 
By Theorem~\ref{sharpe}, for every $\beta \in \Omega^1([a,b], \mathfrak{g})$ (automatically flat, as there are no non-zero $2$-forms on an interval) there is a unique path $\gamma = P_{e,\beta}\colon [a,b] \to G$ such that $\gamma(a) = e$ and $\mathrm{D}\gamma = \beta$. It is natural to denote the end point of this path by $ \int_a^b \beta$. Notice that with this notation $\gamma(t) = \int_a^t \beta$. Indeed, $\gamma(t)$ is the end point of the path $\gamma|_{[a,t]} \colon [a,t] \to G$, whose Darboux derivative is $\beta|_{[a,t]}$ and which starts at $e$. \begin{remark} \label{nau} If $G = (\mathbb{R},+)$, then the Darboux derivative coincides with the usual derivative, and the integrals of $\mathfrak{g}$-valued $1$-forms on $[a,b]$ coincide with the usual integrals of $1$-forms on $[a,b]$. \end{remark} Let $s\in[a,b]$. We can express $P_{e,\beta}$ as the concatenation of $P_{e,\beta|_{[a,s]}}$ and $P_{g,\beta|_{[s,b]}}$, where $g = \int_a^s \beta$ is the end point of the path $P_{e,\beta|_{[a,s]}}$. Hence the end point of $P_{e,\beta}$ coincides with the end point of $P_{g,\beta|_{[s,b]}} =L_g \circ P_{e,\beta|_{[s,b]}}$. In the integral notation this observation can be written as \begin{equation} \label{mult} \begin{aligned} \int_a^b \beta = g \cdot \int_s^b \beta = \int_a^s \beta \cdot \int_s^b \beta. \end{aligned} \end{equation} We will refer to the above formula as the \emph{multiplicative property} of non-abelian integrals. We will also use the invariance of non-abelian integration under reparametrization. Given a smooth increasing bijection $h\colon [c,d]\to [a,b]$, we have \begin{equation*} \begin{aligned} \int_c^d h^* \beta = \int_a^b \beta. \end{aligned} \end{equation*} Indeed, the left side is the end point of $P_{e,h^*\beta} \colon [c,d] \to G$ which is the unique primitive of $h^*\beta$ starting at $e$. But the path $P_{e,\beta}\circ h$ is also a primitive of $h^*\beta$ and also starts at $e$. Hence $P_{e,h^*\beta} = P_{e,\beta} \circ h$.
It is left to observe that the end point of $P_{e,\beta} \circ h$ is the end point of $P_{e,\beta}$, which is $\int_a^b \beta$. Now we return to a general $1$-connected pointed manifold $(X,x_0)$. Fix $\beta \in \Omega^1_{flat}(X, \mathfrak{g})$. If $\gamma\colon [0,1] \to X$ is a path in $X$ that starts at $x_0$, then $P_{e,\beta} \circ \gamma\colon [0,1] \to G$ starts at $e$ and its Darboux derivative is $\gamma^* \beta$. Indeed, \begin{equation*} \begin{aligned} (P_{e,\beta} \circ \gamma)^* \mu_G = \gamma^*\, P_{e,\beta}^*\,\mu_G = \gamma^*\, \mathrm{D} P_{e,\beta} = \gamma^* \beta. \end{aligned} \end{equation*} Hence $P_{e,\beta} \circ \gamma = P_{e,\gamma^*\beta}$. The path $P_{e,\gamma^*\beta}$ will sometimes be referred to as the \emph{development of $\beta$ along $\gamma$}. We have \begin{equation*} \begin{aligned} P_{e,\beta}(\gamma(1)) = P_{e,\gamma^*\beta}(1) = \int_0^1 \gamma^* \beta \in G, \end{aligned} \end{equation*} i.e. the map $P_{e,\beta}$ can be computed using path developments. Namely, given $x \in X$, choose an arbitrary path $\gamma\colon [0,1] \to X$ that starts at $x_0$ and ends at $x$. Then \begin{equation} \label{pebeta} \begin{aligned} P_{e,\beta}(x) = P_{e,\gamma^*\beta}(1) = \int_0^1 \gamma^* \beta. \end{aligned} \end{equation} Now let $(M, x_0)$ be an arbitrary connected pointed manifold. Denote by $\pi \colon \uc{M} \to M$ the universal cover of $M$. We will consider the points in $\uc{M}$ as homotopy classes, relative to the endpoints, of paths $\gamma \colon [0,1] \to M$ starting at the base point $x_0$. Then $\pi([\gamma]) = \gamma(1)$ for $[\gamma] \in \widetilde{M}$. We will consider $\widetilde{M}$ as a pointed manifold with the base point $\tilde{x}_0 = [ t\mapsto x_0]$. If $\omega$ is a flat $\mathfrak{g}$-valued $1$-form on $M$, then it is not difficult to check that $\pi^* \omega \in \Omega^1_{flat}(\uc{M},\mathfrak{g})$.
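Indeed, pullback commutes both with the de Rham differential and with the pointwise bracket, so
\begin{equation*}
\begin{aligned}
d(\pi^* \omega) + \frac{1}{2}\, [\pi^*\omega, \pi^*\omega] = \pi^*\Big( d\omega + \frac{1}{2}\,[\omega,\omega] \Big) = 0.
\end{aligned}
\end{equation*}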
Now we will express the primitive $P_{e,\pi^* \omega}$ of $\pi^* \omega$ in terms of developments of $\omega$ along paths in $M$. For every $[\gamma] \in \uc{M}$ there is a canonical path $\tilde{\gamma}$ in $\uc{M}$ defined by $\tilde\gamma (t ) = [s \mapsto \gamma(st)]$. The path $\tilde{\gamma}$ lifts $\gamma$ and connects $\tilde{x}_0$ to $[\gamma]$ . As $\pi \circ \tilde\gamma = \gamma$, we get $\tilde\gamma^* \pi^* \omega = \gamma^* \omega$. Applying~\eqref{pebeta}, we get \begin{equation} \label{pepiomega} \begin{aligned} P_{e,\pi^*\omega}([\gamma]) = \int_0^1 \tilde\gamma^* \pi^* \omega = \int_0^1 \gamma^*\omega . \end{aligned} \end{equation} This formula implies the following multiplicative properties of $P_{e,\pi^*\omega}$. \begin{proposition} \label{prop:mult} Let $(M,x_0)$ be a connected pointed manifold and $\omega\in \Omega^1_{flat}(M, \mathfrak{g})$. Consider $\pi_1(M) = \pi_1(M,x_0)$ as a subset of $\uc{M}$. Given $[\ell] \in \pi_1(M)$ and $[\gamma] \in \uc{M}$, we have $P_{e,\pi^*\omega} ([\ell * \gamma]) = P_{e,\pi^*\omega}([\ell]) P_{e,\pi^*\omega}([\gamma]) $, where \begin{equation*} \begin{aligned} (\ell * \gamma) (t) = \begin{cases} \ell(2t), & t\le \frac{1}{2}\\ \gamma(2t-1), & t\ge \frac{1}{2}. \end{cases} \end{aligned} \end{equation*} In particular, the restriction of $P_{e,\pi^* \omega} \colon \uc{M} \to G$ to $\pi_1(M)$ is a homomorphism of groups. \end{proposition} \begin{proof} Define $h\colon [0,2] \to [0,1]$ by $h(t) = t/2$ and $\sigma \colon [1,2] \to [0,1]$ by $\sigma(t) = t-1$. Then $(\ell * \gamma) \circ h$ is the concatenation of the paths $\ell$ and $\gamma \circ \sigma$. 
Using the multiplicative property and the invariance under reparametrization of non-abelian integrals, we get \begin{equation*} \begin{aligned} P_{e,\pi^*\omega}([\ell*\gamma]) & = \int_0^1 (\ell*\gamma)^* \omega = \int_0^2 h^* (\ell*\gamma)^* \omega = \int_0^1 \ell^*\omega \cdot \int_1^2 \sigma^*\gamma^* \omega \\[2ex] & = \int_0^1 \ell^*\omega \cdot \int_0^1 \gamma^* \omega = P_{e,\pi^*\omega}([\ell]) P_{e,\pi^*\omega}([\gamma]). \end{aligned} \end{equation*} \end{proof} \section[Nilpotent aspherical manifolds]{Aspherical manifolds with nilpotent fundamental group} \label{topology} Suppose $M$ is a smooth compact manifold which is homotopy equivalent to $K(\Gamma, 1)$, where $\Gamma$ is a nilpotent group. Then $\Gamma$ is finitely generated (cf.~\cite[Prop.\!~1.26]{hatcher}) and torsion-free (cf.~\cite[Lemma~3.1]{luecksurvey}). Every torsion-free finitely generated nilpotent group $\Gamma$ can be realized as a lattice in a $1$-connected nilpotent Lie group $G(\Gamma)$, which is unique up to isomorphism. More precisely, for every field $\mathbb{K}$ one can construct a nilpotent algebraic group $G_\mathbb{K}(\Gamma)$ over $\mathbb{K}$ that contains $\Gamma$ (cf. \cite[Ch.\!~4]{clement}). In the case $\mathbb{K}=\mathbb{R}$, the group $G(\Gamma) = G_\mathbb{R}(\Gamma)$ is a Lie group and $\Gamma$ is a lattice in $G(\Gamma)$. We will use the following extension principle for lattices in nilpotent Lie groups. \begin{theorem} \label{extension} Let $H$ and $G$ be $1$-connected nilpotent Lie groups, and $\Gamma$ a lattice in $H$. Then every homomorphism of groups $f \colon \Gamma \to G$ has a unique extension to a smooth homomorphism $\tilde{f} \colon H \to G$. \end{theorem} \begin{proof} This is essentially \cite[Theorem~2.11]{discrete}, but there the author claims only the existence of a continuous homomorphism $\tilde{f}\colon H \to G$ that extends $f$. However, it is known (cf. \cite[Theorem~3.39]{warner}) that every continuous homomorphism of Lie groups is smooth.
\end{proof} Denote the Lie algebra of $G(\Gamma)$ by $\mathfrak{g}$ and the nilmanifold $\Gamma\backslash G(\Gamma)$ by $N_\Gamma$. We will identify $\mathfrak{g}^*$ with the set of left-invariant $1$-forms on $G\left( \Gamma \right)$. As $1$-forms on $N_\Gamma$ can be seen as $\Gamma$-invariant $1$-forms on $G(\Gamma)$, we get an embedding $\mathfrak{g}^* \to \Omega^1(N_\Gamma)$ that can be extended to a homomorphism of CDGAs $\psi_\Gamma \colon \bigwedge \mathfrak{g}^* \to \Omega^\bullet (N_\Gamma)$. It was shown in~\cite{nomizu} that $\psi_\Gamma$ is a quasi-isomorphism. Given a smooth map $f \colon M \to N_\Gamma$ one gets an induced homomorphism $\rho = f^* \circ \psi_\Gamma$ of CDGAs. We will show in Theorem~\ref{main1} that every quasi-isomorphism $\rho \colon \bigwedge \mathfrak{g}^* \to \Omega^\bullet(M)$ is induced, modulo an automorphism of $\bigwedge \mathfrak{g}^*$, by a smooth homotopy equivalence $f \colon M \to N_\Gamma$. The interest of this result lies in the fact that there is a correspondence between geometric properties of $f$ and properties of $\rho$, and it is easier to prove the existence of $\rho$ with good properties than the existence of $f$. We start by relating the Darboux derivatives to homomorphisms of CDGAs. For commutative graded algebras $A$ and $B$, we write $\cga(A,B)$ for the set of homomorphisms from $A$ to $B$. Similarly, for CDGAs $A$ and $B$, we write $\cdga(A,B)$ for the set of homomorphisms of CDGAs from $A$ to $B$. The following is a consequence of standard linear algebra manipulations. \begin{lemma} \label{OmegaMg} There are natural isomorphisms of vector spaces \begin{equation*} \begin{aligned} &\begin{aligned} i\colon \Omega^1(M, \mathfrak{g}) & \longrightarrow \Hom(\mathfrak{g}^*, \Omega^1(M)) \\ \omega & \longmapsto ( \alpha \mapsto \alpha \circ \omega) \end{aligned}\\[2ex] &\begin{aligned} j \colon \cga(\bigwedge \mathfrak{g}^*,\Omega^\bullet(M)) &\to \Hom (\mathfrak{g}^*, \Omega^1(M))\\ h & \mapsto h|_{\mathfrak{g}^*}.
\end{aligned} \end{aligned} \end{equation*} \end{lemma} \noindent We use these isomorphisms to obtain the following identification. \begin{proposition} Under the chain of isomorphisms \begin{align} \label{corresponds} \Omega^1(M, \mathfrak{g}) & \xrightarrow{i} \Hom(\mathfrak{g}^*, \Omega^1(M)) \xrightarrow{j^{-1}} \cga\Big( \bigwedge \mathfrak{g}^*, \Omega^\bullet (M)\Big) \end{align} the flat $\mathfrak{g}$-valued $1$-forms correspond exactly to the homomorphisms of CDGAs from $(\bigwedge \mathfrak{g}^*, d_{CE})$ to $(\Omega^\bullet(M), d_{DR})$. \end{proposition} \begin{proof} It is easy to check that a map $h \colon \mathfrak{g}^* \to \Omega^1(M)$ corresponds to a homomorphism of CDGAs from $(\bigwedge \mathfrak{g}^*, d_{CE})$ to $(\Omega^\bullet(M), d_{DR})$ if and only if the following diagram is commutative \begin{equation} \label{diagram} \begin{aligned} \begin{gathered} \xymatrix{ \mathfrak{g}^* \ar[r]^{d_{CE}} \ar[d]_{h} & \bigwedge^2 \mathfrak{g}^* \ar[d]^{h^{\wedge 2}}\\ \Omega^1(M) \ar[r]^{d_{DR}} & \Omega^2(M), } \end{gathered} \end{aligned} \end{equation} where $h^{\wedge 2} (\alpha \wedge \beta ) := h(\alpha)\wedge h(\beta)$. Upon identifying $\bigwedge^2 \mathfrak{g}^*$ with $\Hom\left( \bigwedge^2 \mathfrak{g}, \mathbb{R} \right)$, the map $d_{CE}$ becomes \begin{equation*} \begin{aligned} \alpha \mapsto ( x \wedge y \mapsto -\alpha ([x,y])). \end{aligned} \end{equation*} By Lemma~\ref{OmegaMg}, we can write $h$ in the form $(-\circ \omega)$ for some $\omega \in \Omega^1(M, \mathfrak{g})$. Then $h^{\wedge 2}$ can be identified with \begin{equation*} \begin{aligned} \Hom\Big( \mbox{$\bigwedge^2 \mathfrak{g}$}, \mathbb{R}\Big) \ni \beta \mapsto \beta\circ (\omega \wedge \omega), \end{aligned} \end{equation*} where $\omega\wedge \omega \in \Omega^2(M,\bigwedge^2 \mathfrak{g})$ is defined by $(\omega \wedge \omega)(X,Y) := \omega(X) \wedge \omega (Y)$.
Hence the commutativity of~\eqref{diagram} is equivalent to \begin{equation*} \begin{aligned} d_{DR} (\alpha \circ \omega) + \alpha \circ [-,-] \circ (\omega \wedge \omega) =0,\quad \forall \alpha \in \mathfrak{g}^*. \end{aligned} \end{equation*} As $d_{DR} (\alpha \circ \omega) = \alpha \circ (d\omega)$, this is equivalent to \begin{equation*} \begin{aligned} \alpha\circ ( d \omega + [-,-] \circ (\omega \wedge \omega) ) =0,\quad \forall \alpha \in \mathfrak{g}^*. \end{aligned} \end{equation*} This is the same as requiring $d\omega + [-,- ] \circ (\omega \wedge \omega) =0$. But $[-,-] \circ (\omega \wedge \omega) = (1/2)[\omega ,\omega]$. Hence the diagram~\eqref{diagram} with $h = - \circ \omega$ commutes if and only if $\omega$ is flat. \end{proof} Let $\omega \in \Omega^1_{flat}(M, \mathfrak{g})$. Denote by $\rho \in \cdga(\bigwedge \mathfrak{g}^*, \Omega^\bullet(M))$ the corresponding homomorphism of CDGAs. If $f\colon M \to G$ is a Darboux primitive of $\omega$, then $\rho|_{\mathfrak{g}^*} = - \circ \omega = - \circ \mathrm{D} f$. The condition $\rho|_{\mathfrak{g}^*} = -\circ \mathrm{D} f$ is equivalent to $\rho = f^*|_{\bigwedge \mathfrak{g}^*}$, where we identify $\bigwedge \mathfrak{g}^*$ with the set of left-invariant forms on $G$. Indeed, in view of Lemma~\ref{OmegaMg}, it is enough to show that $f^*|_{\mathfrak{g}^*} = - \circ \mathrm{D} f$. For $\alpha \in \mathfrak{g}^* = \Omega^1(G)^G$ and $X \in T_p M$, we have \begin{equation*} \begin{aligned} (f^*\alpha)(X) = \alpha ( (T_p f) X) = \alpha ( (T_e L_{f(p)})^{-1} (T_p f) (X)) = (\alpha \circ \mathrm{D} f) (X). \end{aligned} \end{equation*} Thus $f\colon M \to G$ ``integrates'' $\rho \in \cdga(\bigwedge \mathfrak{g}^*, \Omega^\bullet(M))$ if and only if~$\rho = f^*|_{\bigwedge \mathfrak{g}^*}$. Now we are ready to prove one of the main results of the article.
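Before doing so, it may help to see the correspondence in the simplest possible case. The following illustration is an aside added here for orientation; it assumes $G$ abelian and is not used in the proofs.

```latex
% Aside: the correspondence for G = (R,+); the bracket vanishes,
% so flatness of omega reduces to closedness.
Let $G = (\mathbb{R},+)$, so that $\mathfrak{g}^* = \langle \alpha \rangle$ with
$d_{CE}\alpha = 0$. A homomorphism of CDGAs
$\rho \colon \bigwedge \mathfrak{g}^* \to \Omega^\bullet(M)$ is determined by
the $1$-form $\omega := \rho(\alpha)$, and the flatness condition
$d\omega + \tfrac{1}{2}[\omega,\omega] = 0$ reduces to $d\omega = 0$.
A Darboux primitive of $\omega$ is then a function $f \colon M \to \mathbb{R}$
with $df = \omega$, and it exists precisely when all periods
$\int_\ell \omega$ over loops $\ell$ in $M$ vanish.
```

This period obstruction reappears, for a general nilpotent $G(\Gamma)$, in the proof of Theorem~\ref{main1} below.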
\begin{theorem} \label{main1} Let $M$ be a compact aspherical manifold with nilpotent fundamental group $\Gamma$ and $\mathfrak{g}$ the Lie algebra of $G(\Gamma)$. For every quasi-isomorphism $\rho \colon \bigwedge \mathfrak{g}^* \to \Omega^\bullet(M)$ of CDGAs there exist a smooth homotopy equivalence $h\colon M \to N_\Gamma$ and an automorphism of CDGAs $a\colon \bigwedge \mathfrak{g}^* \to \bigwedge \mathfrak{g}^* $, such that $\rho = h^* \circ \psi_\Gamma \circ a$, where $\psi_\Gamma \colon \bigwedge \mathfrak{g}^* \to \Omega^\bullet(N_\Gamma)$ is the Nomizu quasi-isomorphism. \end{theorem} \begin{proof} Fix $x_0 \in M$ and consider $M$ as a pointed manifold with the base point $x_0$. Write $\pi\colon \uc{M} \to M$ for the universal cover of $M$. We consider $\uc{M}$ as a pointed manifold with the base point $\tilde{x}_0 = [ t \mapsto x_0 ]$. We will identify $\Gamma$ with $\pi_1(M, x_0)$ considered as a subset of $\uc{M}$. Denote by $\omega$ the flat $\mathfrak{g}$-valued $1$-form that corresponds to $\rho$ under the chain of isomorphisms~\eqref{corresponds}. Then $\pi^* \omega$ corresponds to $\pi^* \circ \rho$ under these isomorphisms. Write $f$ for the primitive $P_{e,\pi^* \omega} \colon \uc{M} \to G(\Gamma)$ of $\pi^*\omega$. Then $\mathrm{D} f = \pi^* \omega$ and, hence, $\pi^*\circ \rho = f^*|_{\bigwedge \mathfrak{g}^*} $. By Proposition~\ref{prop:mult} the restriction of $f$ to $\Gamma = \pi_1(M,x_0) \subset \uc{M}$ is a homomorphism of groups. By the extension property for lattices in nilpotent Lie groups, there is a unique smooth homomorphism of Lie groups $\tilde{f} \colon G(\Gamma) \to G(\Gamma)$ that extends $f|_{\Gamma}$. We will show that $\tilde{f}$ is an isomorphism. Then we will verify that \mbox{$\tilde{f}^{-1} \circ f \colon \widetilde{M} \to G(\Gamma)$} is $\Gamma$-equivariant and thus induces a map $h\colon M \to N_\Gamma$. Finally, we will check that $h$ satisfies the required properties.
In the course of the proof we will use several times that the exponential map of a connected nilpotent Lie group is a surjective local diffeomorphism. Moreover, it is a diffeomorphism if the Lie group is $1$-connected. Our first step is to show that $\tilde{f}$ is an epimorphism of Lie groups. Denote by $H$ the image of $\tilde{f}$. It is shown in \cite[Theorem 7.18]{clement} that for a nilpotent group $G$ and a subgroup $N<G$, one has $N[G,G]=G$ if and only if $N=G$. Thus $\tilde{f}$ is an epimorphism if and only if $$H[G(\Gamma), G(\Gamma)] = G(\Gamma).$$ Suppose to the contrary that $H[G(\Gamma), G(\Gamma)]$ is a proper subgroup of $G(\Gamma)$. The subgroup $H[G(\Gamma), G(\Gamma)]$ is path-connected. Indeed, $H$ is path-connected since it is the image of a path-connected space under a continuous map. Next, $[G(\Gamma), G(\Gamma)]$ is the union of the images $X_n$ of the maps \begin{equation*} \begin{aligned} \nu_n \colon (G(\Gamma) \times G(\Gamma))^n & \to G(\Gamma)\\ (x_1,y_1,x_2,y_2,\dots,x_n,y_n) &\mapsto [x_1,y_1][x_2,y_2]\dots [x_n,y_n]. \end{aligned} \end{equation*} Each $X_n$ is path-connected. As the sequence of subsets $X_n$ in $G(\Gamma)$ is increasing, their union is also path-connected. Now, $H[G(\Gamma), G(\Gamma)]$ is the image of $H \times [G(\Gamma), G(\Gamma)]$ in $G(\Gamma)$ under the restriction of the product map, hence it is path-connected. By the result of Yamabe~\cite{yamabe} (see also~\cite{goto}), every path-connected subgroup of $G(\Gamma)$ is an analytic subgroup, i.e. it is the image of a Lie group under an injective immersive homomorphism of Lie groups. Let $j\colon \widetilde{H} \to G(\Gamma)$ be a monomorphism of Lie groups whose image is $H[G(\Gamma), G(\Gamma)]$. We get the commutative square \begin{equation} \label{tetildeh} \begin{gathered} \xymatrix{ T_e \widetilde{H} \ar[r]^-{T_e j} \ar[d]_{\exp} & T_e G(\Gamma)\ar[d]^{\exp} \\ \widetilde{H} \ar[r]^-{j} & G(\Gamma).
} \end{gathered} \end{equation} As $T_e j$ is an inclusion of Lie algebras, the Lie algebra $T_e \widetilde{H}$ is nilpotent. Hence the left arrow in the diagram~\eqref{tetildeh} is surjective. Since the top and the right arrows in~\eqref{tetildeh} are injective, the left arrow is also injective, and, hence, bijective. As $T_e \widetilde{H}$ is nilpotent, this implies that $\exp \colon T_e \widetilde{H} \to \widetilde{H}$ is a diffeomorphism. Now, the diagram~\eqref{tetildeh} implies that the image $H[G(\Gamma),G(\Gamma)]$ of $j$ is a closed subset of $G(\Gamma)$. Thus it is a closed Lie subgroup of $G(\Gamma)$. As $H[G(\Gamma), G(\Gamma)]$ contains $[G(\Gamma), G(\Gamma)]$, it is a normal subgroup of~$G(\Gamma)$. Hence, the quotient $Q := G(\Gamma)/ H[G(\Gamma), G(\Gamma)]$ is an abelian Lie group. Moreover, from the long exact sequence for the fibration $G\left( \Gamma \right) \twoheadrightarrow Q$, we conclude that all the homotopy groups of $Q$ are trivial and thus $Q$ is contractible. Hence $Q \cong (\mathbb{R}^k, +)$ as a Lie group for some $k>0$. Thus there is a non-zero homomorphism $\widehat\varphi\colon Q \to \mathbb{R}$ of Lie groups. Denote by $\varphi$ the composition of $\widehat\varphi$ with the projection $G(\Gamma)\twoheadrightarrow Q$. Then $\varphi\colon G(\Gamma) \to \mathbb{R}$ is a non-zero homomorphism of Lie groups such that $\varphi(H[G(\Gamma), G\left( \Gamma \right)]) =0$. In particular, since $f(\Gamma) = \tilde{f}(\Gamma) \subset \tilde{f}(G(\Gamma)) = H$, we get $\varphi(f([\ell])) = 0$ for every $[\ell] \in \Gamma$. Since $\varphi(f(\tilde{x}_0)) = 0$ and $\mathrm{D} ( \varphi \circ f) = d(\varphi \circ f) = d(f^*\varphi)= f^* d\varphi$, from Theorem~\ref{sharpe} we get $ \varphi \circ f =P_{e,f^*d\varphi}$. Next we verify that $f^*d\varphi = \pi^* \rho(d\varphi)$. Since $f^*|_{\bigwedge \mathfrak{g}^*} = \pi^* \circ \rho$, it is enough to check that $d\varphi$ is left invariant.
For every $g\in G(\Gamma)$, $\varphi \circ L_g (g') = \varphi(g) + \varphi(g')$. As $\varphi(g)$ is a constant, we get \begin{equation*} \begin{aligned} L_g^* d\varphi = d(\varphi\circ L_g) = d(\varphi(g) + \varphi) = d\varphi, \end{aligned} \end{equation*} i.e. $d\varphi$ is left invariant. Now, using \eqref{pepiomega}, $\varphi \circ f = P_{e, f^*d\varphi}$ and $f^*d\varphi = \pi^* \rho(d\varphi)$, we obtain for~$[\gamma] \in \widetilde{M}$ \begin{equation*} \label{phifgamma2} \begin{aligned} \varphi(f([\gamma])) &= P_{e, f^*d\varphi}([\gamma]) = P_{e,\pi^* \rho(d\varphi)} ([\gamma])= \int_0^1 \gamma^* \rho(d\varphi) = \int_\gamma \rho(d\varphi). \end{aligned} \end{equation*} In particular, for every loop $\ell$ at $x_0$, we have \begin{equation} \label{intgammarhodphi} \begin{aligned} \int_\ell \rho(d\varphi) = \varphi(f([\ell])) = 0. \end{aligned} \end{equation} As $\rho$ commutes with the differential, the form $\rho(d\varphi)$ is closed. Applying the de Rham theorem, we deduce from~\eqref{intgammarhodphi} that the form $\rho(d\varphi)$ is exact. As $\rho$ is a quasi-isomorphism, $d\varphi \in \mathfrak{g}^*$ is an exact element in the Chevalley-Eilenberg complex. But the Chevalley-Eilenberg differential from $\mathbb{R}= \bigwedge^0 \mathfrak{g}^*$ to $\mathfrak{g}^* = \bigwedge^1 \mathfrak{g}^*$ is the zero map. Hence $d\varphi =0$. As $\varphi\colon G(\Gamma) \to \mathbb{R}$ is a homomorphism of groups, this implies that $\varphi$ is the zero map, which gives a contradiction. Hence $H[G(\Gamma),G(\Gamma)] =G(\Gamma)$, $H=G(\Gamma)$ and $\tilde{f}\colon G(\Gamma)\to G(\Gamma)$ is an epimorphism of Lie groups. Thus $T_e \tilde{f} \colon T_e G(\Gamma)\to T_e G(\Gamma)$ is an epimorphism of vector spaces of the same dimension, and thus an isomorphism.
Since the exponential map $\exp \colon T_e G(\Gamma) \to G(\Gamma)$ is a diffeomorphism, we get from the commutative diagram \begin{equation*} \begin{aligned} \xymatrix{ T_e G(\Gamma) \ar[r]^-{T_e \widetilde{f}}\ar[d]_{\exp} & T_e G(\Gamma)\ar[d]^{\exp} \\ G(\Gamma) \ar[r]^{\widetilde{f}} & G(\Gamma) } \end{aligned} \end{equation*} that $\tilde{f}$ is a bijection, and thus an isomorphism of Lie groups. Define $\tilde{h} \colon \widetilde{M} \to G(\Gamma)$ as the composite map $\tilde{f}^{-1} \circ f $. The map $\tilde{h}$ is $\Gamma$-equivariant. Indeed, by Proposition~\ref{prop:mult} and the definition of $f$, for every $g \in \Gamma$ and $x \in \widetilde{M}$, we have \begin{equation*} \begin{aligned} \tilde{h}(gx) = \tilde{f}^{-1} \left(\, f(g) f( x) \,\right) = \tilde{f}^{-1} (\tilde{f}(g)) \tilde{f}^{-1} (f(x)) =g \tilde{h}(x), \end{aligned} \end{equation*} since $\tilde{f}(g) = f(g)$ for $g\in \Gamma$ by definition of $\tilde{f}$. Hence $\tilde{h}$ induces a smooth map $h\colon M \cong \Gamma \backslash \widetilde{M}\to N_\Gamma = \Gamma \backslash G(\Gamma)$. This map fits into the commutative diagram \begin{equation} \label{GammawidetildeM} \begin{aligned} \begin{gathered} \xymatrix{ \Gamma \ar[r]\ar[d]_{\mathrm{Id}} & \uc{M} \ar@{->>}[r]^{\pi}\ar[d]_{\tilde{h}} & M\ar[d]^{h} \\ \Gamma \ar[r] & G(\Gamma)\ar@{->>}[r]^-{\pi_\Gamma} & N_\Gamma } \end{gathered} \end{aligned} \end{equation} The map $\tilde{h}$ is a homotopy equivalence as it is a continuous map between two contractible spaces. From the functoriality of the long exact sequence of homotopy groups applied to~\eqref{GammawidetildeM}, we deduce that the maps $\pi_k(h)$ are isomorphisms for all $k$. By the Whitehead theorem, $h$ is a homotopy equivalence. Now we relate $h$ to $\rho$. Restricting the map $\tilde{f}\strut^*$ to $\Omega^\bullet(G(\Gamma))^{G(\Gamma)} = \bigwedge \mathfrak{g}^*$, we get a CDGA automorphism $a \colon \bigwedge \mathfrak{g}^* \to \bigwedge \mathfrak{g}^*$.
From $\tilde{f} \circ \tilde{h} = f$, we get \begin{equation} \label{pistarrho} \begin{aligned} \pi^* \circ \rho = f^*|_{\bigwedge \mathfrak{g}^*} = \tilde{h}\strut^* \circ \tilde{f}\strut^*|_{\bigwedge \mathfrak{g}^*} = \tilde{h}\strut^*|_{\bigwedge \mathfrak{g}^*} \circ a. \end{aligned} \end{equation} The map $\pi_\Gamma^* \circ \psi_\Gamma$ is the canonical inclusion of $\bigwedge \mathfrak{g}^* = \Omega^\bullet(G(\Gamma))^{G(\Gamma)}$ into $\Omega^\bullet(G(\Gamma))$. Thus, from~\eqref{GammawidetildeM}, we get \begin{equation*} \begin{aligned} \tilde{h}\strut^*|_{\bigwedge \mathfrak{g}^*} = \tilde{h}\strut^* \circ \pi_\Gamma^* \circ \psi_\Gamma = \pi^* \circ h^* \circ \psi_\Gamma. \end{aligned} \end{equation*} Combining this equation with~\eqref{pistarrho}, we obtain \begin{equation*} \begin{aligned} \pi^* \circ \rho = \pi^* \circ h^* \circ \psi_\Gamma \circ a. \end{aligned} \end{equation*} Since $\pi^* \colon \Omega^\bullet(M) \to \Omega^\bullet(\uc{M})$ is injective, we get that $\rho =h^* \circ \psi_\Gamma \circ a$. \end{proof} \begin{remark} \label{hsurjective} The map $h$ constructed in Theorem~\ref{main1} is surjective and has degree $\pm 1$. In fact, the degree of $h\colon M \to N_\Gamma$ is the unique integer $k$ such that the map $H_n(h, \mathbb{Z}) \colon H_n(M,\mathbb{Z}) \to H_n(N_\Gamma,\mathbb{Z})$ is given by multiplication by~$k$. Here $H_n(M, \mathbb{Z}) \cong \mathbb{Z} \cong H_n(N_\Gamma, \mathbb{Z})$. As $h$ is a homotopy equivalence, the map $H_n(h,\mathbb{Z})$ is an isomorphism, and thus $k$ is either $1$ or~$-1$. Finally, by \cite[Prop. I, Sec. 6.1]{greub1}, a smooth map of non-zero degree between compact orientable manifolds is surjective. \end{remark} \begin{corollary} \label{rhoinjective} Let $M^n$ be a compact aspherical manifold with nilpotent fundamental group $\Gamma$ and $\mathfrak{g}$ the Lie algebra of $G(\Gamma)$.
If $\rho\colon \bigwedge \mathfrak{g}^* \to \Omega^\bullet(M)$ is a quasi-isomorphism of CDGAs then $\rho$ is an injective map. \end{corollary} \begin{proof} By Theorem~\ref{main1}, $\rho = h^* \circ \psi_\Gamma \circ a$, where $a$ is an automorphism and $h\colon M \to N_\Gamma$ is a smooth homotopy equivalence. It is clear that $\psi_\Gamma \colon \bigwedge \mathfrak{g}^* \to \Omega^\bullet(N_\Gamma)$ is injective. The map $h$ is surjective by Remark~\ref{hsurjective}. Thus $h^*\colon \Omega^\bullet(N_\Gamma) \to \Omega^\bullet(M)$ is injective. Hence $\rho$ is a composition of three injective maps. \end{proof} \section{On the de Rham algebra of Sasakian manifolds} \label{sasakianmanifolds} The main objective of this section is to show that for every compact aspherical Sasakian manifold with nilpotent fundamental group $\Gamma$, there is a quasi-isomorphism $\bigwedge T^*_e G(\Gamma) \to \Omega^\bullet(M)$ with sufficiently rigid properties that will imply that a corresponding smooth homotopy equivalence $h\colon M \to N_\Gamma$ is a diffeomorphism. To achieve this we will construct and study a new real homotopy model for compact Sasakian manifolds in Theorem~\ref{sasakian} and Proposition~\ref{Omega1liexiliephi}. The following result relates compact aspherical Sasakian manifolds with nilpotent fundamental group and the Heisenberg groups. Denote by $H(1,n)$ the Heisenberg group of dimension $2n+1$. \begin{proposition} \label{heisenberg} Let $M^{2n+1}$ be a compact aspherical Sasakian manifold with nilpotent fundamental group $\Gamma$. Then $G(\Gamma) \cong H(1,n)$. \end{proposition} \begin{proof} Let $\Gamma$ be the fundamental group of $M$ and $G =G(\Gamma)$. Denote by $H^\bullet_B(M)$ the basic cohomology algebra of $M$ with respect to the one-dimensional foliation generated by $\xi$. We consider $H^\bullet_B(M)$ as a CDGA with the zero differential. It was shown in~\cite{tievsky} that $\Omega^\bullet(M)$ is quasi-isomorphic to $H^\bullet_B(M)[t]/t^2$ where $t$ has degree one and $dt = [d\eta]_B$ (cf.
also~\cite[Theorem~3.5]{israel}). As $M$ and $N_\Gamma$ are homotopy equivalent, $H_B^\bullet(M)[t]/t^2$ is a real homotopy model of $N_\Gamma$ by Proposition~\ref{realhomotopymodels}. It was shown in~\cite[Thm.~5.3]{israel} that if a nilmanifold $N_\Gamma$ of dimension $2n+1$ has a real homotopy model of the form $A[t]/t^2$ where $\deg(t)=1$, $dt\in A$, $[dt]^n \not=0$ and the CDGA $A$ has the zero differential, then $G$ is isomorphic to the Heisenberg group $H(1,n)$. Denote by $\Lambda$ the image of $\Gamma$ under this isomorphism. Then $M$ is homotopy equivalent to $\Lambda \backslash H(1,n)$. \end{proof} Now we move to the construction of a suitable real homotopy model for a general compact Sasakian manifold $M$. We start by computing useful commutators between operators on $\Omega^\bullet(M)$. For $\beta \in \Omega^\bullet(M)$ we denote by $\epsilon_\beta$ the \emph{exterior product operator} $\alpha \mapsto \beta \wedge \alpha$ on $\Omega^\bullet(M)$. \begin{lemma} \label{lem:commutators} Let $M^{2n+1}$ be a Sasakian manifold. Then $[i_\xi, i_\varphi] =0$ and \begin{equation} \label{commutators} \begin{aligned} & {} [\mathcal{L}_\xi, \epsilon_\eta] =0,\quad [ \mathcal{L}_\xi, i_\xi] =0,\quad [\mathcal{L}_\xi, i_\varphi] =0\\ & {} [\mathcal{L}_\varphi,\epsilon_\eta] =0, \quad \left[ \mathcal{L}_\varphi, i_\xi \right] =0,\quad {} [ \mathcal{L}_\varphi, i_\varphi ] = d - \epsilon_{d\eta} i_\xi- \epsilon_\eta \mathcal{L}_\xi. & \end{aligned} \end{equation} \end{lemma} \begin{proof} It is easy to check that for every derivation $D$ of $\Omega^\bullet(M)$ and a differential form $\omega$ on $M$ \begin{equation} \label{depsomega} \begin{aligned} {}[D, \epsilon_\omega] = \epsilon_{D\omega}. \end{aligned} \end{equation} By~\eqref{sastensors}, we know that $\mathcal{L}_\xi \eta =0$ and $\mathcal{L}_\varphi \eta =0$. 
Thus $\left[ \mathcal{L}_\xi, \epsilon_\eta \right] = \epsilon_{\mathcal{L}_\xi \eta} =0$ and $\left[ \mathcal{L}_\varphi, \epsilon_\eta \right] = \epsilon_{\mathcal{L}_\varphi \eta} =0$. Since $i_\xi$ is a derivation of odd degree and $i_\xi^2=0$, we get \begin{equation*} \begin{aligned} \left[ \mathcal{L}_\xi, i_\xi \right] = \left[ \left[ i_\xi , d \right], i_\xi \right] = i_\xi d i_\xi + di_\xi^2 - i_\xi^2 d - i_\xi d i_\xi =0. \end{aligned} \end{equation*} By the Frölicher-Nijenhuis calculus \begin{equation*} \begin{aligned} \left[ \mathcal{L}_\varphi, i_\xi \right]& =i_{[\varphi,\xi]_{FN}} + \mathcal{L}_{i_\xi \varphi} = -i_{\mathcal{L}_\xi \varphi} + \mathcal{L}_{\varphi \xi} = 0\\ \left[ \mathcal{L}_\xi, i_\varphi \right] & = i_{[\xi, \varphi]_{FN}} - \mathcal{L}_{i_\varphi \xi} = i_{\mathcal{L}_\xi \varphi} =0. \end{aligned} \end{equation*} We used that $\mathcal{L}_\xi \varphi =0$ by~\eqref{sastensors} and $\varphi\xi =0$ by~\eqref{etaphi}. Next, using that $\left[ \varphi, \varphi \right]_{FN} = -2d\eta \otimes \xi$ and $\varphi^2 = -\mathrm{Id} + \eta \otimes \xi$, we obtain \begin{equation*} \begin{aligned} \left[ \mathcal{L}_\varphi, i_\varphi \right] & = i_{[\varphi,\varphi]_{FN}} - \mathcal{L}_{i_\varphi \varphi} = -2i_{d\eta \otimes \xi} - \mathcal{L}_{ \eta \otimes \xi - \mathrm{Id}} \\ & = -2\epsilon_{d\eta} i_\xi - (\epsilon_\eta \mathcal{L}_\xi -\epsilon_{d\eta} i_\xi) + \mathcal{L}_\mathrm{Id} = - \epsilon_{d\eta} i_\xi - \epsilon_\eta \mathcal{L}_\xi + d. \end{aligned} \end{equation*} Finally, using $\varphi \xi=0$ once more, we get $\left[ i_\xi, i_\varphi \right] = i_{i_\xi \varphi} = i_{\varphi \xi} =0$. \end{proof} \begin{lemma} \label{lem:commutators2} Let $M^{2n+1}$ be a Sasakian manifold. Then \begin{equation*} \begin{aligned} {} [\delta, \mathcal{L}_\varphi] & = - 2 \mathcal{L}_\xi (n - i_\mathrm{Id}) + 2 \epsilon_\eta \delta\\ {} [\Delta , \mathcal{L}_\varphi] & = - 2 \mathcal{L}_\xi d + 2 \epsilon_{d\eta} \delta - 2 \epsilon_\eta \Delta.
\end{aligned} \end{equation*} Moreover, if $M$ is compact, then \begin{equation*} \begin{aligned} {} \Pi_\Delta \circ \mathcal{L}_\varphi = \mathcal{L}_\varphi \circ \Pi_\Delta =0\quad \mbox{and} \quad [G, \mathcal{L}_\varphi] = -G[\Delta, \mathcal{L}_\varphi] G, \end{aligned} \end{equation*} where $G$ is the Green operator and $\Pi_\Delta$ is the orthogonal projection to~$\Omega^\bullet_\Delta(M)$. \end{lemma} \begin{proof} To compute the commutator $[\delta, \mathcal{L}_\varphi]$, we use \cite[Eq. (3.3)]{fujitani} \begin{equation*} \begin{aligned} {} \mathcal{L}_\varphi = - [\delta, \epsilon_{ \Phi} ] + 2 \epsilon_\eta (n- i_\mathrm{Id}) \end{aligned} \end{equation*} and that $\left[ \delta, \epsilon_\eta \right] = -\mathcal{L}_\xi$ as $\xi$ is Killing (cf.~\cite[p.~109]{goldberg}). We get \begin{equation*} \begin{aligned} {} [\delta, \mathcal{L}_\varphi] & = - \left[ \delta, \left[ \delta, \epsilon_{\Phi} \right] \right] + 2 [\delta, \epsilon_\eta] (n - i_\mathrm{Id}) + 2 \epsilon_\eta [\delta, i_\mathrm{Id}] \\& = - 2 \mathcal{L}_\xi (n - i_\mathrm{Id}) + 2 \epsilon_\eta \delta. \end{aligned} \end{equation*} Next \begin{equation*} \begin{aligned} { } [\Delta , \mathcal{L}_\varphi] &= \left[ \left[ d, \delta \right], \mathcal{L}_\varphi \right] = \left[ d, \left[ \delta, \mathcal{L}_\varphi \right] \right] = \left[ d, - 2 \mathcal{L}_\xi (n -i_\mathrm{Id}) + 2\epsilon_\eta \delta \right] \\[2ex] & = 2 \mathcal{L}_\xi [d, i_\mathrm{Id}] + 2 \epsilon_{d\eta} \delta - 2 \epsilon_\eta \Delta = - 2 \mathcal{L}_\xi d + 2 \epsilon_{d\eta} \delta - 2 \epsilon_\eta \Delta. \end{aligned} \end{equation*} Now we assume that $M$ is a compact Sasakian manifold. By~\cite[Thm.~4.1]{fujitani} the operator $i_\varphi$ preserves harmonic forms. Hence for every harmonic form $\omega$, we have $\mathcal{L}_\varphi \omega = i_\varphi d \omega - di_\varphi \omega =0$. This implies~$\mathcal{L}_\varphi \circ \Pi_\Delta =0$.
To show that $\Pi_\Delta \circ \mathcal{L}_\varphi=0$, we will check that for every form $\beta$ the form $\mathcal{L}_\varphi \beta$ is orthogonal to $\Omega^\bullet_\Delta(M)$. We will use that the adjoint operator of $i_\varphi$ equals $-i_\varphi$ (cf.~\cite[Eq. (1.4)]{fujitani}) and that $d^* = \delta$. For every $\omega \in \Omega^\bullet_\Delta(M)$ of the same degree as $\mathcal{L}_\varphi \beta$, we get \begin{equation*} \begin{aligned} (\mathcal{L}_\varphi \beta , \omega) & = ([i_\varphi, d] \beta , \omega) = (\beta , [i_\varphi, d]^*\omega)= (\beta, [d^*, i_\varphi^*] \omega) \\ &= -(\beta, [\delta, i_\varphi] \omega) = - (\beta, \delta i_\varphi \omega) = 0. \end{aligned} \end{equation*} In the last equality we again used that $i_\varphi$ preserves harmonic forms. By Hodge theory $G\Delta = \mathrm{Id} - \Pi_\Delta$. Applying $[-, \mathcal{L}_\varphi]$ to this identity and using $\mathcal{L}_\varphi \circ \Pi_\Delta = \Pi_\Delta \circ \mathcal{L}_\varphi =0$, we get \begin{equation*} \begin{aligned} { } [G, \mathcal{L}_\varphi] \Delta + G [\Delta , \mathcal{L}_\varphi] =0. \end{aligned} \end{equation*} Now we compose the last equation with $G$ on the right and use $\Delta G = \mathrm{Id} - \Pi_\Delta$. As a result, we get \begin{equation*} \begin{aligned} { } [G, \mathcal{L}_\varphi] (\mathrm{Id} -\Pi_\Delta) + G [\Delta , \mathcal{L}_\varphi]G =0. \end{aligned} \end{equation*} Since $G \circ \Pi_\Delta = 0$ and $\mathcal{L}_\varphi \circ \Pi_\Delta=0$, the last equation is equivalent to \begin{equation*} \begin{aligned} { } [G, \mathcal{L}_\varphi] =- G [\Delta , \mathcal{L}_\varphi]G . \end{aligned} \end{equation*} \end{proof} The following theorem provides the announced real homotopy model for compact Sasakian manifolds. Given linear operators $A_1$, \dots, $A_k$ on $\Omega^\bullet(M)$, we will write $\Omega^\bullet_{A_1,\dots, A_k}(M)$ for the intersection of the kernels of the operators $A_1$, \dots, $A_k$.
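As a quick illustration of this notation (an aside; the identifications below just restate definitions already used in this section):

```latex
% Aside: instances of the kernel notation Omega_{A_1,...,A_k}.
For example, $\Omega^\bullet_{i_\xi,\mathcal{L}_\xi}(M)$ is the subcomplex of
$\xi$-basic forms, $\Omega^\bullet_{\Delta}(M)$ is the space of harmonic forms,
and
\begin{equation*}
\begin{aligned}
\Omega^\bullet_{\mathcal{L}_\xi,\mathcal{L}_\varphi}(M)
= \Omega^\bullet_{\mathcal{L}_\xi}(M) \cap \Omega^\bullet_{\mathcal{L}_\varphi}(M)
\end{aligned}
\end{equation*}
is a sub-CDGA of $\Omega^\bullet(M)$, since $\mathcal{L}_\xi$ and
$\mathcal{L}_\varphi$ are graded derivations commuting with $d$.
```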
\begin{theorem} \label{sasakian} Let $M^{2n+1}$ be a compact Sasakian manifold. Then the inclusion $\Omega^\bullet_{\mathcal{L}_\xi, \mathcal{L}_\varphi}(M) \hookrightarrow \Omega^\bullet(M)$ is a quasi-isomorphism of CDGAs. \end{theorem} \begin{proof} By Hodge theory, the graded commutator $ \left[ d, \delta G \right]$ equals~\mbox{$\mathrm{Id} - \Pi_\Delta$}. In other words, $\delta G$ is a homotopy between $\mathrm{Id}$ and $\Pi_\Delta$. This implies that the inclusion $\Omega^\bullet_\Delta(M) \hookrightarrow \Omega^\bullet(M)$ and the projection $\Pi_\Delta \colon \Omega^\bullet(M) \to \Omega^\bullet_\Delta(M)$ are mutually inverse homotopy equivalences, where we consider $\Omega^\bullet_\Delta(M)$ as a chain complex with the zero differential. We will show that $\Omega^\bullet_\Delta(M) \subset \Omega^\bullet_{\mathcal{L}_\xi, \mathcal{L}_\varphi}(M)$ and that the operators $d$ and $\delta G$ preserve $\Omega^\bullet_{\mathcal{L}_\xi, \mathcal{L}_\varphi}(M)$. This will imply that the inclusion $\Omega^\bullet_\Delta(M) \hookrightarrow \Omega^\bullet_{\mathcal{L}_\xi, \mathcal{L}_\varphi}(M)$ and $\Pi_\Delta \colon \Omega^\bullet_{\mathcal{L}_\xi, \mathcal{L}_\varphi}(M)\to \Omega^\bullet_\Delta(M)$ are mutually inverse homotopy equivalences. Then, passing to cohomology, we will get \begin{align} \label{isos} \Omega^\bullet_\Delta(M) \xrightarrow{\cong} H^\bullet(\Omega^\bullet_{\mathcal{L}_\xi, \mathcal{L}_\varphi}(M)) \rightarrow H^\bullet_{DR}(M). \end{align} The composite of the above two maps is an isomorphism by Hodge theory. Hence the right arrow is also an isomorphism, which will imply the claim of the theorem. We start by checking that $\Omega^\bullet_\Delta(M) \subset \Omega^\bullet_{\mathcal{L}_\xi, \mathcal{L}_\varphi}(M) = \Omega^\bullet_{\mathcal{L}_\xi}(M) \cap \Omega^\bullet_{\mathcal{L}_\varphi}(M)$.
We have $\Omega^\bullet_\Delta(M) \subset \Omega^\bullet_{\mathcal{L}_\xi}(M)$ since $\xi$ is a Killing vector field (cf.~\cite[Thm.~3.7.1]{goldberg}). By Lemma~\ref{lem:commutators2}, for $\omega \in \Omega^\bullet_\Delta(M)$, we have $\mathcal{L}_\varphi \omega = \mathcal{L}_\varphi \Pi_\Delta \omega =0$. Hence $\omega \in \Omega^\bullet_{\mathcal{L}_\varphi} (M)$. Thus $\Omega^\bullet_\Delta(M) \subset \Omega^\bullet_{\mathcal{L}_\xi, \mathcal{L}_\varphi}(M)$. It remains to check that $d$ and $\delta G$ preserve $\Omega^\bullet_{\mathcal{L}_\xi, \mathcal{L}_\varphi}(M)$. The differential $d$ preserves $\Omega^\bullet_{\mathcal{L}_\xi, \mathcal{L}_\varphi}(M)$, since $d$ commutes with both $\mathcal{L}_\xi$ and $\mathcal{L}_\varphi$. Now, we verify that $\delta G$ preserves $\Omega^\bullet_{\mathcal{L}_\xi}(M)$. By~\cite[Prop.~1.2]{fujitani}, $\delta$ commutes with $\mathcal{L}_\xi$. By~\cite[Prop.~6.10]{warner}, if a linear operator $A$ on $\Omega^\bullet(M)$ commutes with $d$ and $\delta$, then it also commutes with the Green operator $G$. Hence $G$, and thus also $\delta G$, commutes with $\mathcal{L}_\xi$. This implies $\delta G(\Omega^\bullet_{\mathcal{L}_\xi}(M) )\subset \Omega^\bullet_{\mathcal{L}_\xi}(M)$. It remains to show that $\delta G( \Omega^\bullet_{\mathcal{L}_\xi, \mathcal{L}_\varphi}(M) ) \subset \Omega^\bullet_{ \mathcal{L}_\varphi}(M)$. Let $\beta \in \Omega^p_{\mathcal{L}_\xi, \mathcal{L}_\varphi}(M)$. We will verify that $\mathcal{L}_\varphi \delta G \beta =0$. Applying Lemma~\ref{lem:commutators2}, we get \begin{equation} \label{liephideltaGbeta} \begin{aligned} \mathcal{L}_\varphi (\delta G \beta) & = \mathcal{L}_\varphi (G \delta \beta) = [\mathcal{L}_\varphi, G] \delta\beta + G [\mathcal{L}_\varphi , \delta] \beta - G\delta \mathcal{L}_\varphi \beta \\ & = - G[\mathcal{L}_\varphi, \Delta] G \delta\beta + G( -2 \mathcal{L}_\xi (n - i_\mathrm{Id}) + 2 \epsilon_\eta \delta ) \beta.
\end{aligned} \end{equation} Now, applying the formula for $\left[ \Delta, \mathcal{L}_\varphi \right]$ from Lemma~\ref{lem:commutators2} and using that $\mathcal{L}_\xi$ commutes with $d$, $\delta$ and $G$, we get \begin{equation*} \begin{aligned} G \left[ \mathcal{L}_\varphi, \Delta \right]G \delta \beta & = G ( 2 \mathcal{L}_\xi d - 2 \epsilon_{d\eta} \delta + 2 \epsilon_\eta \Delta) \delta G \beta \\[2ex] & = 2 G d \delta G \mathcal{L}_\xi \beta - 2 G\epsilon_{d\eta} \delta^2 G \beta + 2 G \epsilon_\eta \delta \Delta G \beta \\[2ex] & = 2G \epsilon_\eta \delta (\mathrm{Id} - \Pi_\Delta) \beta = 2G \epsilon_\eta \delta \beta. \end{aligned} \end{equation*} Next \begin{equation*} \begin{aligned} G( -2 \mathcal{L}_\xi (n - i_\mathrm{Id}) + 2 \epsilon_\eta \delta ) \beta =-2 G \mathcal{L}_\xi (n-p) \beta+ 2 G \epsilon_\eta \delta\beta = 2 G \epsilon_\eta\delta \beta. \end{aligned} \end{equation*} Thus~\eqref{liephideltaGbeta} becomes \begin{equation*} \begin{aligned} \mathcal{L}_\varphi(\delta G \beta) =- 2 G \epsilon_\eta \delta \beta + 2 G \epsilon_\eta \delta \beta =0, \end{aligned} \end{equation*} as required. \end{proof} Let $\mathfrak{h}(1,n)$ be the Lie algebra of $H(1,n)$. We write $\mathfrak{h}^*(1,n)$ for the dual vector space of $\mathfrak{h}(1,n)$.
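For concreteness we record, as an aside, the Chevalley-Eilenberg CDGA of $\mathfrak{h}(1,n)$; this assumes the standard presentation of the Heisenberg algebra with basis $x_1,\dots,x_n,y_1,\dots,y_n,z$ and the only non-trivial brackets $[x_i,y_i]=z$:

```latex
% Aside (standard Heisenberg presentation assumed): the CE differential
% on the dual basis alpha_i, beta_i, gamma of h^*(1,n).
\begin{equation*}
\begin{aligned}
d_{CE}\alpha_i = d_{CE}\beta_i = 0, \qquad
d_{CE}\gamma = -\sum_{i=1}^{n} \alpha_i \wedge \beta_i,
\end{aligned}
\end{equation*}
in accordance with the description of $d_{CE}$ recalled earlier,
$d_{CE}\alpha = (x \wedge y \mapsto -\alpha([x,y]))$.
```

Thus $\bigwedge \mathfrak{h}^*(1,n)$ has a single non-closed degree-one generator $\gamma$, in line with the model $A[t]/t^2$ appearing in the proof of Proposition~\ref{heisenberg}.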
Consider the diagram \begin{equation*} \begin{aligned} \xymatrix{ && \Omega^\bullet_{\mathcal{L}_\xi, \mathcal{L}_\varphi}(M) \ar[d]^{i}\\ \bigwedge \mathfrak{h}^*(1,n) \ar[r]^-{\psi_\Lambda} & \Omega^\bullet(\Lambda \backslash H(1,n)) \ar[r]^-{f} & \Omega^\bullet(M), } \end{aligned} \end{equation*} where $\psi_\Lambda$ is the Nomizu quasi-isomorphism and $i$ is the inclusion map. By Theorem~\ref{sasakian}, $i$ is a quasi-isomorphism. Since $\mathfrak{h}(1,n)$ is nilpotent, the CDGA $\bigwedge \mathfrak{h}^*(1,n)$ is Sullivan. By the lifting property for Sullivan algebras (see Prop.~\ref{lifting}), there is a homomorphism of CDGAs $h\colon \bigwedge \mathfrak{h}^*(1,n) \to \Omega^\bullet_{\mathcal{L}_\xi, \mathcal{L}_\varphi}(M)$ such that $H^\bullet(i) \circ H^\bullet(h) = H^\bullet(f) \circ H^\bullet(\psi_\Lambda) $. Since $f$, $i$ and $\psi_\Lambda$ are quasi-isomorphisms, we get that $h$ is also a quasi-isomorphism. Thus $\rho := i\circ h \colon \bigwedge \mathfrak{h}^*(1,n) \to \Omega^\bullet(M)$ is a quasi-isomorphism of CDGAs with the required property. \end{proof} In the next two propositions we study the properties of the real homotopy model for Sasakian manifolds obtained in Theorem~\ref{sasakian}. \begin{proposition} \label{splitting} Let $M^{2n+1}$ be a Sasakian manifold. Then \begin{equation*} \begin{aligned} \Omega^k_{\mathcal{L}_\xi, \mathcal{L}_\varphi}(M) \cong \Omega^k_{i_{\xi},\mathcal{L}_\xi, \mathcal{L}_\varphi}(M)\oplus \epsilon_\eta \Omega^{k-1}_{\mathcal{L}_\xi, \mathcal{L}_\varphi}(M). \end{aligned} \end{equation*} \end{proposition} \begin{proof} It follows from Lemma~\ref{lem:commutators} that $\Omega^\bullet_{\mathcal{L}_\xi,\mathcal{L}_\varphi}(M)$ is invariant under the action of $i_\xi$ and $\epsilon_\eta$. Now the claim of the proposition follows from the identities \begin{equation*} \begin{aligned} {} [i_\xi,\epsilon_\eta] =\mathrm{Id},\quad (i_\xi \epsilon_\eta)^2 = i_\xi \epsilon_\eta,\quad (\epsilon_\eta i_\xi)^2 = \epsilon_\eta i_\xi.
\end{aligned} \end{equation*} \end{proof} For $k=0,1$, we have a more explicit description of $\Omega^k_{\mathcal{L}_\xi, \mathcal{L}_\varphi}(M)$. Namely, in the case $k=0$, we get \begin{equation} \label{Omega0liexilievarphi} \begin{aligned} \Omega^0_{\mathcal{L}_\xi, \mathcal{L}_\varphi}(M) = \mathbb{R}. \end{aligned} \end{equation} To see this, let $f \in \Omega^0_{\mathcal{L}_\xi, \mathcal{L}_\varphi}(M)$. Then $i_\xi d f = \mathcal{L}_\xi f = 0$ and $i_\varphi d f = \mathcal{L}_\varphi f = 0$, which imply \begin{equation*} \begin{aligned} df = df ( - \varphi^2 + \eta \otimes \xi) = - i_\varphi^2 df + (i_\xi df) \eta =0. \end{aligned} \end{equation*} Thus $f$ is a constant function. The case $k=1$ is treated in the following proposition. \begin{proposition} \label{Omega1liexiliephi} Let $M^{2n+1}$ be a compact Sasakian manifold. Then \begin{equation*} \begin{aligned} \Omega^1_{\mathcal{L}_\xi, \mathcal{L}_\varphi}(M) \cong \Omega^1_{\Delta}(M) \oplus \mathcal{L}_\varphi \Omega^0_{\mathcal{L}_\xi}(M)\oplus \left\langle \eta \right\rangle . \end{aligned} \end{equation*} \end{proposition} \begin{proof} By Proposition~\ref{splitting}, we have \begin{equation*} \begin{aligned} \Omega^1_{\mathcal{L}_\xi, \mathcal{L}_\varphi}(M) \cong \Omega^1_{i_{\xi},\mathcal{L}_\xi, \mathcal{L}_\varphi}(M)\oplus \epsilon_\eta \Omega^{0}_{\mathcal{L}_\xi, \mathcal{L}_\varphi}(M). \end{aligned} \end{equation*} Now, since $\Omega^0_{\mathcal{L}_\xi, \mathcal{L}_\varphi}(M) = \mathbb{R}$ by~\eqref{Omega0liexilievarphi}, it is enough to show that \begin{equation} \label{dirsum} \begin{aligned} \Omega^1_{i_\xi, \mathcal{L}_\xi, \mathcal{L}_\varphi}(M) = \Omega^1_{\Delta}(M) \oplus \mathcal{L}_\varphi \Omega^0_{\mathcal{L}_\xi}(M). \end{aligned} \end{equation} From Lemma~\ref{lem:commutators} it follows that the subcomplex of $\xi$-basic forms $\Omega^\bullet_{i_\xi, \mathcal{L}_\xi}(M)$ in $\Omega^\bullet(M)$ is invariant under the action of $i_\varphi$.
Notice that for every $\alpha \in \Omega^1_{i_\xi, \mathcal{L}_\xi}(M)$ we have \begin{equation*} \begin{aligned} i_\varphi^2 \alpha = \alpha \circ \varphi^2 = \alpha \circ ( - \mathrm{Id} + \eta \otimes \xi) = - \alpha + (i_\xi \alpha) \eta = - \alpha. \end{aligned} \end{equation*} Thus $i_\varphi$ is an automorphism of $\Omega^1_{i_\xi, \mathcal{L}_\xi}(M)$ and induces an involution on the set of vector subspaces in $\Omega^1_{i_\xi, \mathcal{L}_\xi}(M)$. We claim that $i_\varphi$ swaps the subspace $\Omega^1_{i_\xi,\mathcal{L}_\xi,\mathcal{L}_\varphi}(M)$ with $\Omega^1_{i_\xi, \mathcal{L}_\xi,d}(M)$ and the subspace $\mathcal{L}_\varphi \Omega^0_{\mathcal{L}_\xi}(M)$ with $d \Omega^0_{\mathcal{L}_\xi}(M)$. This follows from the inclusions \begin{equation} \label{inclusions} \begin{aligned} & i_\varphi \mathcal{L}_\varphi \Omega^0_{\mathcal{L}_\xi}(M) \subset d \Omega^0_{\mathcal{L}_\xi}(M),& & i_\varphi d \Omega^0_{\mathcal{L}_\xi}(M) \subset \mathcal{L}_\varphi \Omega^0_{\mathcal{L}_\xi}(M) \\ & i_\varphi \Omega^1_{i_\xi,\mathcal{L}_\xi,\mathcal{L}_\varphi}(M) \subset \Omega^1_{i_\xi, \mathcal{L}_\xi,d}(M),& & i_\varphi \Omega^1_{i_\xi,\mathcal{L}_\xi,d}(M) \subset \Omega^1_{i_\xi, \mathcal{L}_\xi,\mathcal{L}_\varphi}(M). \end{aligned} \end{equation} To prove these inclusions we will use that by Lemma~\ref{lem:commutators} $[\mathcal{L}_\varphi,i_\varphi] \omega = d \omega$ for every $\omega \in \Omega^\bullet_{i_\xi, \mathcal{L}_\xi}(M)$. The first two inclusions follow from \begin{equation*} \begin{aligned} i_\varphi \mathcal{L}_\varphi f = \mathcal{L}_\varphi i_\varphi f - [\mathcal{L}_\varphi ,i_\varphi] f = - df,\quad i_\varphi d f = \mathcal{L}_\varphi f + d i_\varphi f = \mathcal{L}_\varphi f, \end{aligned} \end{equation*} where $f\in \Omega^0_{\mathcal{L}_\xi}(M) = \Omega^0_{i_\xi, \mathcal{L}_\xi}(M)$. To show the last two inclusions in~\eqref{inclusions}, we proceed as follows. 
Let $\alpha\in \Omega^1_{i_\xi, \mathcal{L}_\xi, \mathcal{L}_\varphi}(M)$. We already know that $i_\varphi \alpha$ lies in $\Omega^1_{i_\xi, \mathcal{L}_\xi}(M)$. Next \begin{equation*} \begin{aligned} i_\varphi d \alpha & = i_\varphi [\mathcal{L}_\varphi, i_\varphi] \alpha = i_\varphi \mathcal{L}_\varphi i_\varphi \alpha - i_\varphi^2 \mathcal{L}_\varphi \alpha = i_\varphi \mathcal{L}_\varphi i_\varphi \alpha\\ d i_\varphi \alpha & = [\mathcal{L}_\varphi, i_\varphi] i_\varphi \alpha = \mathcal{L}_\varphi i_\varphi^2 \alpha - i_\varphi \mathcal{L}_\varphi i_\varphi \alpha =\! - \mathcal{L}_\varphi \alpha - i_\varphi \mathcal{L}_\varphi i_\varphi \alpha =\! - i_\varphi \mathcal{L}_\varphi i_\varphi \alpha. \end{aligned} \end{equation*} Taking the difference of these equalities we get $0 = \mathcal{L}_\varphi \alpha = 2 i_\varphi \mathcal{L}_\varphi i_\varphi \alpha$. Hence $di_\varphi \alpha = - i_\varphi \mathcal{L}_\varphi i_\varphi \alpha =0$, which shows $i_\varphi \alpha \in \Omega^1_{i_\xi, \mathcal{L}_\xi, d}(M)$. Now let $\beta \in \Omega^1_{i_\xi, \mathcal{L}_\xi, d}(M)$. We know that $i_\varphi \beta \in \Omega^1_{i_\xi, \mathcal{L}_\xi}(M)$. Next \begin{equation*} \begin{aligned} \mathcal{L}_\varphi i_\varphi \beta &= i_\varphi d i_\varphi \beta - d i_\varphi^2 \beta = i_\varphi d i_\varphi \beta + d \beta = i_\varphi d i_\varphi \beta \\ i_\varphi \mathcal{L}_\varphi \beta &= i_\varphi^2 d \beta - i_\varphi d i_\varphi \beta = - i_\varphi d i_\varphi \beta. \end{aligned} \end{equation*} Taking the difference, we get $0 = d\beta = [\mathcal{L}_\varphi, i_\varphi] \beta = 2 i_\varphi d i_\varphi \beta$. Hence $\mathcal{L}_\varphi i_\varphi \beta = i_\varphi d i_\varphi \beta =0$. This shows that $i_\varphi \beta \in \Omega^1_{i_\xi, \mathcal{L}_\xi, \mathcal{L}_\varphi}(M)$. According to~\cite[Thm.~4.1]{fujitani}, $i_\varphi$ preserves harmonic forms, and so $i_\varphi \Omega^1_\Delta(M) = \Omega^1_\Delta(M)$. 
Thus, applying $i_\varphi$, we see that the direct sum decomposition~\eqref{dirsum} is equivalent to \begin{equation*} \begin{aligned} \Omega^1_{i_\xi, \mathcal{L}_\xi, d}(M) = \Omega^1_{\Delta}(M) \oplus d\Omega^0_{\mathcal{L}_\xi}(M). \end{aligned} \end{equation*} For Sasakian manifolds, $\Omega^1_{\Delta}(M) = \Omega^1_{\Delta_B}(M, \xi)$ (cf.~\cite[Prop.~7.4.13]{galicki}). Now the result follows from the Hodge decomposition for $\xi$-basic forms in the case that $\xi$ is Killing (cf.~\cite{kamber}). \end{proof} Now we use the above proposition to obtain useful properties of the quasi-isomorphisms whose existence was shown in Corollary~\ref{imquasiiso}. \begin{corollary} \label{etaheis} Let $M^{2n+1}$ be a compact aspherical Sasakian manifold with nilpotent fundamental group and $\rho\colon \bigwedge \mathfrak{h}^*(1,n) \to \Omega^\bullet(M)$ a quasi-isomorphism such that $\im\rho$ is contained in $\Omega^\bullet_{\mathcal{L}_\xi, \mathcal{L}_\varphi}(M)$. Then $\rho$ induces an isomorphism between the space of closed elements in $\bigwedge^1 \mathfrak{h}^*(1,n)$ and $\Omega^1_\Delta(M)$. Moreover, there are $\eta_{\mathfrak{h}}\in \mathfrak{h}^*(1,n)$ and $f\in \Omega^0_{\mathcal{L}_\xi}(M)$ such that $\rho(\eta_{\mathfrak{h}}) = \eta + \mathcal{L}_\varphi f$. \end{corollary} \begin{proof} Denote by $\hat\rho\colon \bigwedge \mathfrak{h}^*(1,n) \to \Omega^\bullet_{\mathcal{L}_\xi, \mathcal{L}_\varphi}(M)$ the map induced by $\rho$. The inclusion $\Omega^\bullet_{\mathcal{L}_\xi, \mathcal{L}_\varphi}(M) \hookrightarrow \Omega^\bullet(M)$ is a quasi-isomorphism by Theorem~\ref{sasakian}. As $\rho$ is also a quasi-isomorphism, we conclude that $\hat\rho$ is a quasi-isomorphism. By~\eqref{Omega0liexilievarphi}, we know that $\Omega^0_{\mathcal{L}_\xi, \mathcal{L}_\varphi}(M)$ contains only constant functions.
Hence $H^1 (\Omega^\bullet_{\mathcal{L}_\xi, \mathcal{L}_\varphi}(M))$ coincides with the set of closed forms in $\Omega^1_{\mathcal{L}_\xi, \mathcal{L}_\varphi}(M)$. By examining the decomposition of $\Omega^1_{\mathcal{L}_\xi, \mathcal{L}_\varphi}(M)$ obtained in Proposition~\ref{Omega1liexiliephi}, we see that $\Omega^1_\Delta(M)$ is contained in $H^1(\Omega^\bullet_{\mathcal{L}_\xi, \mathcal{L}_\varphi}(M))$. Since the inclusion $\Omega^\bullet_{\mathcal{L}_\xi, \mathcal{L}_\varphi}(M) \hookrightarrow \Omega^\bullet(M)$ is a quasi-isomorphism and $\Omega^1_\Delta(M)\cong H^1(M)$, the vector spaces $\Omega^1_\Delta(M)$ and $H^1(\Omega^\bullet_{\mathcal{L}_\xi, \mathcal{L}_\varphi}(M))$ have the same dimension. Hence $\Omega^1_\Delta(M) = H^1(\Omega^\bullet_{\mathcal{L}_\xi, \mathcal{L}_\varphi}(M))$. Since $ Z^1 := \ker(d|_{\mathfrak{h}^*(1,n)}) = H^1(\bigwedge \mathfrak{h}^*(1,n))$, the quasi-isomorphism $\hat\rho$ induces an isomorphism $ Z^1 \to H^1(\Omega^\bullet_{\mathcal{L}_\xi, \mathcal{L}_\varphi}(M)) = \Omega^1_\Delta(M)$. Now we will show the existence of $\eta_\mathfrak{h} \in \mathfrak{h}^*(1,n)$ with the claimed properties. It is known that the codimension of $Z^1$ in $\mathfrak{h}^*(1,n)$ is one. Choose an arbitrary $\beta \in \mathfrak{h}^*(1,n) \setminus Z^1$. From Proposition~\ref{Omega1liexiliephi}, we get that $\rho(\beta) = \omega + \mathcal{L}_\varphi h + \lambda\eta $ for some $\omega \in \Omega^1_\Delta(M)$, $h \in \Omega^0_{\mathcal{L}_\xi}(M)$ and $\lambda \in \mathbb{R}$. As $\rho|_{Z^1}\colon Z^1 \to \Omega^1_{\Delta}(M)$ is an isomorphism, there is $\alpha \in Z^1$ such that $\rho (\alpha) =\omega$. Notice that $\beta-\alpha \not \in Z^1$. Hence replacing $\beta$ with $\beta -\alpha$, we can assume that $\omega =0$. Next, we show that $\lambda\not=0$. 
If this is not the case, then using Lemma~\ref{lem:commutators}, we have \begin{equation*} \begin{aligned} i_\xi \rho(\beta) = i_\xi \mathcal{L}_\varphi h = \left[ i_\xi, \mathcal{L}_\varphi \right] h - \mathcal{L}_\varphi i_\xi h = 0. \end{aligned} \end{equation*} Now let $\alpha_1$, \dots, $\alpha_{2n}$ be a basis of $Z^1$. Then $\beta \wedge \alpha_1 \wedge \dots \wedge \alpha_{2n}$ generates the one-dimensional space $\bigwedge^{2n+1} \mathfrak{h}^*(1,n) = H^{2n+1}( \bigwedge \mathfrak{h}^*(1,n))$. As $\rho$ is a quasi-isomorphism, the class of the form \begin{equation*} \begin{aligned} \Psi = \rho(\beta)\wedge \rho(\alpha_1) \wedge \dots \wedge \rho(\alpha_{2n}) \end{aligned} \end{equation*} should generate $H^{2n+1}(M)$. But $\rho(\alpha_k)$ are harmonic $1$-forms, and hence by~\cite{tachibana}, $i_\xi \rho(\alpha_k) =0$ for each $1\le k \le 2n$. As we also have $i_\xi \rho(\beta)=0$, we get $i_\xi \Psi =0$. We also have $d\Psi =0$ for degree reasons. Hence $\Psi \in \Omega^{2n+1}_{i_\xi, \mathcal{L}_\xi}(M) = \Omega^{2n+1}_B(M, \xi) =0$. We get a contradiction. This shows that $\lambda\not=0$. Now define $\eta_\mathfrak{h} = (1/\lambda) \beta$. We get $\rho(\eta_\mathfrak{h}) = \mathcal{L}_\varphi f + \eta$, where $f = (1/\lambda) h$. \end{proof} \section{Proof of Theorem~\ref{main2}} \label{PROOF} Let $M^{2n+1}$ be a compact nilpotent aspherical Sasakian manifold. By Corollaries~\ref{imquasiiso} and~\ref{etaheis}, there are a quasi-isomorphism \begin{equation*} \begin{aligned} \hat{\rho} \colon \bigwedge \mathfrak{h}^*(1,n)\to \Omega^\bullet(M), \end{aligned} \end{equation*} a left-invariant $1$-form $\hat\eta_\mathfrak{h}\in \mathfrak{h}^*(1,n)$ and a smooth function $f \in \Omega^0_{\mathcal{L}_\xi}(M)$ such that $\im \hat{\rho} \subset \Omega^\bullet_{\mathcal{L}_\xi, \mathcal{L}_\varphi}(M)$ and $\hat\rho (\hat\eta_\mathfrak{h}) = \eta + \mathcal{L}_\varphi f$. 
By Theorem~\ref{main1}, there are a smooth homotopy equivalence \mbox{$h \colon M \to N_\Gamma$} and an automorphism $a$ of $\bigwedge \mathfrak{h}^*(1,n)$ such that \begin{equation*} \begin{aligned} \hat \rho = h^* \circ \psi_\Gamma \circ a, \end{aligned} \end{equation*} where $\psi_\Gamma \colon \bigwedge \mathfrak{h}^*(1,n) \to \Omega^\bullet(N_\Gamma)$ is the Nomizu quasi-isomorphism. Define $\rho = \hat\rho\circ a^{-1}$ and $\eta_\mathfrak{h} = a(\hat\eta_\mathfrak{h})$. Then the image of $\rho$ still lies in $\Omega^\bullet_{\mathcal{L}_\xi, \mathcal{L}_\varphi}(M)$ and $\rho(\eta_\mathfrak{h}) = \eta + \mathcal{L}_\varphi f$. We also have $\rho = h^* \circ \psi_\Gamma$. By Remark~\ref{hsurjective}, we know that $h\colon M \to N_\Gamma$ is surjective and has degree $\pm 1$. To show that $h$ is a diffeomorphism we will use an auxiliary map $h_f \colon M \times \mathbb{R}\to N_\Gamma\times \mathbb{R}$. We define $h_f$ as the composition $\tilde{h}\circ \tilde{f}$, where the maps $\tilde{f}$ and $\tilde{h}$ are given by \begin{equation*} \begin{aligned} \tilde{f} \colon M \times \mathbb{R}& \to M \times \mathbb{R} &\phantom{==}&& \tilde{h}\colon M\times \mathbb{R} &\to N_\Gamma \times \mathbb{R} \\ (x, t) & \mapsto (x , t+ f(x)) &\phantom{==}&& (x,t) & \mapsto ( h(x) , t). \end{aligned} \end{equation*} The map $\tilde{f}$ is a diffeomorphism. The map $\tilde{h}$ is a diffeomorphism if and only if $h$ is a diffeomorphism. Hence also $h_f$ is a diffeomorphism if and only if $h$ is a diffeomorphism. \begin{claim} \label{hfsurjectiveproper} The map $h_f$ is surjective, universally closed and proper. \end{claim} \begin{proof} The map $h_f$ is surjective being a composition of two surjective maps. Next, the map $h$ is proper since it is a continuous map between compact topological spaces. Given a Hausdorff topological space $X$ and a locally compact Hausdorff space $Y$, a continuous map $X\to Y$ is proper if and only if it is universally closed. 
Thus $h$ is universally closed, and hence $\tilde{h}$ is also universally closed and thus proper. The same properties hold for $h_f$, as $\tilde{f}$ is a homeomorphism. \end{proof} To get a better grip on the properties of $h_f$, we will show that $h_f$ is a holomorphic map with respect to appropriate complex structures on $M \times \mathbb{R}$ and $N_\Gamma \times \mathbb{R}$. As $M$ is a Sasakian manifold, we know that $M\times \mathbb{R}$ is a Kähler manifold with the complex structure $J(X, c\ddt) = ( \varphi X -c\xi, \eta(X) \ddt )$. Similarly, if we equip $N_\Gamma$ with a normal almost contact structure, then $N_\Gamma\times \mathbb{R}$ becomes a complex manifold. We extend $\eta_\mathfrak{h}$ to an almost contact structure $(\varphi_\mathfrak{h}, \xi_\mathfrak{h}, \eta_\mathfrak{h})$ on the Lie algebra $\mathfrak{h}(1,n)$. To have an endomorphism $\varphi_\mathfrak{h}$ of $\mathfrak{h}(1,n)$ is the same as to have an endomorphism $\varphi_\mathfrak{h}^*$ of $\mathfrak{h}^*(1,n)$. The vector space $\mathfrak{h}^*(1,n)$ is the direct sum of $\left\langle \eta_\mathfrak{h} \right\rangle$ and $Z^1 = \ker(d|_{\mathfrak{h}^*(1,n)})$. By Corollary~\ref{etaheis}, the quasi-isomorphism $\rho$ induces an isomorphism $\tau \colon Z^1 \to \Omega^1_\Delta(M)$. Recall that $\Omega^1_\Delta(M)$ is preserved by $i_\varphi$. Define $\varphi_\mathfrak{h}^* \colon \mathfrak{h}^*(1,n) \to \mathfrak{h}^*(1,n)$ by $\varphi_\mathfrak{h}^* ( z + \lambda \eta_\mathfrak{h}) = \tau^{-1} ( i_\varphi \tau(z))$ for $z \in Z^1$ and $\lambda\in \mathbb{R}$. Now we construct the Reeb vector $\xi_\mathfrak{h} \in \mathfrak{h}(1,n)$. Choose a basis $x_1$, \dots, $x_n$, $y_1$, \dots, $y_n$, $u$ in $\mathfrak{h}(1,n)$ such that $[x_k, y_k] =u$ for all $1\le k \le n$ and all the other commutators are zero. Then $x_1^*$, \dots, $x_n^*$, $y_1^*$, \dots, $y_n^*$ is a basis of $Z^1$.
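As a quick sanity check, the commutation relations $[x_k,y_k]=u$ (all other brackets zero, $u$ central) can be verified numerically in the standard strictly upper-triangular matrix model of the Heisenberg algebra $\mathfrak{h}(1,n)$. The sketch below is purely illustrative and plays no role in the argument; the matrix model and all helper names are our own.

```python
import numpy as np

def E(i, j, m):
    """Elementary m x m matrix with a single 1 in row i, column j (0-indexed)."""
    M = np.zeros((m, m))
    M[i, j] = 1.0
    return M

def heisenberg_basis(n):
    """Strictly upper-triangular (n+2)x(n+2) model of h(1, n):
    x_k spans the first row, y_k the last column, u the corner entry."""
    m = n + 2
    xs = [E(0, k + 1, m) for k in range(n)]
    ys = [E(k + 1, m - 1, m) for k in range(n)]
    u = E(0, m - 1, m)
    return xs, ys, u

def bracket(A, B):
    """Matrix commutator [A, B] = AB - BA."""
    return A @ B - B @ A

n = 3
xs, ys, u = heisenberg_basis(n)
zero = np.zeros_like(u)
for j in range(n):
    for k in range(n):
        # [x_j, y_k] = delta_{jk} u, and x's (resp. y's) commute among themselves
        assert np.array_equal(bracket(xs[j], ys[k]), u if j == k else zero)
        assert np.array_equal(bracket(xs[j], xs[k]), zero)
        assert np.array_equal(bracket(ys[j], ys[k]), zero)
    # u is central
    assert np.array_equal(bracket(u, xs[j]), zero)
    assert np.array_equal(bracket(u, ys[j]), zero)
print("h(1,%d) relations verified" % n)
```

In this model the center is spanned by $u$, matching the choice $\xi_\mathfrak{h}=u/a$ made below.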
Since $\eta_\mathfrak{h}\not\in Z^1$, we have $\eta_\mathfrak{h} = a u^* + \tilde{z}$ with $a\not=0$ and $\tilde{z} \in Z^1$. Define $\xi_\mathfrak{h} =u/a$. Then $\eta_\mathfrak{h}(\xi_\mathfrak{h}) =1$ and $z(\xi_\mathfrak{h}) =0$ for all $z\in Z^1$. \begin{claim} The triple $(\varphi_\mathfrak{h}, \xi_\mathfrak{h}, \eta_\mathfrak{h})$ is a normal almost contact structure on $\mathfrak{h}(1,n)$. \end{claim} \begin{proof} By~\cite{tachibana}, $i_\xi \alpha =0$ for every harmonic $1$-form $\alpha$ on a Sasakian manifold. Thus for $z\in Z^1$ and $\lambda\in \mathbb{R}$, we get \begin{equation*} \begin{aligned} (\varphi_\mathfrak{h}^*)^2 ( z + \lambda \eta_\mathfrak{h}) & = \tau^{-1} ( i_\varphi^2 \tau(z)) = \tau^{-1} ( \tau(z) \circ (-\mathrm{Id} + \eta \otimes \xi) )\\ & =\tau^{-1} ( -\tau(z) + (i_\xi \tau(z))\, \eta ) = -z. \end{aligned} \end{equation*} We also have \begin{equation*} \begin{aligned} (\eta_\mathfrak{h}\otimes \xi_\mathfrak{h})^* ( z + \lambda \eta_\mathfrak{h}) = (z + \lambda\eta_\mathfrak{h})(\xi_\mathfrak{h}) \eta_\mathfrak{h} = \lambda \eta_\mathfrak{h}. \end{aligned} \end{equation*} Thus $(\varphi_\mathfrak{h}^*)^2 = -\mathrm{Id} + (\eta_\mathfrak{h} \otimes \xi_\mathfrak{h})^*$ and, hence, $\varphi_\mathfrak{h}^2 = -\mathrm{Id} + \eta_\mathfrak{h}\otimes \xi_\mathfrak{h}$, i.e. $(\varphi_\mathfrak{h}, \xi_\mathfrak{h}, \eta_\mathfrak{h})$ is an almost contact structure on $\mathfrak{h}(1,n)$. Next, we check the normality of $(\varphi_\mathfrak{h}, \xi_\mathfrak{h}, \eta_\mathfrak{h})$. For this it is enough to show that $\left[ \varphi_\mathfrak{h}, \varphi_\mathfrak{h} \right]_{FN} + 2 d\eta_\mathfrak{h} \otimes \xi_\mathfrak{h} =0$, or, equivalently, that for every $\alpha \in \mathfrak{h}^*(1,n)$ one has \begin{equation} \label{aff} \begin{aligned} \alpha\circ \left[ \varphi_\mathfrak{h}, \varphi_\mathfrak{h} \right]_{FN} = - 2 \alpha ( \xi_\mathfrak{h}) d\eta_\mathfrak{h}.
\end{aligned} \end{equation} Using Frölicher-Nijenhuis calculus (see Section~\ref{FNc}), we get \begin{equation} \label{alphavarphivarphi} \begin{aligned} \alpha \circ \left[ \varphi_\mathfrak{h}, \varphi_\mathfrak{h} \right]_{FN} = i_{\left[ \varphi_\mathfrak{h}, \varphi_\mathfrak{h} \right]_{FN}} \alpha = [ \mathcal{L}_{\varphi_\mathfrak{h}}, i_{\varphi_\mathfrak{h}}] \alpha + \mathcal{L}_{i_{\varphi_\mathfrak{h}} \varphi_\mathfrak{h}}\alpha. \end{aligned} \end{equation} We evaluate the right-hand side of the above equality term by term. Since $\im(\varphi_\mathfrak{h}^*) \subset Z^1$, we have \begin{equation} \label{firstlievarphiivarphialpha} \begin{aligned} \left[ \mathcal{L}_{\varphi_\mathfrak{h}}, i_{\varphi_\mathfrak{h}} \right] \alpha & = i_{\varphi_\mathfrak{h}} d i_{\varphi_\mathfrak{h}} \alpha - d i_{\varphi_\mathfrak{h}}^2 \alpha - i_{\varphi_\mathfrak{h}}^2 d\alpha + i_{\varphi_\mathfrak{h}} d i_{\varphi_\mathfrak{h}} \alpha \\ & = 2i_{\varphi_\mathfrak{h}} d {\varphi_\mathfrak{h}}^*(\alpha) - d ( \alpha \circ ( -\mathrm{Id} + \eta_\mathfrak{h} \otimes \xi_\mathfrak{h})) - i_{\varphi_\mathfrak{h}}^2 d\alpha\\ & = 0 + d \alpha - d (\alpha(\xi_\mathfrak{h}) \eta_\mathfrak{h} ) - i_{\varphi_\mathfrak{h}}^2 d\alpha= d\alpha - \alpha (\xi_\mathfrak{h}) d \eta_\mathfrak{h}- i_{\varphi_\mathfrak{h}}^2 d\alpha. \end{aligned} \end{equation} We now check that $i_{\varphi_\mathfrak{h}} d\alpha =0$ for all $\alpha \in \mathfrak{h}^*(1,n)$. Since the codimension of $Z^1$ in $\mathfrak{h}^*(1,n)$ is one and $\eta_\mathfrak{h}\not\in Z^1$, we get that $d\alpha$ is a scalar multiple of $d\eta_\mathfrak{h}$ for every $\alpha \in \mathfrak{h}^*(1,n)$. Thus it is enough to show that $i_{\varphi_\mathfrak{h}} d\eta_\mathfrak{h} =0$. First we check that $\rho(i_{\varphi_\mathfrak{h}} d\eta_\mathfrak{h}) = i_{\varphi} \rho(d \eta_\mathfrak{h})$.
Since $\eta_\mathfrak{h} = a u^* + \tilde{z}$ with $a\not=0$ and $\tilde{z} \in Z^1$, we get $d\eta_\mathfrak{h} = a\, du^* = - a\sum_{k=1}^n x_k^* \wedge y_k^*$. Thus it is enough to verify that $\rho( i_{\varphi_\mathfrak{h}} ( x_k^*\wedge y_k^*))= i_{\varphi} \rho(x_k^* \wedge y_k^*)$ for all $k$. For every $z \in Z^1$ we have $\rho(z) =\tau(z)$ and $i_{\varphi_\mathfrak{h}} z = \varphi_\mathfrak{h}^* z = \tau^{-1} i_{\varphi} \rho(z)$. Hence \begin{equation*} \begin{aligned} \rho ( i_{\varphi_\mathfrak{h}} (x_k^*\wedge y_k^*))& = \rho ( {\varphi_\mathfrak{h}}^*(x_k^*) \wedge y_k^*) + \rho(x_k^* \wedge {\varphi_\mathfrak{h}}^*(y_k^*)) \\[1ex]& = \tau( {\varphi_\mathfrak{h}}^* (x_k^*)) \wedge \rho(y_k^*) + \rho(x_k^*) \wedge \tau ( {\varphi_\mathfrak{h}}^* (y_k^*)) \\[1ex]&= i_\varphi \rho(x_k^*) \wedge \rho(y_k^*) + \rho(x_k^*) \wedge i_\varphi \rho(y_k^*) = i_\varphi \rho ( x_k^* \wedge y_k^*). \end{aligned} \end{equation*} Now \begin{equation} \label{rhoivarphidetaheis} \begin{aligned} \rho(i_{\varphi_\mathfrak{h}} d\eta_\mathfrak{h}) &= i_\varphi (\rho d \eta_\mathfrak{h}) = i_\varphi d \rho(\eta_\mathfrak{h}) = i_\varphi d ( \eta + \mathcal{L}_\varphi f) \\ & = i_\varphi d\eta + \mathcal{L}_\varphi^2 f + di_\varphi \mathcal{L}_\varphi f. \end{aligned} \end{equation} By~\eqref{etaphi} we know that $i_\varphi d\eta =0$. Moreover, \begin{equation*} \begin{aligned} \mathcal{L}_\varphi^2 f =\frac12 \left[ \mathcal{L}_\varphi, \mathcal{L}_\varphi \right] f = \frac12 \mathcal{L}_{[\varphi, \varphi]_{FN}} f = - \mathcal{L}_{d\eta\otimes \xi} f = - d\eta \wedge \mathcal{L}_\xi f = 0, \end{aligned} \end{equation*} since $f \in \Omega^0_{\mathcal{L}_\xi}(M)$. Next, using Lemma~\ref{lem:commutators}, we get \begin{equation*} \begin{aligned} i_\varphi \mathcal{L}_\varphi f = [i_\varphi , \mathcal{L}_\varphi] f + \mathcal{L}_\varphi i_\varphi f = -df + d\eta \wedge i_\xi f + \eta \wedge \mathcal{L}_\xi f = -df.
\end{aligned} \end{equation*} Hence $d i_\varphi \mathcal{L}_\varphi f =0$. Thus~\eqref{rhoivarphidetaheis} becomes $\rho ( i_{\varphi_\mathfrak{h}} d\eta_\mathfrak{h}) =0$. By Corollary~\ref{rhoinjective}, $\rho$ is an injective map. Hence $i_{\varphi_\mathfrak{h}} d\eta_\mathfrak{h} =0$. This implies that $i_{\varphi_\mathfrak{h}} d\alpha =0$ for all $\alpha \in \mathfrak{h}^*(1,n)$ and hence~\eqref{firstlievarphiivarphialpha} becomes \begin{equation} \label{lievarphiivarphialpha} \begin{aligned} \left[ \mathcal{L}_{\varphi_\mathfrak{h}}, i_{\varphi_\mathfrak{h}} \right] \alpha &= d\alpha - \alpha (\xi_\mathfrak{h}) d \eta_\mathfrak{h}. \end{aligned} \end{equation} To compute the remaining term $\mathcal{L}_{i_{\varphi_\mathfrak{h}} \varphi_\mathfrak{h}} \alpha$ in~\eqref{alphavarphivarphi}, we will use that $i_{\varphi_\mathfrak{h}} \varphi_\mathfrak{h} = - \mathrm{Id} + \eta_\mathfrak{h} \otimes \xi_\mathfrak{h}$, $\mathcal{L}_\mathrm{Id} =d$ and that by~\eqref{liewedge} \begin{equation*} \begin{aligned} \mathcal{L}_{\eta_\mathfrak{h} \otimes \xi_\mathfrak{h}} \alpha =\eta_\mathfrak{h}\wedge \mathcal{L}_{\xi_\mathfrak{h}} \alpha - d\eta_\mathfrak{h} \wedge i_{\xi_\mathfrak{h}} \alpha . \end{aligned} \end{equation*} Also, we notice that for every $x \in \mathfrak{h}(1,n)$ \begin{equation*} \begin{aligned} \mathcal{L}_{\xi_\mathfrak{h}} \alpha (x) = \xi_\mathfrak{h} (\alpha(x)) - \alpha( [\xi_\mathfrak{h}, x]) = 0, \end{aligned} \end{equation*} since $\xi_\mathfrak{h}=u/a$ lies in the center of $\mathfrak{h}(1,n)$. Hence $\mathcal{L}_{\xi_\mathfrak{h}} \alpha =0$ and \begin{equation} \label{lieivarphivarphi} \begin{aligned} \mathcal{L}_{i_{\varphi_\mathfrak{h}} \varphi_\mathfrak{h} }\alpha& =\mathcal{L}_{-\mathrm{Id} + \eta_\mathfrak{h} \otimes \xi_\mathfrak{h}}\alpha = -d\alpha + \eta_\mathfrak{h} \wedge \mathcal{L}_{\xi_\mathfrak{h}} \alpha - d\eta_\mathfrak{h} \wedge i_{\xi_\mathfrak{h}} \alpha \\& = -d \alpha - \alpha(\xi_\mathfrak{h}) d \eta_\mathfrak{h}.
\end{aligned} \end{equation} Substituting the result of~\eqref{lievarphiivarphialpha} and~\eqref{lieivarphivarphi} into~\eqref{alphavarphivarphi} we get \begin{equation*} \begin{aligned} \alpha\circ \left[ \varphi_\mathfrak{h}, \varphi_\mathfrak{h} \right]_{FN} = - 2 \alpha ( \xi_\mathfrak{h}) d\eta_\mathfrak{h}. \end{aligned} \end{equation*} As $\alpha\in \mathfrak{h}^*(1,n)$ was arbitrary, we get that the almost contact structure $(\varphi_\mathfrak{h}, \xi_\mathfrak{h}, \eta_\mathfrak{h})$ is normal. \end{proof} The almost contact structure $(\varphi_\mathfrak{h}, \xi_\mathfrak{h}, \eta_\mathfrak{h})$ on $\mathfrak{h}(1,n)$ induces a left-invariant normal almost contact structure on $H(1,n)$, which in turn induces a normal almost contact structure on $N_\Gamma$. Denote it by $(\varphi', \xi', \eta')$. Consider $\psi_\Gamma \colon \mathfrak{h}^*(1,n) \to \Omega^1(N_\Gamma)$. Since the image of $\psi_\Gamma$ generates $T^*_y N_\Gamma$ for every point $y\in N_\Gamma$, the almost contact structure $(\varphi', \xi',\eta')$ is determined by \begin{equation*} \begin{aligned} \eta' = \psi_\Gamma(\eta_\mathfrak{h}),\quad (\varphi')^* \psi_\Gamma(\alpha) = \psi_\Gamma(\varphi_\mathfrak{h}^* \alpha),\quad i_{\xi'} \psi_\Gamma (\alpha) = \psi_\Gamma (i_{\xi_\mathfrak{h}} \alpha), \end{aligned} \end{equation*} where $\alpha$ is an arbitrary element of $\mathfrak{h}^*(1,n)$. Write $J'$ for the complex structure on $N_\Gamma \times \mathbb{R}$ that corresponds to the normal almost contact structure $(\varphi', \xi', \eta')$ on $N_\Gamma$. Then for every $\beta \in \Omega^1(N_\Gamma)$ and $\lambda \in \mathbb{R}$, we have \begin{equation} \label{Jprimebetalambdadt} \begin{aligned} (J')^*(\beta, \lambda dt ) = (\, (\varphi')^*\beta + \lambda \eta'\, ,\, -i_{\xi'}\beta\, ) . \end{aligned} \end{equation} Notice that a formula similar to~\eqref{Jprimebetalambdadt} holds for the complex structure $J$ on $M\times \mathbb{R}$. 
\begin{claim} \label{hfholomorphic} The map $h_f$ is a holomorphic map between the complex manifolds $(M\times \mathbb{R}, J)$ and $(N_\Gamma \times \mathbb{R}, J')$. \end{claim} \begin{proof} It is enough to show that $h_f^* \circ (J')^* = J^* \circ h_f^*$. Since the image of $\mathrm{pr}_1^* \circ \psi_\Gamma$ together with $\mathrm{pr}_2^* (dt)$ generate $T^*_{(y,t)} (N_\Gamma \times \mathbb{R})$ for every $(y,t) \in N_\Gamma \times \mathbb{R}$, it is enough to show that for every $\alpha\in \mathfrak{h}^*(1,n)$ \begin{equation} \label{hfstarJprimeprpsiGamma} \begin{aligned} h_f^* \circ (J')^* \circ \mathrm{pr}_1^* \circ \psi_\Gamma(\alpha) = J^* \circ h_f^* \circ \mathrm{pr}_1^* \circ \psi_\Gamma(\alpha) \end{aligned} \end{equation} and that $h_f^* \circ (J')^* \circ \mathrm{pr}_2^* (dt) = J^* \circ h_f^* \circ \mathrm{pr}_2^*(dt)$. Since both sides of~\eqref{hfstarJprimeprpsiGamma} are linear in $\alpha$, it is enough to verify it for $\alpha\in Z^1$ and $\alpha= \eta_\mathfrak{h}$. We start with the case $\alpha \in Z^1$. For this we will evaluate both sides of~\eqref{hfstarJprimeprpsiGamma} and compare the results. Notice that $i_{\xi_\mathfrak{h}} \alpha =0$, since $Z^1$ is spanned by the elements $x_1^*$, \dots, $x_n^*$, $y_1^*$, \dots, $y_n^*$ and $\xi_\mathfrak{h}$ is proportional to $u$. Hence \begin{equation*} \begin{aligned} h_f^* \circ (J')^* \circ \mathrm{pr}_1^* \circ \psi_\Gamma(\alpha)& = h_f^* \circ (J')^* ( \psi_\Gamma(\alpha), 0) \\& = h_f^* (\, (\varphi')^* \psi_\Gamma(\alpha)\,,\, -i_{\xi'} \psi_\Gamma (\alpha)\,) \\& = \tilde{f}^*\circ \tilde{h}^* (\, \psi_\Gamma(\varphi_\mathfrak{h}^* \alpha)\,,\, - \psi_\Gamma (i_{\xi_\mathfrak{h}} \alpha)\,) \\& = \tilde{f}^* (\, h^* \psi_\Gamma(\varphi_\mathfrak{h}^* \alpha)\,,\, 0\,). \end{aligned} \end{equation*} As $\mathrm{pr}_1\circ \tilde{f} = \mathrm{pr}_1$, for every $\beta \in \Omega^1(N_\Gamma)$ one has $\tilde{f}^* ( \beta,0 ) = (\beta,0)$.
Since $h^* \circ \psi_\Gamma = \rho$, we get \begin{equation*} \begin{aligned} h_f^* \circ (J')^* \circ \mathrm{pr}_1^* \circ \psi_\Gamma(\alpha)& = (\, \rho(\varphi_\mathfrak{h}^*(\alpha))\,,\, 0). \end{aligned} \end{equation*} On the other hand, \begin{equation*} \begin{aligned} J^*\circ h_f^* \circ \mathrm{pr}_1^* \circ \psi_\Gamma(\alpha)& = J^* \circ \tilde{f}^* \circ \tilde{h}^* (\, \psi_\Gamma (\alpha)\,,\, 0\,) = J^* \circ \tilde{f}^* (\, h^* \psi_\Gamma (\alpha)\,,\, 0\,) \\& = J^* (\, \rho (\alpha)\,,\, 0\,) = (\, \varphi^* (\rho(\alpha))\, ,\, -i_{\xi} \rho(\alpha)\,). \end{aligned} \end{equation*} By Corollary~\ref{etaheis}, the form $\rho(\alpha)$ is harmonic, since $\alpha \in Z^1$. As we mentioned before, $i_{\xi}$ annihilates harmonic $1$-forms on a Sasakian manifold. Hence $i_{\xi} \rho(\alpha)=0$. Next, by definition of $\varphi_\mathfrak{h}$ on $\mathfrak{h}(1,n)$, we have $\varphi^* (\rho(\alpha)) = \rho(\varphi_\mathfrak{h}^*\alpha)$, as $\alpha \in Z^1$. This shows that~\eqref{hfstarJprimeprpsiGamma} holds for all $\alpha \in Z^1$. Now we check~\eqref{hfstarJprimeprpsiGamma} for $\alpha = \eta_\mathfrak{h}$. We have \begin{equation*} \begin{aligned} h_f^* \circ (J')^* \circ \mathrm{pr}_1^* \circ \psi_\Gamma(\eta_\mathfrak{h})& = h_f^* \circ (J')^* (\eta', 0) = h_f^* (\, (\varphi')^* \eta'\, ,\, -i_{\xi'} \eta' \cdot dt\, ) \\& = h_f^* (\, 0 \, ,\, -dt\, ) = - (\mathrm{pr}_2 \circ h_f)^* (dt). \end{aligned} \end{equation*} As $\mathrm{pr}_2 \circ h_f(x,t) = t + f(x)$ for $(x,t) \in M\times \mathbb{R}$, we get $\mathrm{pr}_2 \circ h_f = f \circ \mathrm{pr}_1 + \mathrm{pr}_2$. Hence, since $f^* dt = df$, we have \begin{equation} \label{prhfstardt} \begin{aligned} (\mathrm{pr}_2 \circ h_f)^* (dt) = \mathrm{pr}_1^* \circ f^* (dt) + \mathrm{pr}_2^* (dt) = ( df, dt). \end{aligned} \end{equation} Hence \begin{equation*} \begin{aligned} h_f^* \circ (J')^* \circ \mathrm{pr}_1^* \circ \psi_\Gamma(\eta_\mathfrak{h})= ( -df, -dt).
\end{aligned} \end{equation*} On the other hand, we get \begin{equation*} \begin{aligned} J^* \circ h_f^* \circ \mathrm{pr}_1^* \circ \psi_\Gamma(\eta_\mathfrak{h})& = J^* \circ \tilde{f}^* \circ \tilde{h}^* (\psi_\Gamma(\eta_\mathfrak{h}), 0) = J^* \circ \tilde{f}^* (\, h^*\circ \psi_\Gamma(\eta_\mathfrak{h})\, ,\, 0\,) \\& = J^* ( \rho(\eta_\mathfrak{h}), 0) = J^* (\, \eta + \mathcal{L}_{\varphi} f\,,\, 0\,) \\& = ( \varphi^* \eta + i_{\varphi}\mathcal{L}_{\varphi} f, - (i_{\xi} \eta + i_{\xi} \mathcal{L}_{\varphi} f) dt). \end{aligned} \end{equation*} We have $\varphi^* \eta =0$ and $i_{\xi} \eta =1$. Since $i_{\varphi} f=0$, by Lemma~\ref{lem:commutators} we get \begin{equation*} \begin{aligned} i_{\varphi} \mathcal{L}_{\varphi} f = [i_{\varphi} , \mathcal{L}_{\varphi} ]f = - df + d\eta \wedge i_{\xi} f + \eta \wedge \mathcal{L}_{\xi} f = -df, \end{aligned} \end{equation*} as $f \in\Omega^0_{\mathcal{L}_{\xi}}(M)$. Next, by Lemma~\ref{lem:commutators}, $i_{\xi}$ and $\mathcal{L}_{\varphi}$ commute, thus $i_{\xi} \mathcal{L}_{\varphi} f =0$. Therefore \begin{equation*} \begin{aligned} J^* \circ h_f^* \circ \mathrm{pr}_1^* \circ \psi_\Gamma(\eta_\mathfrak{h})& = (-df, -dt) = h_f^* \circ (J')^* \circ \mathrm{pr}_1^* \circ \psi_\Gamma(\eta_\mathfrak{h}). \end{aligned} \end{equation*} It remains to show that $h_f^* \circ (J')^* \circ \mathrm{pr}_2^* (dt) = J^* \circ h_f^* \circ \mathrm{pr}_2^*(dt)$. We have \begin{equation*} \begin{aligned} h_f^* \circ (J')^* \circ \mathrm{pr}_2^* (dt) & = h_f^* \circ (J')^* ( 0, dt) = h_f^* ( \eta', 0) = (h^* (\psi_\Gamma (\eta_\mathfrak{h})), 0)\\& = (\rho(\eta_\mathfrak{h}),0) = (\eta + \mathcal{L}_{\varphi} f, 0).
\end{aligned} \end{equation*} On the other hand, using~\eqref{prhfstardt} we get \begin{equation*} \begin{aligned} J^* \circ h_f^* \circ \mathrm{pr}_2^*(dt) = J^* ( df, dt ) = (i_{\varphi} df + \eta, - i_{\xi} df) = ( \mathcal{L}_{\varphi} f + \eta, 0), \end{aligned} \end{equation*} since $i_{\xi} f = i_{\varphi} f =0$ and $f \in \Omega^0_{\mathcal{L}_{\xi}}(M)$. Therefore $h_f^* \circ (J')^* = J^* \circ h_f^*$, i.e. $h_f$ is a holomorphic map. \end{proof} Recall that a continuous map $\psi\colon X \to Y$ between two topological spaces is called \emph{finite} if it is closed and has finite fibers. \begin{claim} \label{hffinite} The map $h_f\colon M \times \mathbb{R}\to N_\Gamma \times \mathbb{R}$ is finite. \end{claim} \begin{proof} We already saw in Claim~\ref{hfsurjectiveproper} that $h_f$ is closed. Thus it remains to show that $h_f$ has finite fibers. By Claim~\ref{hfholomorphic}, the map $h_f$ is holomorphic. Fix $y\in N_\Gamma \times \mathbb{R}$. Since $h_f$ is holomorphic, $h_f^{-1}(y)$ is a complex subvariety of $M\times \mathbb{R}$. By Claim~\ref{hfsurjectiveproper}, $h_f$ is proper. Thus $h_f^{-1}(y)$ is compact. Hence $h_f^{-1}(y)$ is a union of finitely many irreducible complex subvarieties (cf. \cite[Sec. 9.2.2]{grauert2}). By Remark~\ref{exact}, the Kähler form on $M \times \mathbb{R}$ is exact. We will show in Lemma~\ref{exactkaehler} that every compact irreducible subvariety of a Kähler manifold with an exact Kähler form is a point. Hence $h_f^{-1}(y)$ is a union of finitely many points. \end{proof} \begin{lemma} \label{exactkaehler} Let $X$ be a Kähler manifold and $Z \subset X$ an irreducible compact complex analytic subvariety in $X$. If the Kähler form of $X$ is exact, then $Z$ is a point.
\end{lemma} \begin{proof} By embedded Hironaka resolution of singularities for the pair $Z \subset X$, there is a proper birational holomorphic map $\pi \colon \widetilde{X}\to X$ of complex manifolds with exceptional locus $\Sigma$, such that the strict transform \begin{equation*} \begin{aligned} \widetilde{ Z } = \overline{\pi^{-1}(Z \setminus \Sigma ) } \end{aligned} \end{equation*} of $Z$ is a smooth submanifold of $\widetilde{X}$ and the restriction of $\pi$ to $\widetilde{Z}$ is an immersion. Denote this restriction by $\sigma$. In our case, we have that $\widetilde{Z}$ is a compact submanifold of $\widetilde{X}$. Indeed, $\widetilde{Z}$ is a closed subset of $\pi^{-1}(Z)$, which is compact since $\pi$ is proper and $Z$ is compact. Write $\omega$ for the Kähler form on $X$. Then $\omega = d\alpha$ for some $\alpha \in \Omega^1(X)$. The form $\sigma^*\omega$ is a Kähler form on $\widetilde{Z}$. Indeed, it is obviously closed and it is positive, since for every nonzero $v \in T\widetilde{Z}$ \begin{equation*} \begin{aligned} \sigma^*\omega (Jv, v) = \omega ( \sigma_*(Jv), \sigma_* v ) = \omega ( J\sigma_* v , \sigma_* v) = g( \sigma_* v , \sigma_* v)>0. \end{aligned} \end{equation*} We used that $\sigma$ is an immersion in the last step. As $\sigma^* \omega = d (\sigma^*\alpha)$, we get that $\widetilde{Z}$ is a compact Kähler manifold with an exact Kähler form, which is possible only if $\widetilde{Z}$ is a finite union of points. Hence $Z = \sigma(\widetilde{Z})$ is a finite union of points. Since $Z$ is irreducible, we conclude that $Z$ is a point. \end{proof} Now we are ready to finish the proof of Theorem~\ref{main2}. \begin{claim} The map $h_f$ is a biholomorphism. In particular, $h_f$ and $h$ are diffeomorphisms. \end{claim} \begin{proof} By~\cite[Sec. 9.3.3]{grauert2} a finite holomorphic surjection between irreducible complex spaces is an analytic covering.
The map $h_f$ is surjective by Claim~\ref{hfsurjectiveproper}, holomorphic by Claim~\ref{hfholomorphic} and finite by Claim~\ref{hffinite}. Thus $h_f$ is an analytic covering. Hence there is a nowhere dense closed subset $T$ in $N_\Gamma \times \mathbb{R}$ such that the induced map \begin{equation*} \begin{aligned} h_T \colon (M\times \mathbb{R}) \setminus h_f^{-1}(T) \to (N_\Gamma \times \mathbb{R}) \setminus T \end{aligned} \end{equation*} is locally biholomorphic. To complete the proof it is enough to show that $h_T$ is a biholomorphism. Indeed, this will imply that $h_f$ is a one-sheeted analytic covering and then by a result in~\cite[Sec. 8.1.2]{grauert2} the map $h_f$ is a bijection. By~\cite[Corollary~8.6]{grauert}, every holomorphic bijection is a biholomorphism. Hence, we will get that $h_f$ is a biholomorphism. To show that $h_T$ is a biholomorphism it is enough to show that $h_T$ is bijective. Let $(y,b) \in R := (N_\Gamma \times \mathbb{R}) \setminus T$. Then $(y,b)$ is a regular value of $h_T$ and thus a regular value of $h_f$. Let $(x,a)$ be in the preimage of $(y,b)$. As $(x,a)$ is a regular point of the holomorphic map $h_f$, we have $\det T_{(x,a)} h_f >0$. But $\det T_{(x,a)}h_f = \det T_x h$, hence $\det T_x h > 0$ for all $x \in h^{-1}(y)$. In particular, $x$ is a regular point of $h$. Since $(x,a)$ was arbitrary, we conclude that $y$ is a regular value of $h$. By Remark~\ref{hsurjective}, the map $h$ has degree $\pm 1$. Hence \begin{equation*} \begin{aligned} \pm 1 = \deg(h) = \sum_{x \in h^{-1}(y)} \mathrm{sign} (\det T_x h) = |h^{-1}(y)|. \end{aligned} \end{equation*} Therefore the number of points in $h^{-1}(y)$ is one. Let $x$ be the unique point in $h^{-1}(y)$. Then $h_T^{-1}(y,b) = h_f^{-1}(y,b) = \{(x, b - f(x))\}$. Hence $h_T$ is a bijection. \end{proof} \section*{Acknowledgment} The authors are grateful to Hisashi Kasuya for suggesting that Corollary~\ref{solvable} should hold. \end{document}
Is normality testing 'essentially useless'? A former colleague once argued to me as follows: We usually apply normality tests to the results of processes that, under the null, generate random variables that are only asymptotically or nearly normal (with the 'asymptotically' part dependent on some quantity which we cannot make large); In the era of cheap memory, big data, and fast processors, normality tests should always reject the null of normal distribution for large (though not insanely large) samples. And so, perversely, normality tests should only be used for small samples, when they presumably have lower power and less control over the type I error rate. Is this a valid argument? Is this a well-known argument? Are there well-known tests for a 'fuzzier' null hypothesis than normality? hypothesis-testing normality-assumption philosophical Jeromy Anglim $\begingroup$ For reference: I don't think that this needed to be community wiki. $\endgroup$ – Shane Sep 8 '10 at 17:57 $\begingroup$ I wasn't sure there was a 'right answer'... $\endgroup$ – shabbychef Sep 8 '10 at 18:01 $\begingroup$ In a certain sense, this is true of all tests of a finite number of parameters. With $k$ fixed (the number of parameters on which the test is carried) and $n$ growing without bounds, any difference between the two groups (no matter how small) will always break the null at some point. Actually, this is an argument in favor of Bayesian tests. $\endgroup$ – user603 Sep 8 '10 at 18:07 $\begingroup$ For me, it is not a valid argument. Anyway, before giving any answer you need to formalize things a little bit. You may be wrong and you may not be but now what you have is nothing more than an intuition: for me the sentence "In the era of cheap memory, big data, and fast processors, normality tests should always reject the null of normal " needs clarifications :) I think that if you try giving more formal precision the answer will be simple.
$\endgroup$ – robin girard Sep 8 '10 at 19:01 $\begingroup$ The thread at "Are large datasets inappropriate for hypothesis testing" discusses a generalization of this question. (stats.stackexchange.com/questions/2516/… ) $\endgroup$ – whuber♦ Sep 9 '10 at 20:17 It's not an argument. It is a (a bit strongly stated) fact that formal normality tests always reject on the huge sample sizes we work with today. It's even easy to prove that when n gets large, even the smallest deviation from perfect normality will lead to a significant result. And as every dataset has some degree of randomness, no single dataset will be a perfectly normally distributed sample. But in applied statistics the question is not whether the data/residuals ... are perfectly normal, but normal enough for the assumptions to hold. Let me illustrate with the Shapiro-Wilk test. The code below constructs a set of distributions that approach normality but aren't completely normal. Next, we test with shapiro.test whether a sample from these almost-normal distributions deviates from normality. In R:

x <- replicate(100, { # generates 100 different tests on each distribution
  c(shapiro.test(rnorm(10) + c(1, 0, 2, 0, 1))$p.value,
    shapiro.test(rnorm(100) + c(1, 0, 2, 0, 1))$p.value,
    shapiro.test(rnorm(1000) + c(1, 0, 2, 0, 1))$p.value,
    shapiro.test(rnorm(5000) + c(1, 0, 2, 0, 1))$p.value)
}) # rnorm gives a random draw from the normal distribution
rownames(x) <- c("n10", "n100", "n1000", "n5000")
rowMeans(x < 0.05) # the proportion of significant deviations

  n10  n100 n1000 n5000
 0.04  0.04  0.20  0.87

The last line checks which fraction of the simulations for every sample size deviates significantly from normality. So in 87% of the cases, a sample of 5000 observations deviates significantly from normality according to Shapiro-Wilk. Yet, if you see the qq plots, you would never ever decide on a deviation from normality.
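The same experiment can be reproduced outside R. Here is a sketch in Python using `scipy.stats.shapiro`; the mixture construction mirrors the R code above (standard normals shifted by the recycled offsets 1, 0, 2, 0, 1), and the replication counts and helper name are my own choices for illustration:

```python
# Sketch: how often does Shapiro-Wilk reject an "almost-normal" mixture
# (normal components with means 0, 1, 2 in a 2:2:1 ratio) as n grows?
import numpy as np
from scipy.stats import shapiro

rng = np.random.default_rng(0)
offsets = np.array([1.0, 0.0, 2.0, 0.0, 1.0])

def rejection_rate(n, reps=100, alpha=0.05):
    """Fraction of `reps` samples of size n that Shapiro-Wilk rejects."""
    shift = np.resize(offsets, n)  # recycle the offsets, like R's vector recycling
    pvals = [shapiro(rng.standard_normal(n) + shift).pvalue for _ in range(reps)]
    return float(np.mean(np.array(pvals) < alpha))

for n in (10, 100, 1000, 5000):
    print(n, rejection_rate(n))
```

The rejection fraction climbs steadily with the sample size even though every sample comes from the same "almost-normal" distribution, which is exactly the point the answer above is making.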
Below you see as an example the qq-plots for one set of random samples with p-values (plots not shown here). edited Mar 5 '18 at 7:03 Joris Meys $\begingroup$ On a side note, the central limit theorem makes the formal normality check unnecessary in many cases when n is large. $\endgroup$ – Joris Meys Sep 8 '10 at 23:19 $\begingroup$ yes, the real question is not whether the data are actually distributed normally but whether they are sufficiently normal for the underlying assumption of normality to be reasonable for the practical purpose of the analysis, and I would have thought the CLT-based argument is normally [sic] sufficient for that. $\endgroup$ – Dikran Marsupial Sep 9 '10 at 9:37 $\begingroup$ This answer appears not to address the question: it merely demonstrates that the S-W test does not achieve its nominal confidence level, and so it identifies a flaw in that test (or at least in the R implementation of it). But that's all--it has no bearing on the scope of usefulness of normality testing in general. The initial assertion that normality tests always reject on large sample sizes is simply incorrect. $\endgroup$ – whuber♦ Oct 24 '13 at 21:16 $\begingroup$ @whuber This answer addresses the question. The whole point of the question is the "near" in "near-normality". S-W tests what is the chance that the sample is drawn from a normal distribution. As the distributions I constructed are deliberately not normal, you'd expect the S-W test to do what it promises: reject the null. The whole point is that this rejection is meaningless in large samples, as the deviation from normality does not result in a loss of power there. So the test is correct, but meaningless, as shown by the QQ plots $\endgroup$ – Joris Meys Oct 25 '13 at 9:36 $\begingroup$ I had relied on what you wrote and misunderstood what you meant by an "almost-Normal" distribution.
I now see--but only by reading the code and carefully testing it--that you are simulating from three standard Normal distributions with means at $0,$ $1,$ and $2$ and combining the results in a $2:2:1$ ratio. Wouldn't you hope that a good test of Normality would reject the null in this case? What you have effectively demonstrated is that QQ plots are not very good at detecting such mixtures, that's all! $\endgroup$ – whuber♦ Oct 25 '13 at 14:17 When thinking about whether normality testing is 'essentially useless', one first has to think about what it is supposed to be useful for. Many people (well... at least, many scientists) misunderstand the question the normality test answers. The question normality tests answer: Is there convincing evidence of any deviation from the Gaussian ideal? With moderately large real data sets, the answer is almost always yes. The question scientists often expect the normality test to answer: Do the data deviate enough from the Gaussian ideal to "forbid" use of a test that assumes a Gaussian distribution? Scientists often want the normality test to be the referee that decides when to abandon conventional (ANOVA, etc.) tests and instead analyze transformed data or use a rank-based nonparametric test or a resampling or bootstrap approach. For this purpose, normality tests are not very useful. Harvey Motulsky $\begingroup$ +1 for a good and informative answer. I find it useful to see a good explanation for a common misunderstanding (which I have incidentally been experiencing myself: stats.stackexchange.com/questions/7022/…). What I miss, though, is an alternative solution to this common misunderstanding. I mean, if normality tests are the wrong way to go, how does one go about checking if a normal approximation is acceptable/justified? $\endgroup$ – posdef Feb 10 '11 at 12:45 $\begingroup$ There's no substitute for the (common) sense of the analyst (or, well, the researcher/scientist).
And experience (learnt by trying and seeing: what conclusions do I get if I assume it is normal? What are the difference if not?). Graphics are your best friends. $\endgroup$ – FairMiles Apr 5 '13 at 15:33 $\begingroup$ I like this paper, which makes the point you made: Micceri, T. (1989). The unicorn, the normal curve, and other improbable creatures. Psychological Bulletin, 105(1), 156-166. $\endgroup$ – Jeremy Miles Aug 20 '14 at 20:18 $\begingroup$ Looking at graphics is great, but what if there are too many to examine manually? Can we formulate reasonable statistical procedures to point out possible trouble spots? I'm thinking of situations like A/B experimenters at large scale: exp-platform.com/Pages/…. $\endgroup$ – dfrankow Dec 29 '14 at 17:41 I think that tests for normality can be useful as companions to graphical examinations. They have to be used in the right way, though. In my opinion, this means that many popular tests, such as the Shapiro-Wilk, Anderson-Darling and Jarque-Bera tests never should be used. Before I explain my standpoint, let me make a few remarks: In an interesting recent paper Rochon et al. studied the impact of the Shapiro-Wilk test on the two-sample t-test. The two-step procedure of testing for normality before carrying out for instance a t-test is not without problems. Then again, neither is the two-step procedure of graphically investigating normality before carrying out a t-test. The difference is that the impact of the latter is much more difficult to investigate (as it would require a statistician to graphically investigate normality $100,000$ or so times...). It is useful to quantify non-normality, for instance by computing the sample skewness, even if you don't want to perform a formal test. Multivariate normality can be difficult to assess graphically and convergence to asymptotic distributions can be slow for multivariate statistics. Tests for normality are therefore more useful in a multivariate setting. 
Tests for normality are perhaps especially useful for practitioners who use statistics as a set of black-box methods. When normality is rejected, the practitioner should be alarmed and, rather than carrying out a standard procedure based on the assumption of normality, consider using a nonparametric procedure, applying a transformation or consulting a more experienced statistician. As has been pointed out by others, if $n$ is large enough, the CLT usually saves the day. However, what is "large enough" differs for different classes of distributions. (In my definition) a test for normality is directed against a class of alternatives if it is sensitive to alternatives from that class, but not sensitive to alternatives from other classes. Typical examples are tests that are directed towards skew or kurtotic alternatives. The simplest examples use the sample skewness and kurtosis as test statistics. Directed tests of normality are arguably often preferable to omnibus tests (such as the Shapiro-Wilk and Jarque-Bera tests) since it is common that only some types of non-normality are of concern for a particular inferential procedure. Let's consider Student's t-test as an example. Assume that we have an i.i.d. sample from a distribution with skewness $\gamma=\frac{E(X-\mu)^3}{\sigma^3}$ and (excess) kurtosis $\kappa=\frac{E(X-\mu)^4}{\sigma^4}-3.$ If $X$ is symmetric about its mean, $\gamma=0$. Both $\gamma$ and $\kappa$ are 0 for the normal distribution. Under regularity assumptions, we obtain the following asymptotic expansion for the cdf of the test statistic $T_n$: $$P(T_n\leq x)=\Phi(x)+n^{-1/2}\frac{1}{6}\gamma(2x^2+1)\phi(x)-n^{-1}x\Big(\frac{1}{12}\kappa (x^2-3)-\frac{1}{18}\gamma^2(x^4+2x^2-3)-\frac{1}{4}(x^2+3)\Big)\phi(x)+o(n^{-1}),$$ where $\Phi(\cdot)$ is the cdf and $\phi(\cdot)$ is the pdf of the standard normal distribution. $\gamma$ appears for the first time in the $n^{-1/2}$ term, whereas $\kappa$ appears in the $n^{-1}$ term.
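The relative weight of the skewness and kurtosis terms can be checked with a small Monte Carlo experiment. The following Python sketch is illustrative only: the distributions (exponential for a skewed alternative, Laplace for a symmetric heavy-tailed one), the sample size, and the replication count are my choices, not from the original answer. In both scenarios we test against the true mean, so any excess rejection reflects distributional distortion of the t-test's level.

```python
# Monte Carlo check: actual type I error of a nominal 5% one-sample t-test
# under a skewed distribution (exponential, skewness 2) versus a symmetric
# heavy-tailed one (Laplace, excess kurtosis 3), at a small sample size.
import numpy as np
from scipy.stats import ttest_1samp

rng = np.random.default_rng(42)

def type1_rate(draw, true_mean, n=20, reps=4000, alpha=0.05):
    """Estimate the rejection rate when the null hypothesis is actually true."""
    rejections = 0
    for _ in range(reps):
        if ttest_1samp(draw(n), popmean=true_mean).pvalue < alpha:
            rejections += 1
    return rejections / reps

skewed_rate = type1_rate(lambda n: rng.exponential(1.0, n), true_mean=1.0)
heavy_rate = type1_rate(lambda n: rng.laplace(0.0, 1.0, n), true_mean=0.0)
print("skewed:", skewed_rate, "heavy-tailed:", heavy_rate)
```

With these settings the skewed distribution typically inflates the level noticeably more than the symmetric heavy-tailed one, consistent with the $n^{-1/2}$ skewness term dominating the expansion above.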
The asymptotic performance of $T_n$ is much more sensitive to deviations from normality in the form of skewness than in the form of kurtosis. It can be verified using simulations that this is true for small $n$ as well. Thus Student's t-test is sensitive to skewness but relatively robust against heavy tails, and it is reasonable to use a test for normality that is directed towards skew alternatives before applying the t-test. As a rule of thumb (not a law of nature), inference about means is sensitive to skewness and inference about variances is sensitive to kurtosis. Using a directed test for normality has the benefit of getting higher power against ''dangerous'' alternatives and lower power against alternatives that are less ''dangerous'', meaning that we are less likely to reject normality because of deviations from normality that won't affect the performance of our inferential procedure. The non-normality is quantified in a way that is relevant to the problem at hand. This is not always easy to do graphically. As $n$ gets larger, skewness and kurtosis become less important - and directed tests are likely to detect if these quantities deviate from 0 even by a small amount. In such cases, it seems reasonable to, for instance, test whether $|\gamma|\leq 1$ or (looking at the first term of the expansion above) $$|n^{-1/2}\frac{1}{6}\gamma(2z_{\alpha/2}^2+1)\phi(z_{\alpha/2})|\leq 0.01$$ rather than whether $\gamma=0$. This takes care of some of the problems that we otherwise face as $n$ gets larger. MånsT $\begingroup$ Now this is a great answer! $\endgroup$ – user603 Apr 4 '14 at 10:45 $\begingroup$ Yea this should be the accepted, really fantastic answer $\endgroup$ – jenesaisquoi Apr 14 '14 at 19:24 $\begingroup$ "it is common that only some types of non-normality are of concern for a particular inferential procedure." - of course one should then use a test directed towards that type of non-normality. 
But the fact that one is using a normality test implies that he cares about all aspects of normality. The question is: is a normality test in that case a good option? $\endgroup$ – rbm Jul 4 '15 at 11:12 $\begingroup$ Tests for the sufficiency of assumptions for particular tests are becoming common, which thankfully removes some of the guesswork. $\endgroup$ – Carl Jan 7 '17 at 21:27 $\begingroup$ @Carl: Can you add some references/examples for that? $\endgroup$ – kjetil b halvorsen Feb 3 at 14:10 IMHO normality tests are absolutely useless for the following reasons: On small samples, there's a good chance that the true distribution of the population is substantially non-normal, but the normality test isn't powerful enough to pick it up. On large samples, things like the T-test and ANOVA are pretty robust to non-normality. The whole idea of a normally distributed population is just a convenient mathematical approximation anyhow. None of the quantities typically dealt with statistically could plausibly have distributions with a support of all real numbers. For example, people can't have a negative height. Something can't have negative mass or more mass than there is in the universe. Therefore, it's safe to say that nothing is exactly normally distributed in the real world. dsimcha $\begingroup$ Electrical potential difference is an example of a real-world quantity that can be negative. $\endgroup$ – nico Sep 19 '10 at 13:03 $\begingroup$ @nico: Sure it can be negative, but there's some finite limit to it because there are only so many protons and electrons in the Universe. Of course this is irrelevant in practice, but that's my point. Nothing is exactly normally distributed (the model is wrong), but there are lots of things that are close enough (the model is useful). Basically, you already knew the model was wrong, and rejecting or not rejecting the null gives essentially no information about whether it's nonetheless useful.
$\endgroup$ – dsimcha Sep 22 '10 at 19:39 $\begingroup$ @dsimcha - I find that a really insightful, useful response. $\endgroup$ – rolando2 May 4 '12 at 21:34 $\begingroup$ @dsimcha, the $t$-test and ANOVA are not robust to non-normality. See papers by Rand Wilcox. $\endgroup$ – Frank Harrell Aug 1 '13 at 11:45 $\begingroup$ @dsimcha "the model is wrong". Aren't ALL models "wrong" though? $\endgroup$ – Atirag Dec 19 '17 at 21:09 I think that pre-testing for normality (which includes informal assessments using graphics) misses the point. Users of this approach assume that the normality assessment has in effect a power near 1.0. Nonparametric tests such as the Wilcoxon, Spearman, and Kruskal-Wallis have efficiency of 0.95 if normality holds. In view of 2. one can pre-specify the use of a nonparametric test if one even entertains the possibility that the data may not arise from a normal distribution. Ordinal cumulative probability models (the proportional odds model being a member of this class) generalize standard nonparametric tests. Ordinal models are completely transformation-invariant with respect to $Y$, are robust, powerful, and allow estimation of quantiles and mean of $Y$. Frank Harrell Before asking whether a test or any sort of rough check for normality is "useful" you have to answer the question behind the question: "Why are you asking?" For example, if you only want to put a confidence limit around the mean of a set of data, departures from normality may or not be important, depending on how much data you have and how big the departures are. However, departures from normality are apt to be crucial if you want to predict what the most extreme value will be in future observations or in the population you have sampled from. Emil Friedman Let me add one small thing: Performing a normality test without taking its alpha-error into account heightens your overall probability of performing an alpha-error. 
You shall never forget that each additional test does this as long as you don't control for alpha-error accumulation. Hence, another good reason to dismiss normality testing. $\begingroup$ I presume you are referring to a situation where one first does a normality test, and then uses the result of that test to decide which test to perform next. $\endgroup$ – Harvey Motulsky Sep 9 '10 at 16:07 $\begingroup$ I refer to the general utility of normality tests when used as method to determine whether or not it is appropriate to use a certain method. If you apply them in these cases, it is, in terms of probability of committing an alpha error, better to perform a more robust test to avoid the alpha error accumulation. $\endgroup$ – Henrik Sep 10 '10 at 10:42 $\begingroup$ This does not make sense to me. Even if you decide between, say, an ANOVA or a rank-based method based on a test of normality (a bad idea of course), at the end of the day you would still only perform one test of the comparison of interest. If you reject normality erroneously, you still haven't reached a wrong conclusion regarding this particular comparison. You might be performing two tests but the only case in which you can conclude that factor such-and-such have an effect is when the second test also rejects $H_0$, not when only the first one does. Hence, no alpha-error accumulation… $\endgroup$ – Gala Jun 8 '13 at 11:24 $\begingroup$ Another way a normality test could increase type I errors is if we're talking about "overall probability of performing an alpha-error." The test itself has an error rate, so overall, our probability of committing an error increases. Emphasis on one small thing too I suppose... $\endgroup$ – Nick Stauner Nov 8 '13 at 15:49 $\begingroup$ @NickStauner That is exactly what I wanted to convey. Thanks for making this point even clearer. $\endgroup$ – Henrik Nov 9 '13 at 12:25 Answers here have already addressed several important points. 
To quickly summarize: There is no consistent test that can determine whether a set of data truly follow a distribution or not. Tests are no substitute for visually inspecting the data and models to identify high leverage, high influence observations and commenting on their effects on models. The assumptions for many regression routines are often misquoted as requiring normally distributed "data" [residuals] and this is interpreted by novice statisticians as requiring that the analyst formally evaluate this in some sense before proceeding with analyses. I am adding an answer firstly to cite one of the statistical articles I personally access and read most frequently: "The Importance of Normality Assumptions in Large Public Health Datasets" by Lumley et al. It is worth reading in its entirety. The summary states: The t-test and least-squares linear regression do not require any assumption of Normal distribution in sufficiently large samples. Previous simulation studies show that "sufficiently large" is often under 100, and even for our extremely non-Normal medical cost data it is less than 500. This means that in public health research, where samples are often substantially larger than this, the t-test and the linear model are useful default tools for analyzing differences and trends in many types of data, not just those with Normal distributions. Formal statistical tests for Normality are especially undesirable as they will have low power in the small samples where the distribution matters and high power only in large samples where the distribution is unimportant. While the large-sample properties of linear regression are well understood, there has been little research into the sample sizes needed for the Normality assumption to be unimportant. In particular, it is not clear how the necessary sample size depends on the number of predictors in the model. The focus on Normal distributions can distract from the real assumptions of these methods.
Linear regression does assume that the variance of the outcome variable is approximately constant, but the primary restriction on both methods is that they assume that it is sufficient to examine changes in the mean of the outcome variable. If some other summary of the distribution is of greater interest, then the t-test and linear regression may not be appropriate. To summarize: normality is generally not worth the discussion or the attention it receives in contrast to the importance of answering a particular scientific question. If the desire is to summarize mean differences in data, then the t-test and ANOVA or linear regression are justified in a much broader sense. Tests based on these models remain of the correct alpha level, even when distributional assumptions are not met, although power may be adversely affected. The reasons why normal distributions may receive the attention they do may be for classical reasons, where exact tests based on F-distributions for ANOVAs and Student-T-distributions for the T-test could be obtained. The truth is, among the many modern advancements of science, we generally deal with larger datasets than were collected previously. If one is in fact dealing with a small dataset, the rationale that those data are normally distributed cannot come from those data themselves: there is simply not enough power. Remarking on other research, replications, or even the biology or science of the measurement process is, in my opinion, a much more justified approach to discussing a possible probability model underlying the observed data. For this reason, opting for a rank-based test as an alternative misses the point entirely. However, I will agree that using robust variance estimators like the jackknife or bootstrap offer important computational alternatives that permit conducting tests under a variety of more important violations of model specification, such as independence or identical distribution of those errors. 
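The bootstrap alternative mentioned in the answer above can be sketched as follows. This is a minimal illustration in Python with simulated, purely hypothetical data; the group sizes, the shift, and the helper name `bootstrap_ci` are my own choices:

```python
# Minimal sketch of a percentile bootstrap confidence interval for a
# difference in means -- an approach that does not lean on normality of
# the underlying data. The two samples are simulated and illustrative only.
import numpy as np

rng = np.random.default_rng(1)
a = rng.exponential(1.0, size=40)        # skewed "control" sample
b = rng.exponential(1.0, size=40) + 0.5  # same shape, shifted mean

def bootstrap_ci(x, y, reps=5000, level=0.95):
    """Percentile bootstrap CI for mean(y) - mean(x), resampling each group."""
    boots = np.empty(reps)
    for i in range(reps):
        boots[i] = (rng.choice(y, size=y.size, replace=True).mean()
                    - rng.choice(x, size=x.size, replace=True).mean())
    return tuple(np.quantile(boots, [(1 - level) / 2, (1 + level) / 2]))

lo, hi = bootstrap_ci(a, b)
print(f"95% bootstrap CI for the mean difference: [{lo:.2f}, {hi:.2f}]")
```

The percentile interval here is the simplest variant; refinements such as the BCa interval or a studentized bootstrap improve coverage in small samples.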
I used to think that tests of normality were completely useless. However, now I do consulting for other researchers. Often, obtaining samples is extremely expensive, and so they will want to do inference with n = 8, say. In such a case, it is very difficult to find statistical significance with non-parametric tests, but t-tests with n = 8 are sensitive to deviations from normality. So what we get is that we can say "well, conditional on the assumption of normality, we find a statistically significant difference" (don't worry, these are usually pilot studies...). Then we need some way of evaluating that assumption. I'm half-way in the camp that looking at plots is a better way to go, but truth be told there can be a lot of disagreement about that, which can be very problematic if one of the people who disagrees with you is the reviewer of your manuscript. In many ways, I still think there are plenty of flaws in tests of normality: for example, we should be thinking about the type II error more than the type I. But there is a need for them. Cliff AB $\begingroup$ Note that the arguments here is that the tests are only useless in theory. In theory, we can always get as many samples as we want... You'll still need the tests to prove that your data is at least somehow close to normality. $\endgroup$ – SmallChess May 20 '15 at 2:43 $\begingroup$ Good point. I think what you're implying, and certainly what I believe, is that a measure of deviation from normality is more important than a hypothesis test. $\endgroup$ – Cliff AB May 20 '15 at 3:50 $\begingroup$ As long as they don't then switch to a non-parametric test and try to interpret the p-values (which are invalidated by conditionally pre-testing), perhaps that's okay?! 
$\endgroup$ – Björn Mar 26 '18 at 16:21 $\begingroup$ Power of a test of normality will be very low at n=8; in particular, deviations from normality that will substantively affect the properties of a test that assumes it may be quite hard to detect at small sample sizes (whether by test or visually). $\endgroup$ – Glen_b♦ Jul 13 '18 at 0:32 $\begingroup$ @Glen_b: I agree; I think this sentiment is in line with caring more about type II errors rather than type I. My point is that there is real world need to test for normality. Whether our current tools really fill that need is a different question. $\endgroup$ – Cliff AB Feb 3 at 16:40 For what it's worth, I once developed a fast sampler for the truncated normal distribution, and normality testing (KS) was very useful in debugging the function. This sampler passes the test with huge sample sizes but, interestingly, the GSL's ziggurat sampler didn't. Arthur B. The argument you gave is an opinion. I think that the importance of normality testing is to make sure that the data does not depart severely from the normal. I use it sometimes to decide between using a parametric versus a nonparametric test for my inference procedure. I think the test can be useful in moderate and large samples (when the central limit theorem does not come into play). I tend to use Wilk-Shapiro or Anderson-Darling tests but running SAS I get them all and they generally agree pretty well. On a different note I think that graphical procedures such as Q-Q plots work equally well. The advantage of a formal test is that it is objective. In small samples it is true that these goodness of fit tests have practically no power and that makes intuitive sense because a small sample from a normal distribution might by chance look rather non normal and that is accounted for in the test. Also high skewness and kurtosis that distinguish many non normal distributions from normal distributions are not easily seen in small samples. 
Michael Chernick $\begingroup$ While it certainly can be used that way, I don't think you will be more objective than with a QQ-Plot. The subjective part with the tests is when to decide that your data is too non-normal. With a large sample rejecting at p=0.05 might very well be excessive. $\endgroup$ – Erik May 4 '12 at 17:56 $\begingroup$ Pre-testing (as suggested here) can invalidate the Type I error rate of the overall process; one should take into account the fact that a pre-test was done when interpreting the results of whichever test it selected. More generally, hypothesis tests should be kept for testing null hypotheses one actually cares about, i.e. that there is no association between variables. The null hypothesis that the data is exactly Normal doesn't fall into this category. $\endgroup$ – guest May 4 '12 at 18:02 $\begingroup$ (+1) There is excellent advice here. Erik, the use of "objective" took me aback too, until I realized Michael's right: two people correctly conducting the same test on the same data will always get the same p-value, but they might interpret the same Q-Q plot differently. Guest: thank you for the cautionary note about Type I error. But why should we not care about the data distribution? Frequently that is interesting and valuable information. I at least want to know whether the data are consistent with the assumptions my tests are making about them! $\endgroup$ – whuber♦ May 4 '12 at 18:25 $\begingroup$ I strongly disagree. Both people get the same QQ-plot and the same p-value. To interpret the p-value you need to take into account the sample size and the violations of normality your test is particularly sensitive to. So deciding what to do with your p-value is just as subjective. The reason you might prefer the p-value is that you believe the data could follow a perfect normal distribution - else it is just a question of how quickly the p-value falls with sample size.
What is more, given a decent sample size the QQ-plot looks pretty much the same and remains stable with more samples. $\endgroup$ – Erik May 4 '12 at 20:30 $\begingroup$ Erik, I agree that test results and graphics require interpretation. But the test result is a number and there won't be any dispute about it. The QQ plot, however, admits of multiple descriptions. Although each may objectively be correct, the choice of what to pay attention to is...a choice. That's what "subjective" means: the result depends on the analyst, not just the procedure itself. This is why, for instance, in settings as varied as control charts and government regulations where "objectivity" is important, criteria are based on numerical tests and never graphical results. $\endgroup$ – whuber♦ May 4 '12 at 21:54 I think a maximum entropy approach could be useful here. We can assign a normal distribution because we believe the data is "normally distributed" (whatever that means) or because we only expect to see deviations of about the same magnitude. Also, because the normal distribution has just two sufficient statistics, it is insensitive to changes in the data which do not alter these quantities. So in a sense you can think of a normal distribution as an "average" over all possible distributions with the same first and second moments. This provides one reason why least squares should work as well as it does. probabilityislogic $\begingroup$ Nice bridging of concepts. I also agree that in cases where such a distribution matters, it is far more illuminating to think about how the data are generated. We apply that principle in fitting mixed models. Concentrations or ratios on the other hand are always skewed. I might add that by "the normal... is insensitive to changes" you mean invariant to changes in shape/scale. $\endgroup$ – AdamO Mar 13 '18 at 13:17 I wouldn't say it is useless, but it really depends on the application.
Note, you never really know the distribution the data is coming from, and all you have is a small set of the realizations. Your sample mean is always finite in sample, but the mean could be undefined or infinite for some types of probability density functions. Let us consider the three types of Levy stable distributions i.e Normal distribution, Levy distribution and Cauchy distribution. Most of your samples do not have a lot of observations at the tail (i.e away from the sample mean). So empirically it is very hard to distinguish between the three, so the Cauchy (has undefined mean) and the Levy (has infinite mean) could easily masquerade as a normal distribution. $\begingroup$ "...empirically it is very hard..." seems to argue against, rather than for, distributional testing. This is strange to read in a paragraph whose introduction suggests there are indeed uses for distributional testing. What, then, are you really trying to say here? $\endgroup$ – whuber♦ Oct 24 '14 at 20:54 $\begingroup$ I am against it, but I also want to be careful than just saying it is useless as I don't know the entire set of possible scenarios out there. There are many tests that depend on the normality assumption. Saying that normality testing is useless is essentially debunking all such statistical tests as you are saying that you are not sure that you are using/doing the right thing. In that case you should not do it, you should not do this large section of statistics. $\endgroup$ – kolonel Oct 24 '14 at 22:16 $\begingroup$ Thank you. The remarks in that comment seem to be better focused on the question than your original answer is! You might consider updating your answer at some point to make your opinions and advice more apparent. $\endgroup$ – whuber♦ Oct 24 '14 at 22:18 $\begingroup$ @whuber No problem. Can you recommend an edit? 
$\endgroup$ – kolonel Oct 24 '14 at 22:21 $\begingroup$ You might start with combining the two posts--the answer and your comment--and then think about weeding out (or relegating to an appendix or clarifying) any material that may be tangential. For instance, the reference to undefined means as yet has no clear bearing on the question and so it remains somewhat mysterious. $\endgroup$ – whuber♦ Oct 24 '14 at 22:23 I think the first 2 questions have been thoroughly answered but I don't think question 3 was addressed. Many tests compare the empirical distribution to a known hypothesized distribution. The critical value for the Kolmogorov-Smirnov test is based on F being completely specified. It can be modified to test against a parametric distribution with parameters estimated. So if fuzzier means estimating more than two parameters then the answer to the question is yes. These tests can be applied the 3 parameter families or more. Some tests are designed to have better power when testing against a specific family of distributions. For example when testing normality the Anderson-Darling or the Shapiro-Wilk test have greater power than K-S or chi square when the null hypothesized distribution is normal. Lillefors devised a test that is preferred for exponential distributions. Tests where "something" important to the analysis is supported by high p-values are I think wrong headed. As others pointed out, for large data sets, a p-value below 0.05 is assured. So, the test essentially "rewards" for small and fuzzy data sets and "rewards" for a lack of evidence. Something like qq plots are much more useful. The desire for hard numbers to decide things like this always (yes/no normal/not normal) misses that modeling is partially an art and how hypotheses are actually supported. wvguy8258 $\begingroup$ It remains that a large sample that is nearly normal will have a low p-value while a smaller sample that is not nearly as normal will often not. 
I do not think that large p-values are useful. Again, they reward for a lack of evidence. I can have a sample with several million data points, and it will nearly always reject the normality assumption under these tests while a smaller sample will not. Therefore, I find them not useful. If my thinking is flawed please show it using some deductive reasoning on this point. $\endgroup$ – wvguy8258 Jul 9 '14 at 7:43 $\begingroup$ This doesn't answer the question at all. $\endgroup$ – SmallChess Feb 2 '15 at 0:52 One good use of normality test that I don't think has been mentioned is to determine whether using z-scores is okay. Let's say you selected a random sample from a population, and you wish to find the probability of selecting one random individual from the population and get a value of 80 or higher. This can be done only if the distribution is normal, because to use z-scores, the assumption is that the population distribution is normal. But then I guess I can see this being arguable too... $\begingroup$ Value of what? Mean, sum, variance, an individual observation? Only the last one relies on the assumed normality of the distribution. $\endgroup$ – whuber♦ Sep 29 '13 at 16:12 $\begingroup$ i meant individual $\endgroup$ – Hotaka Sep 29 '13 at 16:29 $\begingroup$ Thanks. Your answer remains so vague, though, that it is difficult to tell what procedures you are referring to and impossible to assess whether your conclusions are valid. $\endgroup$ – whuber♦ Sep 29 '13 at 16:33 $\begingroup$ The problem with this use is the same as with other uses: The test will be dependent on sample size, so, it's essentially useless. It doesn't tell you whether you can use z scores. $\endgroup$ – Peter Flom♦ May 31 '14 at 0:24 protected by Glen_b♦ Mar 8 '18 at 7:36 Not the answer you're looking for? Browse other questions tagged hypothesis-testing normality-assumption philosophical or ask your own question. 
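Several comments in this thread argue that the p-value of a normality test is driven mostly by sample size. A small simulation makes the point concrete (a sketch in Python with numpy; the 5% scale inflation, seed, and sample sizes are arbitrary choices for illustration). For a fixed, slight departure from normality, the Kolmogorov-Smirnov distance D between the empirical CDF and the standard normal CDF stabilizes near a small constant, while sqrt(n)*D, the quantity the KS p-value is computed from, keeps growing with n, so the p-value is guaranteed to fall toward zero as the sample grows:

```python
import numpy as np
from math import erf, sqrt

def phi(x):
    # Standard normal CDF via the error function.
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def ks_stat(sample):
    """One-sample KS distance between the ECDF and the standard normal CDF."""
    x = np.sort(sample)
    n = len(x)
    cdf = np.array([phi(v) for v in x])
    i = np.arange(1, n + 1)
    return max(np.max(i / n - cdf), np.max(cdf - (i - 1) / n))

rng = np.random.default_rng(0)
for n in (2_000, 20_000, 200_000):
    x = 1.05 * rng.standard_normal(n)   # "nearly normal": scale off by only 5%
    d = ks_stat(x)
    # D settles near a small constant; sqrt(n)*D grows, so the p-value shrinks.
    print(n, round(d, 4), round(sqrt(n) * d, 2))
```

The same slight mis-scaling is "detected" or "not detected" purely as a function of how much data you have, which is exactly the objection raised above.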
Jonathan Mboyo Esole

Jonathan Mboyo Esole (born February 24, 1977) is an associate professor of mathematics at Northeastern University. He works on the geometry of string theory.

[Image: Esole in 2018]
Born: 24 February 1977
Alma mater: Free University of Brussels; University of Cambridge; Leiden University
Institutions: Northeastern University; Harvard University
Doctoral advisor: Ana Achúcarro
Website: northeastern.edu/esole

Early life and career

Esole was born in Kinshasa and attended Collège Boboto. He moved to Belgium at the age of three and did not return to the Congo for six years.[1] He studied at the Free University of Brussels, the same university his father had attended.[1] In his thesis, Unicité de la supergravité D=4 N=1 par les méthodes BRST, he demonstrated the uniqueness of N=1 supergravity in four spacetime dimensions with minimal assumptions using homological methods. This was a major result in the field, as it showed that, under mild assumptions, if a free graviton is coupled to a particle of spin 3/2, the only consistent theory will have supersymmetry. He graduated summa cum laude in 2001 and won the prize for the best thesis.[2] He joined the University of Cambridge for his doctoral studies to study Part III of the Mathematical Tripos, working under director of studies Fernando Quevedo. He moved to Leiden University for his PhD, working with Ana Achúcarro on cosmic strings.[3] His thesis considered Fayet-Iliopoulos terms and BPS cosmic strings in N = 2 supergravity.[4] He served as a visiting fellow at Stanford University, working with Renata Kallosh.
He joined KU Leuven as a Marie Curie Fellow, working with Antoine Van Proeyen and Frederic Denef on string theory and supergravity.[5] He spoke at the Marie Curie Fellow Training Workshop.[6] Career Esole works on F-theory, a branch of string theory at the interface with mathematics.[7] He joined the Department of Physics at Harvard University as a postdoctoral research fellow in 2008. He moved to the Department of Mathematics in 2013, and was appointed Benjamin Peirce Fellow working with the Fields Medal winner Shing-Tung Yau.[8] He worked on SU(5) models and opened the door to the systematic use of crepant resolutions of singularities in F-theory.[9] He also studied D-brane deconstructions in IIB Orientifolds.[10] He delivered a keynote at the Conference for African American Research in Mathematical Sciences.[11] He was a member of the Center for the Fundamental Law of Nature. Esole was appointed as an assistant professor at Northeastern University in 2016.[12] He was awarded a National Science Foundation grant to work on Elliptic Fibrations and String Theory in 2014. This allowed him to investigate F-theory and elliptic fibrations.[13] In 2017 Esole was named a NextEinstein Forum Fellow.[14] This award celebrates the best young African scientists.[8] He is interested in African education and supports the Lumumba Lab.[15] He is part of the Malaika school, an initiative to teach girls in Kalebuka.[8] In 2022, Esole was listed as one of the ten members of the newly created African Advisory Board of the French National Center for Scientific Research (CNRS). Antoine Petit, the General Director of CNRS described the need for this new advisory board as follows: "Scientific cooperation between Africa and Europe is a priority for the CNRS: we want to set up lasting partnerships of excellence to meet the challenges of today and tomorrow. 
To achieve this, we have surrounded ourselves with personalities with whom we can take the right measure of the field, who will help us question our practices and usefully mobilize our forces".[16] In July 2022, Esole became a nonresident senior fellow of the Atlantic Council's Africa Center, directed by Ambassador Rama Yade.[17] The Africa Center was established in September 2009 to help transform U.S. and European policy approaches to Africa by emphasizing the building of strong geopolitical partnerships with African states and strengthening economic growth and prosperity on the continent.[18] After the 2021 eruption of Mount Nyiragongo, a volcano near the city of Goma, he created the Linda Project, a platform for African scientists, technologists, and entrepreneurs that provides training and equipment and advocates for open-science research and for African countries to own and control their data and scientific equipment. In May 2022, the Linda Project helped establish the first Congolese-owned seismic network monitoring the volcanoes Mount Nyiragongo and Mount Nyamulagira. Esole personally designed the telemetry of the network.[17]

Awards and honours

• 2022 Member of the African Advisory Board of CNRS[17]
• 2018 International Dunia Award[19][20]
• 2017 NextEinstein Forum Fellow[21]
• 2013 Harvard University Benjamin Peirce Fellow
• 2006 European Commission Marie Curie Fellow
• 2001 University of Cambridge Philippe Wiener-Maurice Anspach Foundation Grant
• 2001 Free University of Brussels Board of Honour
• 2001 Free University of Brussels A.Sc.Br Prize, Best Thesis in the Faculty of Sciences
• 1997 Association of Congolese Journalists for Progress, Prix d'Excellence[21]
• 1997 Baccalaureate of the Republic of Zaire, Vice Laureate

References

1. "Jonathan Mboyo Esole : un futur Einstein pour l'Afrique ?". Le Point Afrique (in French). Retrieved 2018-11-09. 2. "Jonathan Mboyo Esole". gg2018.nef.org. 24 November 2017. Retrieved 2018-11-09. 3.
Achúcarro, Ana; Celi, Alessio; Esole, Mboyo; Bergh, Joris Van den; Proeyen, Antoine Van (2006). "D -term cosmic strings from N = 2 supergravity". Journal of High Energy Physics. 2006 (1): 102. arXiv:hep-th/0511001. Bibcode:2006JHEP...01..102A. doi:10.1088/1126-6708/2006/01/102. ISSN 1126-6708. S2CID 15818035. 4. "Annual Report 2006" (PDF). Retrieved 2018-11-15. 5. "Research Interest of Mboyo Esole". insti.physics.sunysb.edu. Retrieved 2018-11-09. 6. "3rd RTN Workshop, Valencia 2007". www.uv.es. Retrieved 2018-11-16. 7. "Jonathan Mboyo Esole". www.northeastern.edu. Retrieved 2018-11-09. 8. Malaika. "Meet Jonathan Mboyo Esole | Malaika". malaika.org. Retrieved 2018-11-09. 9. Esole, Mboyo; Yau, Shing-Tung (2013). "Small resolutions of SU(5)-models in F-theory". Advances in Theoretical and Mathematical Physics. 17 (6): 1195–1253. arXiv:1107.0733. doi:10.4310/atmp.2013.v17.n6.a1. ISSN 1095-0761. S2CID 56354790. 10. Collinucci, Andres; Denef, Frederik; Esole, Mboyo (2009). "D-brane Deconstructions in IIB Orientifolds". JHEP. 0902 (2): 005. arXiv:0805.1573. Bibcode:2009JHEP...02..005C. doi:10.1088/1126-6708/2009/02/005. 11. Institute for Advanced Study (2016-06-30), Birational Geometry of Elliptic Fibrations and Combinatorics - Jonathan Mboyo Esole, retrieved 2018-11-09 12. "Jonathan Mboyo Esole". www.northeastern.edu. Retrieved 2018-11-09. 13. "NSF Award Search: Award#1701635 – Elliptic Fibrations and String Theory". nsf.gov. Retrieved 2018-11-09. 14. "Math professor named Next Einstein Fellow". Retrieved 2018-11-09. 15. "Interview. Jonathan Mboyo Esole : " Je voudrais faire avancer une culture d'excellence dans l'enseignement des sciences et la recherche scientifique " | adiac-congo.com : toute l'actualité du Bassin du Congo". www.adiac-congo.com (in French). Retrieved 2018-11-09. 16. "Le CNRS s'entoure d'un conseil consultatif sur l'Afrique". Retrieved 2022-08-13. 17. "Jonathan Esole is a nonresident fellow at the Atlantic Council's Africa Center". 
Retrieved 2022-08-13. 18. "Africa Center". Atlantic Council. Retrieved 13 August 2022. 19. "Professeur Jonathan MBOYO ESOLE | Soulier d'Ébène". www.soulierdebene.be (in French). Retrieved 2018-11-16. 20. Mboyo Esole (2018-05-16), Jonathan Esole, 2018 Dunia Award., retrieved 2018-11-16 21. "Jonathan Esole". Next Einstein Forum. 2017-09-11. Retrieved 2018-11-16.
\begin{document} \title{Spectrally Accurate Causality Enforcement using SVD-based Fourier Continuations for High Speed Digital Interconnects} \author{Lyudmyla~L.~Barannyk,~\IEEEmembership{Member,~IEEE,} Hazem~A.~Aboutaleb, Aicha~Elshabini,~\IEEEmembership{Fellow, IEEE \& Fellow, IMAPS,} and~Fred~D.~Barlow,~\IEEEmembership{Senior Member, IEEE \& Fellow, IMAPS} \thanks{Lyudmyla~L.~Barannyk is with the Department of Mathematics, University of Idaho, Moscow, ID 83844 USA (e-mail: [email protected]).} \thanks{Hazem~A.~Aboutaleb is with the Department of Electrical \& Computer Engineering, University of Idaho, Moscow, ID 83844 USA (e-mail: [email protected]).} \thanks{Aicha~Elshabini and Fred Barlow are with the Department of Electrical \& Computer Engineering, University of Idaho, Moscow, ID 83844 USA (e-mail: [email protected]; [email protected]).} } \markboth {Lyudmyla~L.~Barannyk \MakeLowercase{\textit{et al.}}: Spectrally Accurate Causality Enforcement using SVD-based Fourier Continuations}{IEEE Transactions on Components, Packaging and Manufacturing Technology} \maketitle \begin{abstract} We introduce an accurate and robust technique for assessing causality of network transfer functions given in the form of bandlimited discrete frequency responses. These transfer functions are commonly used to represent the electrical response of high speed digital interconnects used on chip and in electronic package assemblies. In some cases small errors in the model development lead to non-causal behavior that does not accurately represent the electrical response and may lead to a lack of convergence in simulations that utilize these models. The approach is based on Hilbert transform relations or Kramers-Kr\"onig dispersion relations and a construction of causal Fourier continuations using a regularized singular value decomposition (SVD) method.
Given a transfer function, non-periodic in general, this procedure constructs highly accurate Fourier series approximations on the given frequency interval by allowing the function to be periodic in an extended domain. The causality dispersion relations are enforced spectrally and exactly. This eliminates the necessity of approximating the transfer function behavior at infinity and of explicitly computing the Hilbert transform. We perform the error analysis of the method and take into account the possible presence of noise or approximation errors in the data. The developed error estimates can be used in verifying causality of the given data. The performance of the method is tested on several analytic and simulated examples that demonstrate the excellent accuracy and reliability of the proposed technique in agreement with the obtained error estimates. The method is capable of detecting very small localized causality violations with amplitudes close to the machine precision. \end{abstract} \begin{IEEEkeywords} Causality, dispersion relations, Kramers-Kr\"onig relations, Fourier continuation, periodic continuation, Hilbert transform, least squares solution, regularized SVD, high speed interconnects. \end{IEEEkeywords} \IEEEpeerreviewmaketitle \section{Introduction} \label{Introduction} The design of high speed interconnects, which are common on chip and at the package level in digital systems, requires systematic simulations at different levels in order to evaluate the overall electrical system performance and avoid signal and power integrity problems \cite{Swaminathan_Engin_2007}. To conduct such simulations, one needs suitable models that capture the relevant electromagnetic phenomena that affect the signal and power quality.
These models are often obtained either from direct measurements or electromagnetic simulations in the form of discrete port frequency responses that represent scattering, impedance, or admittance transfer functions or transfer matrices in scalar or multidimensional cases, respectively. Once frequency responses are available, a corresponding macromodel can be derived using several techniques such as the Vector Fitting \cite{Gustavsen_Semlyen_1999}, the Orthonormal Vector Fitting \cite{Deschrijver_Haegeman_Dhaene_2007}, the Delay Extraction-Based Passive Macromodeling \cite{Charest_Achar_Nakhla_Erdin_2009} among others. However, if the data are contaminated by errors, it may not be possible to derive a good model. These errors may be due to a noise, inadequate calibration techniques or imperfections of the test set-up in case of direct measurements or approximation errors due to the meshing techniques, discretization errors and errors due to finite precision arithmetic occurring in numerical simulations. Besides, these data are typically available over a finite frequency range as discrete sets with a limited number of samples. All this may affect the performance of the macromodeling algorithm resulting in non-convergence or inaccurate models. Often the underlying cause of such behavior is the lack of causality in a given set of frequency responses \cite{Triverio_Grivet_Talocia_Nakhla_Canavero_Achar_2007}. Causality can be characterized either in the time domain or the frequency domain. In the time domain, a system is said to be causal if the effect always follows the cause. This implies that a time domain impulse response function $h(t)=0$ for $t<0$, and a causality violation is stated if any nonzero value of $h(t)$ is found for some $t<0$. To analyze causality, one can convert the frequency responses to the time domain using the inverse discrete Fourier transform. 
This approach suffers from the well-known Gibbs phenomenon that is inherent for functions that are not smooth enough and represented by a truncated Fourier series. Examples of such functions include impulse response functions of typical interconnects that have jump discontinuities and whose spectrum is truncated since the frequency response data are available only on a finite length frequency interval. Direct application of the inverse discrete Fourier transform to raw frequency response data causes severe overshooting and undershooting near the singularities. This problem is usually addressed by windowing the Fourier data to deal with the slow decay of the Fourier spectrum \cite[Ch. 7]{Oppenheim_Schafer_1989}. Windowing can also be applied in the Laplace domain \cite{Blais_Cimmino_Ross_Granger_2009} to respect causality. This approach is shown to be more accurate and efficient than the Fourier approach \cite{Granger_Ross_2009}. There are other filtering techniques that deal with the Gibbs phenomenon but they require some knowledge of the location of singularities (see \cite{Gottlieb_Shu_1997, Gelb_Tanner_2006, Tadmor_2007, Mhaskar_Prestin_2009} and references therein). A related paper \cite{Beylkin_Monzon_2009} employs nonlinear extrapolation of Fourier data to avoid the Gibbs phenomenon and the use of windows/filtering. In the frequency domain, a system is said to be causal if a frequency response given by the transfer function $H(w)$ satisfies the dispersion relations also known as Kramers-Kr\"onig relations \cite{Kramers_1927, Kronig_1926}. The dispersion relations can be written using the Hilbert transform. They represent the fact that the real and imaginary parts of a causal function are related through the Hilbert transform. The Hilbert transform may be expressed in both continuous and discrete forms and is widely used in circuit analysis, digital signal processing, remote sensing and image reconstruction \cite{Guillemin_1977, Oppenheim_Schafer_1989}.
Applications in electronics include reconstruction \cite{Amari_Gimersky_Bornemann_1995} and correction \cite{Tesche_1992} of measured data, delay extraction \cite{Knockaert_Dhaene_2008}, interpolation/extrapolation of frequency responses \cite{Narayana_Rao_Adve_Sarker_Vannicola_Wicks_Scott_1996}, time-domain conversion \cite{Luo_Chen_2005}, estimation of optimal bandwidth and data density using causality checking \cite{Young_2010} and causality enforcement techniques using generalized dispersion relations \cite{Triverio_Grivet_Talocia_2006, Triverio_Grivet_Talocia_2006_2, Triverio_Grivet_Talocia_2008}, causality enforcement using minimum phase and all-pass decomposition and delay extraction \cite{Mandrekar_Swaminathan_2005_2, Mandrekar_Swaminathan_2005, Mandrekar_Srinivasan_Engin_Swaminathan_2006, Lalgudi_Srinivasan_Casinovi_Mandrekar_Engin_Swaminathan_Kretchmer_2006}, causality verification using minimum phase and all-pass decomposition that avoids Gibbs errors \cite{Xu_Zeng_He_Han_2006}, causality characterization through analytic continuation for $L_2$ integrable functions \cite{Dienstfrey_Greengard_2001}, causality enforcement using periodic polynomial continuations \cite{Aboutaleb_Barannyk_Elshabini_Barlow_WMED13, Barannyk_Aboutaleb_Elshabini_Barlow_IMAPS, Barannyk_Aboutaleb_Elshabini_Barlow_IMAPS2014} and the subject of the current paper. The Hilbert transform that relates the real and imaginary parts of a transfer function $H(w)$ is defined on the infinite domain which can be reduced to $[0,\infty)$ by symmetry properties of $H(w)$ for real impulse response functions. However, the frequency responses are usually available over a finite length frequency interval, so the infinite domain is either truncated or behavior of the function for large $w$ is approximated. This is necessary since measurements can only be practically conducted over a finite frequency range and often the cost of the measurements scales in an exponential manner with respect to frequency. 
Likewise simulation tools have a limited bandwidth and there is a computational cost associated with each frequency data point that generally precludes very large bandwidths in these data sets. Usually $H(w)$ is assumed to be square integrable, which would require the function to decay at infinity. When a function does not decay at infinity or even grows, generalized dispersion relations with subtractions can be successfully used to reduce the dependence on high frequencies and allow a domain truncation \cite{Triverio_Grivet_Talocia_2006, Triverio_Grivet_Talocia_2006_2, Triverio_Grivet_Talocia_2008}. A review of some previous work on generalized dispersion relations and other methods that address the problem of having finite frequency range is provided in \cite{Triverio_Grivet_Talocia_2008}. We take another approach and instead of approximating the behavior of $H(w)$ for large $w$ or truncating the domain, we construct a causal periodic continuation or causal Fourier continuation of $H(w)$ by requiring the transfer function to be periodic and causal in an extended domain of finite length. In \cite{Aboutaleb_Barannyk_Elshabini_Barlow_WMED13, Barannyk_Aboutaleb_Elshabini_Barlow_IMAPS, Barannyk_Aboutaleb_Elshabini_Barlow_IMAPS2014}, polynomial periodic continuations were used to make a transfer function periodic on an extended frequency interval. In these papers, the raw frequency responses were used on the original frequency interval. Once a periodic continuation is constructed, the spectrally accurate Fast Fourier Transform \cite{Cooley_Tukey_1965} implemented in FFT/IFFT routines can be used to compute discrete Hilbert transform and enforce causality. The accuracy of the method was shown to depend primarily on the degree of the polynomial, which implied the smoothness up to some order of the continuation at the end points of the given frequency domain. 
This in turn allowed to reduce the boundary artifacts compared to applying the discrete Hilbert transform directly to the data without any periodic continuation, which is implemented in the function {\tt hilbert} from the popular software Matlab. In the current work we implement the idea of periodic continuations of an interconnect transfer function by approximating this function with a causal Fourier series in an extended domain. The approach allows one to obtain extremely accurate approximations of the given function on the original interval. The causality conditions are imposed exactly and directly on Fourier coefficients, so there is no need to compute Hilbert transform numerically. This eliminates the necessity of approximating the behavior of the transfer function at infinity similar to polynomial continuations employed in \cite{Aboutaleb_Barannyk_Elshabini_Barlow_WMED13, Barannyk_Aboutaleb_Elshabini_Barlow_IMAPS, Barannyk_Aboutaleb_Elshabini_Barlow_IMAPS2014}, and does not require the use of Fast Fourier Transform. The advantage of the method is that it is capable of detecting very small localized causality violations with amplitude close to the machine precision, at the order of $10^{-13}$, and a small uniform approximation error can be achieved on the entire original frequency interval, so it does not have boundary artifacts reported by using {\tt hilbert} Matlab function, polynomial continuations \cite{Aboutaleb_Barannyk_Elshabini_Barlow_WMED13, Barannyk_Aboutaleb_Elshabini_Barlow_IMAPS, Barannyk_Aboutaleb_Elshabini_Barlow_IMAPS2014} or generalized dispersion relations \cite{Triverio_Grivet_Talocia_2006, Triverio_Grivet_Talocia_2006_2, Triverio_Grivet_Talocia_2008}. The performed error analysis unbiases an error due to approximation of a transfer function with a causal Fourier series from causality violations that are due to the presence of a noise or approximation errors in data. 
The developed estimates of upper bounds for these errors can be used in checking causality of the given data. The paper is organized as follows. Section \ref{causality_dispersion_relations} provides a background on causality for linear time-translation invariant systems, dispersion relations and the motivation for the proposed method. In Section \ref{Fourier_continuation} we derive causal spectrally accurate Fourier continuations using truncated singular value decomposition (SVD). In Section \ref{error_analysis} we perform the error analysis of the method and take into account possible noise or approximation errors in the given data. We outline an approach for verifying causality of the given data by using the developed error estimates. In Section \ref{examples}, the technique is applied to several analytic and simulated examples, both causal and non-causal, to show the excellent performance of the proposed method in very good agreement with the developed error estimates. Finally, in Section \ref{conclusions} we present our conclusions. \section{Causality for Linear Time-Translation Invariant Systems} \label{causality_dispersion_relations} Consider a linear and time-invariant physical system with the impulse response ${\mathbf h}(t,t')$ subject to a time-dependent input ${\mathbf f(t)}$, to which it responds by an output ${\mathbf x(t)}$. Linearity of the system implies that the output ${\mathbf x(t)}$ is a linear functional of the input ${\mathbf f(t)}$, while time-translation invariance means that if the input is shifted by some time interval $\tau$, the output is also shifted by the same interval, and, hence, the impulse response function ${\mathbf h(t,t')}$ depends only on the difference between the arguments.
Thus, the response ${\mathbf x(t)}$ can be written as the convolution of the input ${\mathbf f(t)}$ and the impulse response ${\mathbf h(t-t')}$ \cite{Nussenzveig_1972} \begin{equation} \label{1_3_2} {\mathbf x(t)}=\int_{-\infty}^\infty {\mathbf h(t-t')} {\mathbf f(t')}dt' = {\mathbf h(t)}* {\mathbf f}(t). \end{equation} Denote by \begin{equation} \label{1_3_3} {\mathbf H}(w)=\int_{-\infty}^\infty {\mathbf h}(\tau)\mathop{\rm e}\nolimits^{-i w\tau}d\tau \end{equation} the Fourier transform of ${\mathbf h(t)}$\footnote{Please note that we use an opposite sign of the exponent in the definition of the Fourier transform than in \cite{Nussenzveig_1972}.}. ${\mathbf H}(w)$ is also called the transfer matrix in multidimensional case or transfer function in a scalar case. The system is causal if the output cannot precede the input, i.e. if ${\mathbf f }(t)=0$ for $t<T$, the same must be true for ${\mathbf x(t)}$. This primitive causality condition in the time domain implies ${\mathbf h}(\tau)=0$, $\tau<0$, and (\ref{1_3_3}) becomes \begin{equation} \label{1_3_6} {\mathbf H}(w)=\int_{0}^\infty {\mathbf h}(\tau)\mathop{\rm e}\nolimits^{-i w\tau}d\tau. \end{equation} Note that the integral in (\ref{1_3_6}) is extended only over a half-axis, which implies that ${\mathbf H}(w)$ has a regular analytic continuation in lower half $w$-plane. Examples of physical systems that satisfy the above conditions include electric networks with ${\mathbf f}$ the input voltage, ${\mathbf x}$ the output current, ${\mathbf H}(w)$ the admittance of the network; ${\mathbf f}$ the input current, ${\mathbf x}$ the output voltage, ${\mathbf H}(w)$ the impedance; ${\mathbf f}$, ${\mathbf x}$ both power waves, ${\mathbf H}(w)$ the scattering. For simplicity, we consider the case with a scalar impulse response $h(t)$ but the approach can also be extended to the multidimensional case for any element of the impulse response matrix ${\mathbf h(t)}$. 
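As a small numerical illustration of the one-sided integral (\ref{1_3_6}) (a sketch added here, not taken from the cited references; the causal impulse response $h(t)=\mathop{\rm e}\nolimits^{-t}$ for $t\ge 0$ and all numerical parameters are arbitrary choices), the transfer function of a causal exponential decay can be computed by quadrature and compared with its closed form $H(w)=1/(1+iw)$:

```python
import numpy as np

# Causal impulse response: h(t) = exp(-t) for t >= 0, and zero for t < 0.
t = np.linspace(0.0, 40.0, 40_001)   # truncation of the semi-infinite integral
dt = t[1] - t[0]
h = np.exp(-t)

def transfer(w):
    """Trapezoidal quadrature of H(w) = int_0^inf h(tau) exp(-i w tau) dtau."""
    f = h * np.exp(-1j * w * t)
    return dt * (f.sum() - 0.5 * (f[0] + f[-1]))

w = np.linspace(-10.0, 10.0, 21)
H_num = np.array([transfer(wk) for wk in w])
H_exact = 1.0 / (1.0 + 1j * w)       # closed form for this particular h
print(np.max(np.abs(H_num - H_exact)))   # small quadrature/truncation error
```

Since the integration runs only over $\tau \ge 0$, every integrand $\mathop{\rm e}\nolimits^{-i w \tau}$ extends analytically into the lower half $w$-plane, which is the analyticity property exploited by the dispersion relations below.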
Very often it is assumed that $H(w)$ is square integrable \cite{Dienstfrey_Greengard_2001, Knockaert_Dhaene_2008}, i.e. $\int_{0}^\infty |H(w)|^2 dw < C$ for some constant $C$. Then one can use Parseval's theorem to show that $h(t)$ is also square integrable \cite{Nussenzveig_1972}. The converse also holds \cite{Dym_McKean_1985}. Square integrability of $H(w)$ is often related with the requirement that the total energy of the system is finite. Starting from Cauchy's theorem and using contour integration, one can show \cite{Nussenzveig_1972} that for any point $w$ on the real axis, $H(w)$ can be written as \begin{equation} \label{1_6_7} H(w)=\frac{1}{\pi i}\Xint-_{-\infty}^\infty \frac{H(w')}{w-w'}dw', \quad \mbox{real} \ w, \end{equation} where \[ \Xint-_{-\infty}^\infty=P\int_{-\infty}^\infty = \lim_{\epsilon\to 0}\left(\int_{-\infty}^{w-\epsilon}+\int_{w+\epsilon}^{\infty}\right) \] denotes Cauchy's principal value. Separating the real and imaginary parts of (\ref{1_6_7}), we get \begin{equation} \label{1_6_10} \mathop{\rm Re}\nolimits H(w)=\frac{1}{\pi}\Xint-_{-\infty}^\infty \frac{\mathop{\rm Im}\nolimits H(w')}{w-w'}dw', \end{equation} \begin{equation} \label{1_6_11} \mathop{\rm Im}\nolimits H(w)=-\frac{1}{\pi}\Xint-_{-\infty}^\infty \frac{\mathop{\rm Re}\nolimits H(w')}{w-w'}dw'. \end{equation} These expressions relating $\mathop{\rm Re}\nolimits H$ and $\mathop{\rm Im}\nolimits H$ are called the dispersion relations or Kramers-Kr\"onig relations after Kr\"onig \cite{Kronig_1926} and Kramers \cite{Kramers_1927} who derived the first known dispersion relation for a causal system of a dispersive medium. In mathematics the dispersion relations (\ref{1_6_10}), (\ref{1_6_11}) are also known as the Sokhotski--Plemelj formulas. These formulas show that $\mathop{\rm Re}\nolimits H$ at one frequency is related to $\mathop{\rm Im}\nolimits H$ for all frequencies, and vice versa. 
If either $\mathop{\rm Re}\nolimits H$ or $\mathop{\rm Im}\nolimits H$ is chosen as an arbitrary square integrable function, the other one is completely determined by causality. Recalling that the Hilbert transform is defined by \[ {\mathcal H}[u(w)]=\frac{1}{\pi}\Xint-_{-\infty}^\infty \frac{u(w')}{w-w'} dw', \] we see that $\mathop{\rm Re}\nolimits H$ and $\mathop{\rm Im}\nolimits H$ are Hilbert transforms of each other, i.e. \[ \mathop{\rm Re}\nolimits H(w)={\mathcal H}[\mathop{\rm Im}\nolimits H(w)], \quad \mathop{\rm Im}\nolimits H(w)=-{\mathcal H}[\mathop{\rm Re}\nolimits H(w)]. \] For example, the function $H(w)=\frac{1}{w-i}$ is clearly square integrable and satisfies the dispersion relations (\ref{1_6_10}), (\ref{1_6_11}), which can be verified by contour integration. An example of a function $H(w)$ that is not square integrable but satisfies the Kramers-Kr\"onig dispersion relations (\ref{1_6_10}), (\ref{1_6_11}) is provided by $H(w)=\mathop{\rm e}\nolimits^{-iaw}$, $a>0$. The real and imaginary parts are $\cos(aw)$ and $-\sin(aw)$, and the dispersion relations (\ref{1_6_10}), (\ref{1_6_11}) can be verified by noting that ${\mathcal H}[\cos(aw)]=\sin(aw)$ and ${\mathcal H}[\sin(aw)]=-\cos(aw)$. In practice, the function $H(w)$ may not satisfy the assumption of square integrability: it may only be bounded, or may even behave like $O(w^n)$ as $|w|\to\infty$, $n=0,1,2,\ldots$. In such cases, instead of the dispersion relations (\ref{1_6_10}), (\ref{1_6_11}), one can use generalized dispersion relations with subtractions, in which a square integrable function is constructed by subtracting a Taylor polynomial of $H(w)$ around $w=w_0$ from $H(w)$ and dividing the result by $(w-w_0)^n$. This approach makes the integrand in the generalized dispersion relations less dependent on the high-frequency behavior of $H(w)$.
This is very beneficial when the high-frequency behavior of $H(w)$ is not known with sufficient accuracy, or is not accessible in practice at all because only finite bandwidth data are available. The technique was proposed in \cite{Beltrami_Wohlers_1966, Nussenzveig_1972} and implemented successfully in \cite{Triverio_Grivet_Talocia_2006, Triverio_Grivet_Talocia_2006_2, Triverio_Grivet_Talocia_2008} to reduce the sensitivity of the Kramers-Kr\"onig dispersion relations (\ref{1_6_10}), (\ref{1_6_11}) to the high-frequency data. In this paper, we take an alternative approach motivated by the example of the periodic function $H(w)=\mathop{\rm e}\nolimits^{-iaw}$, $a>0$, mentioned above, which is not square integrable but still satisfies the Kramers-Kr\"onig dispersion relations (\ref{1_6_10}), (\ref{1_6_11}). In practice, the transfer function $H(w)$ is typically known only over a finite frequency interval at a limited number of discrete values, and it is not periodic in general. Direct application of the dispersion relations (\ref{1_6_10}), (\ref{1_6_11}) produces large errors in the boundary regions, mainly because the high-frequency behavior of $H(w)$ is missing, unless the data decay to zero at the boundary. To overcome this problem, we construct a spectrally accurate causal periodic continuation of $H(w)$ in an extended domain. A method for constructing a periodic continuation, also known as Fourier continuation or Fourier extension, which is based on a regularized singular value decomposition (SVD), was recently proposed in \cite{Boyd_2002, Bruno_2003, Bruno_Han_Pohlman_2007, Lyon_2012} (see also references therein). This method allows one to calculate Fourier series approximations of non-periodic functions such that the Fourier series is periodic in an extended domain. Causality can be imposed directly on the Fourier coefficients, producing a causal Fourier continuation that satisfies causality exactly.
The Fourier coefficients are determined by solving an overdetermined and regularized least squares problem, since the system suffers from numerical ill-conditioning. The resulting causal Fourier continuation is then compared with the given discrete data on the original bandwidth of interest. A decision about causality of the given data is made using the error estimates developed in Section \ref{error_analysis}. In the next section we provide details of the derivation of causal Fourier continuations. \section{Causal Fourier Continuation} \label{Fourier_continuation} Consider a transfer function $H(w)$ available at a set of discrete frequencies in $[w_{min},w_{max}]$, where $w_{min}\geq 0$. First, let $w_{min}=0$, so we have the baseband case. Since equations (\ref{1_6_10}), (\ref{1_6_11}) are homogeneous in the frequency variable, we can rescale $[0,w_{max}]$ to $[0,0.5]$ using the transformation $x=\frac{0.5}{w_{max}}w$ for convenience, to get a rescaled transfer function $H(x)$. The time domain impulse response function $h(t)$ is often real-valued. Hence, the real and imaginary parts of $H(w)$, the Fourier transform of $h(t)$, and thus of $H(x)$, are even and odd functions, respectively. This implies that the discrete set of rescaled frequency responses $H(x)$ is available on the unit length interval $x\in[-0.5,0.5]$ by spectrum symmetry. In some cases, the data are available only from a non-zero, low-frequency cutoff $w_{min}>0$, which corresponds to the bandpass case. The proposed procedure is still applicable, since it does not require data points to be equally spaced. The transmission line example \ref{transmission_line_example} considers such a situation. The idea is to construct an accurate Fourier series approximation of $H(x)$ by allowing the Fourier series to be periodic and causal in an extended domain.
The result is the Fourier continuation of $H$ that we denote by ${\mathcal C}(H)$, and it is defined by \begin{equation}\label{E1} {\mathcal C}(H)(x)=\sum_{k=-M+1}^{M} \alpha_k \mathop{\rm e}\nolimits^{-\frac{2\pi i}{b} k x}, \end{equation} for an even number $2M$ of terms, whereas for an odd number $2M+1$ of terms the index $k$ varies from $-M$ to $M$. Throughout this paper we will consider Fourier series with an even number of terms for simplicity. All presented results have analogues for Fourier series with an odd number of terms. Here $b$ is the period of the approximation. For SVD-based periodic continuations $b$ is normally chosen as twice the length of the domain on which the function $H$ is given \cite{Bruno_Han_Pohlman_2007}. The value $b=2$ is not necessarily optimal, and it is shown in \cite{Lyon_2012a} to depend on the function being approximated. For causal Fourier continuations we also find that the optimal value of $b$ depends on $H$. In practice, $b$ can be varied in $1<b\leq 4$ to improve the performance of the method. For very smooth functions, it is better to use a wider extension zone with $b\geq 2$; for example, $b=2$ or $b=4$ was enough in most of our examples. However, for functions that are wildly oscillatory or have high gradients in the boundary regions of the domain where the original data are available, a smaller extension zone with $1<b<2$ is recommended \cite{Boyd_2002}. We used $b=1.1$ in one of our examples. Assume that the values of the function $H(x)$ are known at $N$ discretization or collocation points $\{x_j\}$, $j=1,\ldots,N$, $x_j\in[-0.5,0.5]$. Note that ${\mathcal C}(H)(x)$ is a trigonometric polynomial of degree at most $M$. Since $\mathop{\rm Re}\nolimits H(x)$ and $\mathop{\rm Im}\nolimits H(x)$ are even and odd functions of $x$, respectively, the Fourier coefficients \[ \alpha_k=\frac{1}{b}\int_{-b/2}^{b/2} H(x) \overline{\phi_k(x)}dx, \quad k=1,\ldots, M, \] are real.
Here $\phi_{k}(x)=\mathop{\rm e}\nolimits^{-\frac{2\pi i }{b}k x}$, $k\in\hbox{\bb Z}$, and $\bar{ \ }$ denotes the complex conjugate. The functions $\{\phi_{k}(x)\}$ form a complete orthogonal basis in $L_2[-\frac b2, \frac b2]$, and, in particular, \begin{equation} \label{Kronecker_delta_basis} \int_{-b/2}^{b/2} \phi_k(x)\overline{\phi_{k'}(x)} dx = b \,\delta_{kk'}, \end{equation} where $\delta_{kk'}$ is the Kronecker delta. In addition, $\overline{\phi_k}(x)=\mathop{\rm e}\nolimits^{\frac{2\pi i }{b}k x}=\phi_{-k}(x)$. For a function $\mathop{\rm e}\nolimits^{-iax}$, the Hilbert transform is $ {\mathcal H}\{\mathop{\rm e}\nolimits^{-iax}\}= i\mathop{\rm sgn}\nolimits(a) \mathop{\rm e}\nolimits^{-iax} $. Hence, \begin{equation}\label{E2} {\mathcal H}\{\phi_k(x)\}= i\mathop{\rm sgn}\nolimits(k) \phi_k(x), \end{equation} which implies that the functions $\{\phi_k(x)\}$, $x\in[-\frac b2,\frac b2]$, are eigenfunctions of the Hilbert transform ${\mathcal H}$ with associated eigenvalues $i\mathop{\rm sgn}\nolimits(k)=\pm i$ for $k\neq 0$. We will use the relations (\ref{E2}) to impose a causality condition on the coefficients of ${\mathcal C}(H)(x)$, similarly to what was done in \cite{Knockaert_Dhaene_2008} for the case of square integrable $H(w)$, where the idea of projecting on the eigenfunctions of the Hilbert transform in $L_2(\hbox{\bb R})$ \cite{Weideman_1995} was used. In the present work, square integrability of $H(w)$ is not required, and more general transfer functions than in \cite{Knockaert_Dhaene_2008} can be considered. For convenience of derivation, let us write ${\mathcal C}(H)(x)$ as a Fourier series $ {\mathcal C}(H)(x)=\sum_{k=-\infty}^{\infty} \alpha_k \phi_k (x) $, which will be truncated at the end to get a Fourier continuation in the form (\ref{E1}). Let ${\mathcal C}(H)(x)=\mathop{\rm Re}\nolimits{\mathcal C}(H)(x) + i \mathop{\rm Im}\nolimits {\mathcal C}(H)(x)$ and $\phi_k(x) =\mathop{\rm Re}\nolimits \phi_k(x)+i \mathop{\rm Im}\nolimits \phi_k(x)$.
Since \[ \mathop{\rm Re}\nolimits \phi_k=\frac 12 (\phi_k+\overline{\phi}_k), \quad \mathop{\rm Im}\nolimits \phi_k=\frac{1}{2i} (\phi_k-\overline{\phi}_k) \] we obtain \[ \mathop{\rm Re}\nolimits{\mathcal C}(H)(x)=\sum_{k=-\infty}^{\infty} \alpha_k \mathop{\rm Re}\nolimits \phi_k=\frac 12 \sum_{k=-\infty}^{\infty} \alpha_k (\phi_k+\overline{\phi}_k) \] and, since $\overline{\phi_k}=\phi_{-k}$, we have \[ \mathop{\rm Re}\nolimits{\mathcal C}(H)(x)=\frac 12 \sum_{k=-\infty}^{\infty} \alpha_k (\phi_k+\phi_{-k}) = \frac 12 \sum_{k=-\infty}^{\infty} (\alpha_k +\alpha_{-k}) \phi_k, \] where in the last sum we replaced the summation index $k$ by $-k$ in the second term. Similarly, we can show that \[ \mathop{\rm Im}\nolimits{\mathcal C}(H)(x)=\frac {1}{2i} \sum_{k=-\infty}^{\infty} (\alpha_k -\alpha_{-k}) \phi_k. \] For a causal periodic continuation, we need $\mathop{\rm Im}\nolimits{\mathcal C}(H)(x)$ to be the Hilbert transform of $-\mathop{\rm Re}\nolimits{\mathcal C}(H)(x)$. Hence, \[ \frac {1}{2i} \sum_{k=-\infty}^{\infty} (\alpha_k -\alpha_{-k}) \phi_k= -{\mathcal H}\left[\frac 12 \sum_{k=-\infty}^{\infty} (\alpha_k +\alpha_{-k}) \phi_k \right]. \] Employing linearity of the Hilbert transform, we get \[ \frac {1}{2i} \sum_{k=-\infty}^{\infty} (\alpha_k -\alpha_{-k}) \phi_k= -\frac 12 \sum_{k=-\infty}^{\infty} (\alpha_k +\alpha_{-k}) {\mathcal H}[\phi_k]. \] Using (\ref{E2}), we obtain \[ \frac {1}{2i} (\alpha_k -\alpha_{-k}) =-\frac 12 (\alpha_k +\alpha_{-k}) i \mathop{\rm sgn}\nolimits(k) \quad \mbox{for any} \ k\in\hbox{\bb Z} \] or \[ \alpha_k-\alpha_{-k} = (\alpha_k +\alpha_{-k}) \mathop{\rm sgn}\nolimits(k), \quad k\in\hbox{\bb Z}, \] which implies $\alpha_{k}=0$ for $k\leq 0$ in (\ref{E1}). Hence, a causal Fourier continuation has the form \begin{equation} \label{E3_0} {\mathcal C}(H)(x)=\sum_{k=1}^{M} \alpha_k \phi_k(x), \end{equation} where we truncated the infinite sum to obtain a trigonometric polynomial.
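The eigenrelation (\ref{E2}) that drives this derivation can be checked numerically. The following sketch is our own illustration, not from the paper: it uses the circular Hilbert transform on one period, computed with the FFT. For mean-zero periodic functions this agrees with the Hilbert transform on the real line, and with the convention ${\mathcal H}[\cos]=\sin$ each Fourier mode $\mathop{\rm e}\nolimits^{in\theta}$ is multiplied by $-i\mathop{\rm sgn}\nolimits(n)$:

```python
import numpy as np

# Our own check of the eigenrelation H{phi_k} = i*sgn(k)*phi_k on one period
# (theta = 2*pi*x/b, so phi_k corresponds to e^{-i*k*theta}).  The circular
# Hilbert transform multiplies each FFT mode by -i*sgn(frequency).
n = 256
theta = 2.0 * np.pi * np.arange(n) / n

def circular_hilbert(x):
    mult = -1j * np.sign(np.fft.fftfreq(n))
    return np.fft.ifft(np.fft.fft(x) * mult)

for k in [1, 3, 7]:
    phi_k = np.exp(-1j * k * theta)
    # apply the (real-linear) transform to the real and imaginary parts
    h_phi = circular_hilbert(phi_k.real) + 1j * circular_hilbert(phi_k.imag)
    assert np.allclose(h_phi, 1j * phi_k, atol=1e-10)   # eigenvalue i for k > 0
```

In particular, one recovers ${\mathcal H}[\cos(k\theta)]=\sin(k\theta)$ and ${\mathcal H}[\sin(k\theta)]=-\cos(k\theta)$, consistent with the pairs used earlier.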
Evaluating $H(x)$ at the points $x_j$, $j=1,\ldots,N$, $x_j\in[-0.5, 0.5]$, produces a complex-valued system \begin{equation} \label{E3} {\mathcal C}(H)(x_j) =\sum_{k=1}^{M} \alpha_k \phi_k(x_j) \end{equation} with $N$ equations for $M$ unknowns $\alpha_{k}$, $k=1,\ldots,M$, $N\geq M$. If $N>M$, the system (\ref{E3}) is overdetermined and has to be solved in the least squares sense. Once the Fourier coefficients $\alpha_k$ are computed, formula (\ref{E3_0}) provides a reconstruction of $H(x)$ on $[-0.5, 0.5]$. To ensure that the numerically computed Fourier coefficients $\alpha_k$ are real, instead of solving the complex-valued system (\ref{E3}), one can separate the real and imaginary parts of ${\mathcal C}(H)(x_j)$ to obtain the real-valued system \begin{equation} \label{E4} \begin{array}{l} \displaystyle \phantom{-} \mathop{\rm Re}\nolimits{\mathcal C}(H)(x_j)=\sum_{k=1}^{M} \alpha_{k} \mathop{\rm Re}\nolimits\phi_k(x_j), \\[13pt] \displaystyle \hspace{10pt} \mathop{\rm Im}\nolimits{\mathcal C}(H)(x_j)=\sum_{k=1}^{M} \alpha_{k} \mathop{\rm Im}\nolimits\phi_k(x_j). \end{array} \end{equation} This produces $2N$ equations ($N$ equations each for the real and imaginary parts) with $M$ unknowns $\alpha_{k}$. We show below that both the complex (\ref{E3}) and real (\ref{E4}) formulations give reconstruction errors of the same order, with the real formulation performing slightly better. To distinguish between the continuations ${\mathcal C}(H)$ computed using the complex and real formulations, we will use the notation ${\mathcal C}^C(H)$ and ${\mathcal C}^R(H)$, respectively. Consider the real formulation (\ref{E4}) and introduce the following notation.
Let $\vec f=\bigl(\mathop{\rm Re}\nolimits H(x_1), \ldots, \mathop{\rm Re}\nolimits H(x_N), \mathop{\rm Im}\nolimits H(x_1), \ldots, \mathop{\rm Im}\nolimits H(x_N)\bigr)^T$, $\vec\alpha=\bigl(\alpha_{1},\ldots,\alpha_{M}\bigr)^T$, where ${}^T$ denotes the transpose, and let the matrix $A$ have entries \[ \displaystyle A_{jk} =\mathop{\rm Re}\nolimits\{\mathop{\rm e}\nolimits^{-\frac{2\pi i}{b}k x_j}\}, \ j=1,\ldots,N, \ k=1,\ldots, M, \] \[ A_{(j+N),k} =\mathop{\rm Im}\nolimits\{\mathop{\rm e}\nolimits^{-\frac{2\pi i}{b}k x_j}\}, \ j=1,\ldots,N, \ k=1,\ldots, M. \] Similar notation can be introduced for the complex formulation (\ref{E3}). Then the coefficients $\alpha_{k}$, $k=1,\ldots, M$, are defined as a least squares solution of $ A\vec\alpha=\vec f $ written as \[ \min_{\{\alpha_k\}} \sum_{j=1}^{2N} \left| \sum_{k=1}^{M} \alpha_{k} A_{jk}-f_j\right|^2, \] which minimizes the Euclidean norm of the residual. This least squares problem is extremely ill-conditioned, as explained in \cite{Huybrechs_2010} using the theory of frames. However, it can be regularized using a truncated SVD method, in which singular values below some cutoff tolerance $\xi$ close to machine precision are discarded. In this work we use $\xi=10^{-13}$ as the threshold to filter the singular values. The ill-conditioning increases as $M$ increases, and manifests itself in rapid oscillations in the extended region. These oscillations are typical for SVD-based Fourier continuations. Once the system reaches a critical size, which does not depend on the function being approximated, the coefficient matrix becomes rank deficient and regularization of the SVD is required to treat singular values close to machine precision. Because of the rank deficiency, the Fourier continuation is no longer unique. Applying the truncated SVD method produces the minimum norm solution $\{\alpha_k\}$, $k=1,\ldots, M$, for which the corresponding Fourier continuation is oscillatory.
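As a concrete illustration, the real formulation (\ref{E4}) and its truncated-SVD least squares solution can be sketched in a few lines. This is our own minimal sketch, not the authors' implementation; the data are built from a known causal Fourier series, and all parameter values are chosen for illustration only. Note that \texttt{numpy.linalg.lstsq} plays the role of the truncated SVD solve, with \texttt{rcond} acting as the cutoff tolerance $\xi$ (relative to the largest singular value):

```python
import numpy as np

# Minimal sketch of the real formulation (E4): equally spaced points,
# synthetic data from a known causal Fourier series, truncated-SVD solve.
b, M = 2.0, 20
N = 2 * M                                  # overcollocation, N = 2M
x = np.linspace(-0.5, 0.5, N)
k = np.arange(1, M + 1)

alpha_true = np.zeros(M)
alpha_true[2], alpha_true[4] = 1.0, -0.5   # causal test function
Phi = np.exp(-2j * np.pi * np.outer(x, k) / b)   # phi_k(x_j)
H = Phi @ alpha_true

phase = -2.0 * np.pi * np.outer(x, k) / b
A = np.vstack([np.cos(phase), np.sin(phase)])    # Re and Im rows of (E4)
f = np.concatenate([H.real, H.imag])
alpha, *_ = np.linalg.lstsq(A, f, rcond=1e-13)   # truncated-SVD least squares

C_H = Phi @ alpha                                # reconstruction (E3_0)
assert np.max(np.abs(C_H - H)) < 1e-8
```

Since the synthetic data lie exactly in the span of the causal modes, the residual at the collocation points is at the level of the SVD cutoff.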
The oscillations in the extended region do not significantly affect the quality of the causal Fourier continuation on the original domain, and varying $b$ can minimize their effect and decrease the overall reconstruction error, especially in the boundary regions. Another way to mitigate the ill-conditioning of the matrix problems (\ref{E3}) or (\ref{E4}) is to use more data (collocation) points $N$ than Fourier coefficients $M$. This is called ``overcollocation'' \cite{Boyd_2002}; it makes the problem more overdetermined and helps to increase the accuracy of solutions. It is recommended to use at least twice as many collocation points $N$ as Fourier coefficients $M$, i.e. $N\geq 2M$. The convergence can be checked by keeping the number of Fourier coefficients $M$ fixed and increasing the number of collocation points $N$. Overcollocation also helps with filtering out trigonometric interpolants that have very small errors at the collocation points $x_j$ but large oscillations between the collocation points \cite{Boyd_2002}. In all our examples we use at least $N=2M$ as an effective way to obtain an accurate and reliable approximation of $H(x)$ over the interval $[-0.5, 0.5]$. In the multidimensional case, when a transfer matrix ${\mathbf H(w)}$ is given, the above procedure can be extended to all elements of the matrix. Computing the SVD is an expensive numerical procedure for large matrices. The operation count to find a least squares solution of $Ax=b$ using the SVD method, with $A$ an $N\times M$, $N\geq M$, matrix, is of the order $O(NM^2+M^3)$ \cite{LAPACK_Users_Guide}. The actual CPU times for computing the SVD, solving a linear system in the least squares sense and constructing causal Fourier continuations for the various values of $M$ used in this work are shown in Table \ref{T_CPU} in Section \ref{examples}.
Computational savings can be achieved by noting that in our problem the matrix $A$ depends only on the locations of the frequency points at which the transfer matrix is evaluated or available, and on the continuation parameters $M$ and $b$, but does not depend on the actual values of ${\mathbf H(w)}$. Having frequency responses at $N$ points, we can fix $N=2M$, choose $b=2$ as a default value and compute the SVD only once prior to verifying causality. Varying $1<b\leq 4$ or $2M<N$ for each element of ${\mathbf H(w)}$ separately, if needed, would require recomputing the SVD. \section{Error Analysis} \label{error_analysis} In this section, we provide an upper bound for the error in the approximation of a given function $H(x)$ by its causal Fourier continuation ${\mathcal C}(H)(x)$. The convergence analysis of the Fourier continuation technique based on the truncated SVD method was carried out by Lyon \cite{Lyon_2012a} using a split real formulation. In this work, we extend his results to causal Fourier continuations. The obtained error estimates can be employed to characterize causality of a given set of data. \subsection{Error estimates} Denote by $\hat H_{M}$ any function of the form \begin{equation}\label{Fourier_series_arbitrary} \hat H_{M}(x)=\sum_{k=1}^{M} \hat\alpha_k \phi_k(x) \end{equation} where $\phi_k(x)=\mathop{\rm e}\nolimits^{-\frac{2\pi i}{b}kx}$, $k=1,\ldots,M$, as before. Let $A=U \Sigma V^*$ be the full SVD \cite{Trefethen_Bau_1997} of the matrix $A$ with entries $A_{jk}=\phi_k(x_j)$, $j=1,\ldots,N$, $k=1,\ldots, M$, where $U$ is an $N\times N$ unitary matrix, $\Sigma$ is an $N\times M$ diagonal matrix of singular values $\sigma_j$, $j=1,\ldots,p$, $p=\min({N,M})$, $V$ is an $M \times M$ unitary matrix with entries $V_{kj}$, and $V^*$ denotes the complex conjugate transpose of $V$. We can prove the following result.
\begin{theorem} Consider a rescaled transfer function $H(x)$ defined by symmetry on $\Omega=[-0.5,-a]\cup[a, 0.5]$, where $a=0.5\frac{w_{min}}{w_{max}}$, whose values are available at the points $x_j\in \Omega$, $j=1,\ldots,N$. Then the error in the approximation of $H(x)$, which is known with some error $\varepsilon$, by its causal Fourier continuation ${\mathcal C}(H)(x)$ defined in (\ref{E3_0}) on a wider domain $\Omega^c=[-b/2,b/2]$, $b\geq 1$, has the upper bound \[ ||H- {\mathcal C}(H+\varepsilon) ||_{L_2(\Omega)} \leq (1+\Lambda_2 \sqrt{N(M-K)}) \] \begin{equation}\label{combined_bound_noise} \times \left(|| H-\hat H_{M} || _{L_\infty(\Omega)} + ||\varepsilon||_{L_\infty(\Omega)} \right) +\Lambda_1 \sqrt{K/b} ||\hat H_{M}||_{L_\infty(\Omega^c)}, \end{equation} which holds for all functions of the form (\ref{Fourier_series_arbitrary}). Here \begin{equation}\label{lambda12} \Lambda_1=\max_{j:\ \sigma_j<\xi}|| v_j(x) ||_{L_2(\Omega)}, \quad \Lambda_2=\max_{j:\ \sigma_j>\xi}\frac{||v_j(x) ||_{L_2(\Omega)}}{\sigma_j}, \end{equation} and the functions $v_j(x)=\sum_{k=1}^{M} V_{kj} \phi_k(x)$ are each a causal Fourier series with up to $M$ terms, with coefficients given by the $j$th column of $V$; $K$ denotes the number of singular values that are discarded, i.e. the number of $j$ for which $\sigma_j<\xi$, where $\xi$ is the cut-off tolerance. \end{theorem} \underline{Proof}. To obtain the error bound (\ref{combined_bound_noise}) we use ideas from \cite{Lyon_2012a}, but employ a complex formulation and impose causality on the Fourier coefficients. The error bound for $||H- {\mathcal C}(H) ||_{L_2(\Omega)} $ is expressed in terms of the error $||H- \hat H_{M}||_{L_\infty(\Omega)} $ in approximating the function by a causal Fourier series and of $||\hat H_{M}||_{L_\infty(\Omega^c)} $, for any given causal Fourier series $\hat H_{M}$.
This requires finding upper bounds for $||{\mathcal C}(H-\hat H_{M}) ||_{L_2(\Omega)} $ and $||\hat H_{M}- {\mathcal C}(\hat H_{M})||_{L_2(\Omega)} $, which estimate the error due to the truncation of singular values and the effect of the Fourier continuation on the error in approximating a function by a causal Fourier series. If the function $H$ is known with some error $\varepsilon$, its effect is also included in a straightforward way. The bound for $||H- \hat H_{M}||_{L_\infty(\Omega)} $ follows from Jackson's theorems \cite{Cheney_2000}, which cover the error in approximating a periodic function with its causal $M$ mode Fourier series $\hat H_{M}$ as a special case. Indeed, a causal $M$ mode Fourier series can be considered as a $2M$ mode Fourier series whose coefficients with nonpositive indices are zero. Hence, the error in approximating a $b$-periodic function $H$ with $k$ continuous derivatives by a causal $M$ mode Fourier series has the following upper bound: \begin{equation}\label{Fourier_error} || H-\hat H_{M} || _{L_\infty(\Omega)}\leq \frac\pi 2 \left(\frac b \pi \right)^k \left(\frac {1}{2M}\right)^k ||H^{(k)}||_{L_\infty(\Omega^c)}. \end{equation} The left and right singular vectors that form the columns of the matrices $U$ and $V$ are used in the derivation of the error estimates (\ref{combined_bound_noise}), (\ref{lambda12}) as alternatives to the Fourier basis. $\square$ As can be seen from (\ref{lambda12}), $\Lambda_1$, $\Lambda_2$ and $K$ depend only on the continuation parameters $N$, $M$, $b$ and $\xi$, as well as on the locations of the discrete points $x_j$, and not on the function $H$. The behavior of $\Lambda_1 \sqrt{K/b}$ and $\Lambda_2 \sqrt{N(M-K)}$ as functions of $M$ can be investigated using direct numerical calculations. We do this for the case of equally spaced points $x_j$, $j=1,\ldots,N$, which is typical in applications, and use $N=2M$ with $b=2$.
Other distributions of the points $x_j$, values of $b$ and relations between $M$ and $N$ can be analyzed in a similar manner. For example, the results for $b=4$ are very similar to the case with $b=2$. We find that while $\Lambda_1$ does not change much with $M$ and remains small, the coefficient $\Lambda_1 \sqrt{K/b}$ stays close to the cut-off value $\xi$ for small $M$ and increases at most to $10\xi$ for large $M$. This behavior does not seem to depend on the cut-off value $\xi$, and the results are similar for $\xi$ varying from $10^{-13}$ to $10^{-7}$. The number $K$ of discarded singular values grows with $M$ (provided that the singular values above $\xi$ are computed accurately), since the ill-conditioning of the problem increases with $M$. The values of $\Lambda_2$ remain close to $1$ for the values of $M$ we considered. At the same time, the coefficient $\Lambda_2 \sqrt{N(M-K)}$ grows approximately as $\sqrt{M}$ as $M$ increases. Since the error (\ref{Fourier_error}) in the approximation of a function by a causal Fourier series decays as ${\mathcal O}(M^{-k})$ and the coefficient $\Lambda_2 \sqrt{N(M-K)}$ grows at most as ${\mathcal O}(M)$, the part of the error bound that is due to the causal Fourier series approximation \[ \epsilon_F\equiv (1+\Lambda_2 \sqrt{N(M-K)}) || H-\hat H_{M} || _{L_\infty(\Omega)} \] decays at least as fast as ${\mathcal O}(M^{-k+1})$. For comparison, the analogous error bound term for Fourier continuations reported in \cite{Lyon_2012a} is of the order of ${\mathcal O}(M^{-k+1/2})$, i.e. a causal Fourier series converges slightly slower than a standard Fourier series. In practice, the smoothness order $k$ of the transfer function $H(x)$ may not be known. In this case, it can be estimated by noting that the error bound $\epsilon_F$ can be written as \begin{equation} \label{error_epsilon1} \epsilon_F\sim \tilde C M^{-k+1}.
\end{equation} Taking the natural logarithm of both sides, we get \begin{equation}\label{error_ln} \ln \epsilon_F \sim \ln\tilde C+(-k+1)\ln M, \end{equation} i.e. $\ln \epsilon_F$ is approximately a linear function of $\ln M$. The values of $\tilde C$ and $k$ can be estimated as follows. Assume that $H$ is known at $N$ frequency points. Usually the number of frequency responses is fixed, and it may not be possible to get data with higher resolution. Assume that the errors due to the truncation of singular values (the term with $\Lambda_1$) and noise in the data (the term with $||\varepsilon ||_{L_\infty(\Omega)}$) are small, so that the error due to the causal Fourier series approximation is dominant. Let $E_R(x)$ and $E_I(x)$ be the reconstruction errors \begin{equation} \label{error_Re} E_R(x)=\mathop{\rm Re}\nolimits H(x) - \mathop{\rm Re}\nolimits{\mathcal C}(H)(x), \end{equation} \begin{equation}\label{error_Im} E_I(x)=\mathop{\rm Im}\nolimits H(x) - \mathop{\rm Im}\nolimits{\mathcal C}(H)(x) \end{equation} on the original interval $[-0.5,0.5]$. Compute $E_R(x)$ and $E_I(x)$ with $N$, $N/2$, $N/4$ etc. samples, i.e. with $M$, $M/2$, $M/4$ etc. Fourier coefficients. Solve equation (\ref{error_ln}) in the least squares sense to find approximations of $\ln\tilde C$ and $-k+1$, and, hence, of $\tilde C$ and $k$. Then the error term $\epsilon_F$ can be extrapolated to higher values of $M$ using (\ref{error_epsilon1}) to see if the causal Fourier series approximation error decreases as the number $M$ of Fourier coefficients, i.e. the resolution, increases. The error bound term \begin{equation}\label{error_truncation} \epsilon_T=\Lambda_1 \sqrt{K/b} ||\hat H_{M}||_{L_\infty(\Omega^c)}, \end{equation} which is due to the truncation of singular values, is typically small: close to the cut-off value $\xi$ for small $M< 250$ and at most $10\xi$ for $250\leq M\leq 1500$. As can be seen from (\ref{error_truncation}), $\epsilon_T$ depends on $b$ and on the function $H$ being approximated.
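The estimation of $\tilde C$ and $k$ via a least squares fit of (\ref{error_ln}) can be sketched as follows. The data below are synthetic, generated exactly from the model $\epsilon_F=\tilde C M^{-k+1}$ with $\tilde C=3$ and $k=3.5$ (our own illustration; in practice the measured reconstruction errors at $M$, $M/2$, $M/4$ etc. would be used):

```python
import numpy as np

# Sketch of the smoothness-order estimate: fit ln(eps_F) as a linear
# function of ln(M), per (error_ln).  Synthetic data from the model
# eps_F = C * M**(-k+1) with C = 3 and k = 3.5.
M_vals = np.array([25.0, 50.0, 100.0, 200.0])
eps_F = 3.0 * M_vals ** (-3.5 + 1.0)

slope, intercept = np.polyfit(np.log(M_vals), np.log(eps_F), 1)
k_est, C_est = 1.0 - slope, np.exp(intercept)
assert abs(k_est - 3.5) < 1e-8 and abs(C_est - 3.0) < 1e-8

# extrapolate the fitted model to a finer resolution
eps_extrapolated = C_est * 400.0 ** (-k_est + 1.0)
assert eps_extrapolated < eps_F[-1]
```

A decaying extrapolated value indicates that the causal Fourier series error can still be reduced by increasing the resolution.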
The default value $b=2$ may not provide the smallest error. To find a more optimal value of $b$, a few values in $1<b\leq 4$ may be tried to determine which one gives smaller overall reconstruction errors. In the case of non-causal functions, varying $b$ does not essentially affect the size of the reconstruction errors. The error $\varepsilon$ in the data should be known in practice, since the error in measurements or the accuracy of full wave simulations is typically known. The error bound term due to noise in the data \[ \epsilon_n=(1+\Lambda_2 \sqrt{N(M-K)}) ||\varepsilon||_{L_\infty(\Omega)} \] consists of the norm of the noise amplified by the coefficient $\Lambda_2 \sqrt{N(M-K)}$, which grows as ${\mathcal O}(M)$ as was shown above. In the numerical experiments that we conducted, the reconstruction errors due to noise in the data seem to level off at the order of $\varepsilon$ and are not amplified significantly as the resolution increases. This does not contradict the error estimate (\ref{combined_bound_noise}), (\ref{lambda12}): the error bounds are not tight, and the actual reconstruction errors may be smaller than the error bounds suggest. \subsection{Causality characterization} The error estimate (\ref{combined_bound_noise}), (\ref{lambda12}) shows that the reconstruction errors $E_R(x)$ and $E_I(x)$ defined in (\ref{error_Re}), (\ref{error_Im}) can be dominated either by the error due to approximating the function by its causal Fourier series, which has the upper bound $\epsilon_F$, or by the error $\epsilon$ due to noise or approximation errors in the data, which has the upper bound $\epsilon_n$. If the only errors in the data are round-off errors, then the reconstruction errors will approach or be bounded by the error due to the truncation of singular values, which has the upper bound $\epsilon_T$. The noise level $\epsilon$ may be known. In the case of experimental data, $\epsilon$ could be around $10^{-3}$ or $10^{-4}$, for example.
Data obtained from finite element simulations may be accurate within $10^{-6}$ or $10^{-7}$, for instance, which would correspond to single precision accuracy. In some cases, the expected accuracy may be even higher. If the reconstruction errors are higher than $\epsilon_n$ (in practice, $\epsilon$ can be used), then the reconstruction errors may be dominated by the causal Fourier series approximation error with the upper bound $\epsilon_F$. Determining the smoothness order of $H$ using the model (\ref{error_epsilon1}) with $N$, $N/2$, $N/4$ etc. samples can indicate whether the causal Fourier series approximation error can be made smaller by increasing the resolution of the data. If the model (\ref{error_epsilon1}), extrapolated to higher values of $M$, decays with $M$, the Fourier series approximation error can be decreased by using more frequency samples. In this case we can state that the dispersion relations are satisfied within the error given by $E_R(x)$ and $E_I(x)$, and that causality violations, if present, are at most of the order of the reconstruction errors $E_R(x)$ and $E_I(x)$. With fixed resolution, the method will not be able to detect these smaller causality violations. The noise level $\epsilon$ may not be known, but it can be determined by comparing the reconstruction errors $E_R(x)$ and $E_I(x)$ as the resolution of the data increases, if possible, or decreases, since in practice the reconstruction error due to the noise does not significantly depend on the resolution. This means that as the number $N$ of samples increases, the reconstruction errors level off at the value of the noise $\epsilon$. Equivalently, it may happen that with $N$, $N/2$, $N/4$ etc. samples (as the resolution becomes coarser) the reconstruction error does not increase; instead, it remains at the same order, which would be the noise level $\epsilon$.
In this case, the dispersion relations will be satisfied within the error $\epsilon$, which would be the order of the causality violations. See Example \ref{example_Micron}, a finite element model of a package, where a related situation is discussed. In the next section we apply the proposed causal Fourier continuation based method to several analytic and simulated examples that are causal/non-causal or have imposed causality violations. When a given transfer function $H(x)$ is causal, we expect that the dispersion relations (\ref{1_6_10}), (\ref{1_6_11}) are satisfied with accuracy close to machine precision. This is the so-called ideal causality test. When there is a causality violation, the method is expected to point to the location of the violation or at least develop a reconstruction error close to the order given by the noise in the data, as suggested by the error analysis. We show that the results are in full agreement with the error estimates developed in Section \ref{error_analysis}. \section{Numerical Experiments: Causality Verification} \label{examples} \subsection{Two-pole example} \label{Example1} The two-pole transfer function with no delay \cite{Knockaert_Dhaene_2008} is defined by \begin{equation} \label{two_pole_eqn} H(w)=\frac{r}{iw+s}+\frac{\overline{r}}{iw+\overline{s}} \end{equation} where $r=1+3i$, $s=1+2i$. Since both poles of this function, located at $\pm 2+i$, lie in the upper half plane, the transfer function is causal as a linear combination of causal transforms. We sample data on the interval from $w=0$ to $w_{max}=6$, use the spectrum symmetry to obtain data on $[-w_{max},0)$ and rescale the frequency interval from $[-w_{max}, w_{max}]$ to $[-0.5, 0.5]$. The real and imaginary parts of $H(x)$ are shown in Fig. \ref{F4}. Superimposed are their causal Fourier continuations obtained using $M=250$, $N=1000$, $b=4$ and solving the complex system (\ref{E3}) and its real counterpart (\ref{E4}).
As can be seen, there is no essential difference between the complex and real formulations, though the real formulation (\ref{E4}) is slightly more ill-conditioned than the complex one. The data and the causal Fourier continuations are practically indistinguishable on $[-0.5, 0.5]$. \begin{figure} \caption{$H(x)$ and its Fourier continuations ${\mathcal C}^C(H)$ and ${\mathcal C}^R(H)$ computed using complex (\ref{E3}) and real (\ref{E4}) formulations in example \ref{Example1} with $M=250$, $N=4M$, $b=4$ shown on $[-0.5, 0.5]$.} \label{F4} \end{figure} To demonstrate the nature of the continuations, we plot the same curves as in Fig. \ref{F4} (only the real parts are presented; the imaginary parts have similar features) on the extended domain $[-4,4]$, where we show two periods. These are plotted in Fig. \ref{F3}. It is evident that the continuations oscillate in the extended region outside $[-0.5, 0.5]$. The frequency of these oscillations increases with $M$. At the same time, the Fourier series become more and more accurate in approximating the data on the original interval $[-0.5, 0.5]$. To demonstrate this, we show the reconstruction errors $E_R(x)$, defined in (\ref{error_Re}), in Fig. \ref{F2} (a semilogy plot) on $[-0.5, 0.5]$ for various values of $M$ with $N=4M$, obtained using the real formulation (\ref{E4}). The results for $E_I(x)$ are similar. As $M$ increases from $5$ to $250$, the order of the error decreases from $10^{-1}$ to $10^{-14}$ for both real and imaginary parts. For example, with $M=250$, $N=1000$, $b=4$, both errors $E_R(x)$ and $E_I(x)$ are of the order of $4\times 10^{-14}$. The results indicate that the error is uniform on the entire interval $[-0.5, 0.5]$ and does not exhibit boundary artifacts.
\begin{figure} \caption{The real part of Fourier continuations ${\mathcal C}^C(H)$ and ${\mathcal C}^R(H)$ in example \ref{Example1} with $M=250$, $N=1000$, $b=4$ shown on a wider domain $[-4,4]$ (two periods are shown).} \label{F3} \end{figure} \begin{figure} \caption{Semilogy plot of the errors $E_R(x)$ in example \ref{Example1} on $[-0.5, 0.5]$ with $M=5$, $10$, $25$, $50$, $100$, $250$ and $400$, $N=4M$, $b=4$.} \label{F2} \end{figure} The above results demonstrate that the proposed technique is capable of verifying that the given data are causal with high accuracy. In this case, causality is satisfied with an error less than $10^{-13}$. These results are in agreement with the error estimates (\ref{combined_bound_noise}), (\ref{lambda12}) developed in Section \ref{error_analysis}. Since the data do not have any noise except for round-off errors and the transfer function is smooth on $[-0.5, 0.5]$, the reconstruction errors are dominated by the fast decaying error in approximation of the smooth transfer function with its causal Fourier series for smaller $M$, and then by the error due to truncation of the singular values for high values of $M$, which is close to the cut-off value $\xi=10^{-13}$. We make another observation about the behavior of the errors $E_R(x)$ and $E_I(x)$ as $M$ increases for a causal smooth function. Even for small values of $M$, as $M$ doubles, the errors decrease by several orders of magnitude until they level off around $5\times 10^{-14}$ (see Table \ref{Tcausalerror}). This is a consequence of the fast convergence of a Fourier series for a smooth function.
\begin{table}[h] \begin{center} \begin{tabular}{|c|c|c||c|c|c|} \hline \rule{0cm}{10pt} $N$ & $M$ & $||E_R||_\infty$, $||E_I||_\infty$ & $N$ & $M$ & $||E_R||_\infty$, $||E_I||_\infty$ \\[3pt] \hline \rule{0cm}{10pt} $40$ & $10$ & $\sim 10^{-1}$ & $400$ & $100$ & $\sim 10^{-7}$ \\[3pt] \hline \rule{0cm}{10pt} $100$ & $25$ & $\sim 4\times 10^{-3}$ & $800$ & $200$ &$\sim 2\times 10^{-13}$ \\[3pt] \hline \rule{0cm}{10pt} $200$ & $50$ & $\sim 10^{-4}$ & $1000$ & $250$ & $\sim 5\times10^{-14}$ \\[3pt] \hline \end{tabular} \end{center} \caption{The orders of the errors $E_R(x)$ and $E_I(x)$ in example \ref{Example1}, demonstrating how fast the reconstruction errors decay as the resolution increases in the case of a causal smooth function.} \label{Tcausalerror} \end{table} Next we test how sensitive this method is to causality violations. We do this by imposing a localized non-causal perturbation on causal data. This artificial causality violation is modeled by a Gaussian function \begin{equation} \label{Gauss_pert} a\exp\left(-\frac{(x-x_0)^2}{2\sigma^2}\right) \end{equation} of small amplitude $a$, centered at $x_0$ and added to $\mathop{\rm Re}\nolimits H$, while keeping $\mathop{\rm Im}\nolimits H$ unchanged. This type of non-causal perturbation was used in \cite{Triverio_Grivet_Talocia_2006} to test a causality verification technique based on the generalized dispersion relations. We use the Gaussian centered at $x_0=0.1$ with standard deviation $\sigma=10^{-2}/6$, so its ``support'' is concentrated on a very narrow interval of length $10^{-2}$, outside of which the values of the perturbation are very close to $0$. The advantage of using a Gaussian perturbation is that it can be localized, which allows one to verify whether the proposed method is capable of detecting the location of a causality violation. By varying the amplitude $a$, we can impose larger or smaller causality violations. The smaller the amplitude $a$ that can be detected, the more sensitive the method is to causality violations.
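The perturbation (\ref{Gauss_pert}) can be imposed as in the following Python sketch; the unperturbed data here are a simple stand-in rather than the two-pole function of the experiment.

```python
import numpy as np

# Sketch: impose a localized non-causal perturbation by adding a narrow
# Gaussian bump to Re H only, leaving Im H unchanged.
def gaussian_bump(x, a, x0, sigma):
    return a * np.exp(-(x - x0) ** 2 / (2 * sigma ** 2))

x = np.linspace(-0.5, 0.5, 1001)
H = np.cos(2 * np.pi * x) + 1j * np.sin(2 * np.pi * x)   # stand-in data
a, x0, sigma = 1e-10, 0.1, 1e-2 / 6   # "support" ~ 6*sigma = 1e-2
H_pert = (H.real + gaussian_bump(x, a, x0, sigma)) + 1j * H.imag
```

Because the perturbation enters only the real part, the symmetric extension of the data carries it to $-x_0$ as well, which is why the reconstruction errors later spike at both $\pm x_0$.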
With $a=10^{-10}$, which gives a very small causality violation, the error between the data and its causal Fourier continuation is shown in Fig. \ref{F5}. It is clear that the error has very pronounced spikes at $x=\pm 0.1$ (due to symmetry) that correspond to the exact locations of the Gaussian perturbation. These spikes are of the order $10^{-11}$, whereas on the rest of the interval the error is about $10$ times smaller. For larger perturbations, the results are similar. For example, with $a=10^{-8}$, the error at $\pm 0.1$ is of the order of $10^{-9}$, and on the rest of the interval the error is $10$ times smaller, etc. We can see that the reconstruction error in this case is strongly dominated by the error (perturbation) in the data at the location of the causality violation, and the magnitude of the error is of the same order as the perturbation. At the same time, the transfer function itself is very smooth, which ensures fast convergence of the Fourier series. The results are in perfect agreement with the error estimates (\ref{combined_bound_noise}), (\ref{lambda12}). \begin{figure} \caption{Semilogy plot of $E_R(x)$ and $E_I(x)$ in example \ref{Example1} with Gaussian perturbation (\ref{Gauss_pert}) with $a=10^{-10}$ on $[-0.5, 0.5]$ with $M=250$, $N=1000$, $b=4$.} \label{F5} \end{figure} Another perturbation that we consider is a cosine function $a\cos(20\pi x)$ that we also add to $\mathop{\rm Re}\nolimits H(x)$ while keeping $\mathop{\rm Im}\nolimits H(x)$ unaltered. Adding a non-causal cosine perturbation makes the transfer function non-causal on the entire interval, so higher reconstruction errors are expected everywhere. We find that both errors $E_R(x)$ and $E_I(x)$ oscillate with frequency and amplitude similar to those of the perturbation. For example, with $a=10^{-10}$, these errors are of the order $6\times 10^{-11}$. Increasing/decreasing $a$ increases/decreases the magnitude of the error.
To see how the noise level can be detected, set, for example, $a=10^{-5}$ and compute the reconstruction errors $E_R(x)$ and $E_I(x)$ as the resolution increases. We vary $M$ from $10$ to $250$ and analyze the order of the errors. The results are shown in Table \ref{two_pole_noncausal}. As we can see, the reconstruction errors first decrease as the resolution (the number $N$ of points) and the number $M$ of Fourier coefficients increase, until the error reaches the size of the perturbation, which happens at $M=100$; after that the order of the error does not decrease further and levels off instead. \begin{table}[h] \begin{center} \begin{tabular}{|c|c|c||c|c|c|} \hline \rule{0cm}{10pt} $N$ & $M$ & $||E_R||_\infty$, $||E_I||_\infty$ & $N$ & $M$ & $||E_R||_\infty$, $||E_I||_\infty$ \\[3pt] \hline \rule{0cm}{10pt} $40$ & $10$ & $\sim 10^{-1}$ & $400$ & $100$ & $\sim 7\times 10^{-6}$ \\[3pt] \hline \rule{0cm}{10pt} $100$ & $25$ & $\sim 4\times 10^{-3}$ & $800$ & $200$ &$\sim 7\times 10^{-6}$ \\[3pt] \hline \rule{0cm}{10pt} $200$ & $50$ & $\sim 10^{-4}$ & $1000$ & $250$ & $\sim 7\times10^{-6}$ \\[3pt] \hline \end{tabular} \end{center} \vskip5pt \caption{The orders of the errors $E_R(x)$ and $E_I(x)$ in example \ref{Example1} with non-causal cosine perturbation $a\cos(20\pi x)$ with $a=10^{-5}$ as $M$ and $N$ increase in proportion $N=4M$. } \label{two_pole_noncausal} \end{table} With a smaller perturbation amplitude $a$, the error levels off at a larger value of $M$. For example, with $a=10^{-10}$, the reconstruction errors stop decreasing at $M=200$. \subsection{Finite Element Model of a DRAM package} \label{example_Micron} In this example we use a scattering matrix $S$ generated by Finite Element Modeling (FEM) of a DRAM package (courtesy of Micron). The package contains $110$ input and output ports. The simulation was performed for $100$ equally spaced frequency points ranging from $w_{min}=0$ to $w_{max}=5$ GHz.
We expect the data to be causal but, perhaps, with some error due to the limited accuracy of numerical simulations. For simplicity, we apply our method to the $S$-parameter $S(100,1)$ rather than to the entire $110 \times 110$ $S$-matrix. The selected $S$-parameter $H(w)=S(100,1)$ relates the output signal at port $100$ to the input signal at port $1$ as a function of frequency $w$. Our approach can be extended to the entire $S$-matrix by applying the method to every element of the scattering matrix $S$. The graph of $H(x)$ is shown in Fig. \ref{FMicron}. \begin{figure} \caption{$H(x)$ with its causal Fourier continuations ${\mathcal C}^C(H)$ and ${\mathcal C}^R(H)$ in example \ref{example_Micron} with $M=100$, $N=199$, $b=1.1$.} \label{FMicron} \end{figure} In this test example, the number of samples in $[-0.5, 0.5]$ is fixed at $N=2\cdot 100-1=199$ (by symmetry), so to construct a causal Fourier continuation we use $M=100$ Fourier coefficients and can only vary the length $b$ of the extended region. Because the transfer function has oscillations and high slopes in the boundary region, we have to use a smaller value of $b$. We find $b=1.1$ to be optimal by trying a few different values of $b\in(1,2)$. Slightly higher values of $b$ give similar results, while using too large a $b$ does not produce a small enough error, most likely because the resolution is fixed and more data points would be needed to construct a causal continuation on a larger domain with good enough resolution. The errors $E_R(x)$ with the above parameters are shown in Fig. \ref{FMicron_errors}. The errors $E_I(x)$ have the same order of $10^{-5}$. This implies that the dispersion relations are satisfied within an error on the order of $10^{-5}$. Since the data came from finite element simulations, we expect their accuracy to be on the order of $10^{-6}$ or $10^{-7}$ at least.
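One way to decide whether such errors are dominated by the causal Fourier series approximation term is to fit the error model (\ref{error_epsilon1}) of the form $CM^\alpha$ by least squares in log-log coordinates. The following Python sketch uses synthetic data that mimic the fit quoted later for this example; the routine name is illustrative.

```python
import numpy as np

# Sketch: fit the error model ||E||_2 ~ C * M^alpha through a least
# squares fit in log-log coordinates, using errors measured at a few
# resolutions (e.g. N, (N-1)/2+1, (N-1)/4+1 samples).
def fit_power_law(Ms, errs):
    """Return (C, alpha) minimizing sum (log err - log(C*M^alpha))^2."""
    A = np.vstack([np.ones_like(Ms), np.log(Ms)]).T
    coef, *_ = np.linalg.lstsq(A, np.log(errs), rcond=None)
    return np.exp(coef[0]), coef[1]

Ms = np.array([100.0, 50.0, 25.0])
errs = 11.7 * Ms ** -2.9            # synthetic data following the model
C, alpha = fit_power_law(Ms, errs)
# alpha ~ -3 means the error still decays with M: more frequency samples
# would reduce the causal Fourier series approximation error.
```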
To verify whether the relatively large reconstruction errors come from noise in the data or from the causal Fourier series approximation error, we use data with $N$ samples, then every other and every fourth sample, i.e. with $N$, $\frac{N-1}{2}+1$ and $\frac{N-1}{4}+1$ samples, to fit a model (\ref{error_epsilon1}) of the form $CM^\alpha$ using the least squares method. We find that both $E_R$ and $E_I$ decay approximately as ${\mathcal O}(M^{-3})$. These models were extrapolated to higher values of $M$ as shown in Fig. \ref{FMicron_errors_fit}, where we plot the $l_2$ norms of the actual reconstruction errors and the fitted model curves. The extrapolated error curves indicate that the error may be decreased further if more frequency samples are available. In this case, we say that the transfer function $H(x)$ satisfies the dispersion relations within $10^{-5}$, i.e. the transfer function $H(x)$ is causal within an error of at most $10^{-5}$, and the causality violations, if present, are at most on the order of $10^{-5}$. To determine the actual level of noise in the data, higher resolution of the frequency responses would be needed. For comparison, using the periodic polynomial continuation method \cite{Aboutaleb_Barannyk_Elshabini_Barlow_WMED13, Barannyk_Aboutaleb_Elshabini_Barlow_IMAPS} with an $8$th degree polynomial applied to the transfer function in this example, the error in approximation of $H(x)$ is about $2\times 10^{-3}$, which is two orders of magnitude larger than with the spectral continuation; moreover, that error is not uniform and is largest in the boundary region. \begin{figure} \caption{Errors $E_R(x)$ in approximation of $S(100,1)$ in example \ref{example_Micron} with $N=199$, $M=100$ and $b=1.1$. Errors $E_I(x)$ have the same order.
} \label{FMicron_errors} \end{figure} \begin{figure} \caption{Errors $||E_R(x)||_2$ in approximation of $S(100,1)$ in example \ref{example_Micron} with $N=199$, $99$, $49$ and $M=100$, $50$, $25$, respectively, and $b=1.1$, together with their least squares fit: $||E_R||_2\sim 11.7M^{-2.9}$ and extrapolation for higher values of $M$. For $||E_I(x)||_2$ we find $||E_I||_2\sim 23.5M^{-3.1}$. } \label{FMicron_errors_fit} \end{figure} \subsection{Transmission line example} \label{transmission_line_example} We consider a uniform transmission line segment that has the following per-unit-length parameters: $L=4.73$ nH/cm, $C=3.8$ pF/cm, $R=0.8$ $\Omega$/cm, $G=0$ and length ${\mathcal L}=10$ cm. The frequency is sampled on the interval $(0, 5.0]$ GHz. This example was used in \cite{Triverio_Grivet_Talocia_2006} to analyze causality using generalized dispersion relations with subtractions. The scattering matrix of the structure was computed using the Matlab function {\tt rlgc2s}. Then we consider the element $H(w)=S_{11}(w)$. Due to limitations of the model used in the function {\tt rlgc2s}, we cannot obtain the value of the transfer function at $w=0$ (DC), but we can sample it from any small nonzero frequency. It is typical for systems to have the frequency response missing at low frequencies; this can occur during either measurements or simulations. However, the value of $H(w)$ at $w=0$ is finite, because the magnitude of $S_{11}$ must be bounded by $1$. Hence, we have a bandpass case. Once we choose the number of points and the corresponding $w_{min}>0$, we can sample frequencies from $[w_{min}, w_{max}]$. We use $w_{max}=5.0$ GHz. Using the symmetry conditions, we reflect the values of the transfer function to negative frequencies as in the baseband case considered above. We know that $\mathop{\rm Im}\nolimits H(w)$ equals $0$ at $w=0$, but $\mathop{\rm Re}\nolimits H(w)$ is to be computed.
Since we do not have a value of $\mathop{\rm Re}\nolimits H(w)$ at $w=0$, the frequencies at which the values of the transfer function are available have a gap at $w=0$. Nevertheless, our approach is still applicable, since it does not require the data points to be equally spaced. In this example, however, we get better results with smaller $w_{min}$. Alternatively, we can use a polynomial interpolation to find the value of the real part of $H(w)$ at $w=0$. The value of the imaginary part is $\mathop{\rm Im}\nolimits H(0)=0$ by symmetry. This approach is not very accurate, since it does not take causality into account when the polynomial interpolation of $\mathop{\rm Re}\nolimits H(w)$ is constructed, and it produces a larger error compared with simply skipping the value at $w=0$ and using the spectral continuation approach directly. With our technique and $M=1500$, $N=3000$, $b=4$, we are able to construct a causal Fourier continuation accurate within $3\times 10^{-15}$. The graphs of $\mathop{\rm Re}\nolimits H(w)$ together with its causal Fourier continuations are presented in Fig. \ref{F9}. Agreement for $\mathop{\rm Im}\nolimits H(w)$ is similar. With smaller values of $w_{max}$, it is enough to use smaller values of $M$ and $N$ in the same proportion $2M=N$ to get the same order of accuracy. For example, to get an error in approximation of the transfer function on the original interval of the order of $10^{-14}$ with $w_{max}=3.0$ GHz, it is enough to use $M=750$, $N=1500$, while for $w_{max}=1$ GHz one could use $M=250$, $N=500$. More data points would simply be needed to have the same resolution on a longer domain.
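For reference, $S_{11}$ of a uniform RLGC line can be computed directly from the telegrapher's equations. The following Python sketch uses the standard two-port formulas of the kind implemented by {\tt rlgc2s}; the $50\,\Omega$ reference impedance and the frequency grid are assumptions made here for illustration.

```python
import numpy as np

# Sketch: S11 of a uniform RLGC transmission line of given length from
# the characteristic impedance Z0 and propagation constant gamma.
R, L, C, G = 0.8, 4.73e-9, 3.8e-12, 0.0    # per-unit-length (per cm)
length, Zref = 10.0, 50.0                   # cm, ohm (Zref is assumed)

def S11(f):
    w = 2 * np.pi * f
    Zs, Yp = R + 1j * w * L, G + 1j * w * C   # series Z, shunt Y per cm
    Z0 = np.sqrt(Zs / Yp)                     # characteristic impedance
    gl = np.sqrt(Zs * Yp) * length            # gamma * length
    num = (Z0**2 - Zref**2) * np.sinh(gl)
    den = (Z0**2 + Zref**2) * np.sinh(gl) + 2 * Z0 * Zref * np.cosh(gl)
    return num / den

f = np.linspace(1e6, 5e9, 500)   # start above DC: the bandpass case
s11 = S11(f)
assert np.all(np.abs(s11) <= 1.0)   # passivity: |S11| bounded by 1
```

The magnitude bound $|S_{11}|\leq 1$ checked in the last line is exactly the property used above to argue that $H(0)$ is finite.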
\begin{figure} \caption{Transfer function $H(x)$ and its causal Fourier continuations ${\mathcal C}^C(H)$ and ${\mathcal C}^R(H)$ in example \ref{transmission_line_example} with $M=1500$, $N=3000$, $b=4$ (real parts are shown).} \label{F9} \end{figure} Computations presented in this paper were done using Matlab on a MacBook Pro with 2.93\,GHz Intel Core 2 Duo Processor and 4 GB RAM. The CPU times (in seconds) for computing SVD, minimum norm solution of a linear system using the least squares method as well as overall CPU time for constructing a causal Fourier continuation with $M$ varying from $50$ to $1500$ with $N=2M$ are shown in Table \ref{T_CPU}. \begin{table}[h] \begin{center} \begin{tabular}{|c|c|c|c|c|} \hline \rule{0cm}{10pt} $N$ & $M$ & CPU time & CPU time & CPU time \\[3pt] & & SVD & minnmsvd & continuation \\[3pt] \hline \rule{0cm}{10pt} 100 & 50 & 0.0091 & 0.0141 & 0.0299 \\[3pt] \hline \rule{0cm}{10pt} 200 & 100 & 0.1910 & 0.0340 & 0.0781 \\[3pt] \hline \rule{0cm}{10pt} 500 & 250 & 0.9626 & 0.1646 & 0.4040 \\[3pt] \hline \rule{0cm}{10pt} 1000 & 500 & 3.9779 & 1.5061 & 3.9063 \\[3pt] \hline \rule{0cm}{10pt} 2000 & 1000& 25.5921 & 9.8634 & 35.9250 \\[3pt] \hline \rule{0cm}{10pt} 3000 & 1500 & 68.6258 & 48.4047 & 117.6096 \\[3pt] \hline \end{tabular} \end{center} \vskip5pt \caption{CPU times (in seconds) for computing SVD, finding minimum norm solution using the least squares method and overall CPU time for constructing causal Fourier continuation for $M$ varying from $50$ to $1500$ with $N=2M$.} \label{T_CPU} \end{table} \begin{figure} \caption{Errors $E_R(x)$ in example \ref{transmission_line_example} with $M=1500$, $N=3000$, $b=4$ and non-causal Gaussian perturbation (\ref{Gauss_pert}) with $a=10^{-6}$, $\sigma=10^{-2}/6$, centered at $x_0=0.1$. 
The errors $E_I(x)$ have similar spikes at causality violation locations.} \label{F13} \end{figure} Now we impose a Gaussian perturbation (\ref{Gauss_pert}) on the transfer function, as in example \ref{Example1}, with amplitude $a=10^{-6}$ to model a causality violation located at $x_0=0.1$ with standard deviation $\sigma=10^{-2}/6$, so that the ``support'' of the Gaussian is approximately $10^{-2}$ or $1/10$ of the entire bandwidth. This results in the error having very pronounced spikes, shown in Fig. \ref{F13}, of magnitude $4.5\times 10^{-7}$ in both $\mathop{\rm Re}\nolimits H$ and $\mathop{\rm Im}\nolimits H$ at the locations of the perturbation, while as before the error on the rest of the interval is approximately $10$ times smaller. We find that we are able to reliably detect a causality violation of amplitude as small as $a=5\times 10^{-14}$. In the papers \cite{Triverio_Grivet_Talocia_2006, Triverio_Grivet_Talocia_2006_2, Triverio_Grivet_Talocia_2008}, excellent work was done by utilizing the generalized dispersion relations with subtractions to check causality of raw frequency responses. The error analysis performed there provides explicit frequency dependent error estimates to account for finite frequency resolution (the discretization error in computing the Hilbert transform integral) and finite bandwidth (the truncation error due to using only a finite frequency interval instead of the entire frequency axis), and separates the causality violations from numerical discretization and domain truncation errors. It is shown that by using more subtractions, one can make the truncation error arbitrarily small, but the discretization error does not go away since it is fixed by the resolution of the given frequency responses. The authors report that if causality violations are too small (with amplitude smaller than $10^{-5}$) and smooth, using more subtractions may not affect the overall error since it is then dominated by the discretization error.
In addition, even with more subtraction points placed in the boundary regions close to $\pm w_{max}$, the truncation error always diverges at the bandwidth edges due to the missing out-of-band samples. In the current work, we are able to remove boundary artifacts and detect very small localized infinitely smooth (Gaussian) causality violations. The resolution of the data is also important, since the number $N$ of collocation points at which frequency responses are available dictates the number $M$ of Fourier coefficients with $N=2M$ and, hence, the sensitivity of the method to causality violations. For instance, in the above example that was also used in \cite{Triverio_Grivet_Talocia_2006} (but we use a smaller amplitude $a=10^{-6}$ and a Gaussian with a narrower $6\sigma=10^{-2}$ ``support''), with $N=250$ ($125$ points in $[0,0.5]$) we can detect a causality violation with $a=5\times 10^{-7}$, whereas $N=500$ and $N=1000$ are capable of detecting violations with $a=10^{-12}$ and $a=5\times 10^{-13}$, respectively. Similarly to \cite{Triverio_Grivet_Talocia_2008}, we also find that it is more difficult to detect wide causality violations. As the ``support'' $6\sigma$ of the Gaussian increases, the spikes in the reconstruction errors become wider and shorter, and eventually it is not possible to determine the location of the causality violation, since the error at the causality violation locations has the same order as the error on the rest of the interval. For $6\sigma\leq 0.1$, the spikes in the errors are still observable, whereas for bigger values $6\sigma \geq 0.2$, the reconstruction errors are uniform. Increasing the resolution of the data does not decrease the reconstruction errors, which indicates the presence of causality violations of the order of the reconstruction errors.
\subsection{Delayed Gaussian example} \label{delayed_Gaussian} Here we test the performance of the method on an example of a delayed Gaussian function that was used in \cite{Xu_Zeng_He_Han_2006} to check causality of interconnects through the minimum-phase and all-pass decomposition. We consider the impulse response function modeled by a Gaussian with the center of the peak at $t_d$ and standard deviation $\sigma$: \[ h(t,t_d)=\exp\left[-\frac{(t-t_d)^2}{2\sigma^2}\right]. \] If $t_d=0$, the Gaussian function $h(t,0)$ is even, so it cannot be causal. As $t_d$ increases, the center of the peak moves to the right, and for $t_d>3\sigma$ the impulse response function $h(t,t_d)$ gradually becomes causal. The corresponding transfer function is \[ H(w,t_d)=\exp\left[-2(\pi w \sigma)^2-2i\pi w\, t_d\right] \] which is an oscillatory (periodic in $w$) factor damped by an exponentially decaying function. We consider two regimes. In the first, $t_d<3\sigma$, so that the transfer function $H(w,t_d)$ is non-causal. In the second regime, the delay $t_d>3\sigma$ is large enough to make the transfer function $H(w,t_d)$ causal. We fix $b=2$, $\sigma=2$, sample $w$ from the interval $[0,3.6\times 10^{8}]$ Hz and consider first the case with $t_d=0.1\sigma$. The real part of $H(w,t_d)$ is shown in Fig. \ref{F15} together with its Fourier continuations. One can clearly see that the Fourier continuations do not match $\mathop{\rm Re}\nolimits H$ well. Instead, there are visible high frequency oscillations throughout the domain, as confirmed by analyzing the reconstruction errors $E_R(x)$. With $M=250$, $N=500$, the magnitude of the errors is about $2\times 10^{-3}$ (see Fig. \ref{F16}). When $N$ is increased in proportion $N=2M$, the error slightly increases. For example, with $M=1000$, $N=2000$, the errors are about $4\times 10^{-3}$. Varying the length $b$ of the extended domain does not decrease the magnitude of the error.
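The $3\sigma$ threshold can be checked numerically by measuring the fraction of the impulse response mass that sits at $t<0$. This is a sketch; the grid parameters are arbitrary.

```python
import numpy as np

# Sketch: quantify how causal the delayed Gaussian impulse response is
# by the fraction of its mass at t < 0; for t_d > 3*sigma this fraction
# is negligible and H(w, t_d) is effectively causal.
sigma = 2.0

def h(t, t_d):
    return np.exp(-(t - t_d) ** 2 / (2 * sigma ** 2))

def noncausal_fraction(t_d, n=20001):
    t = np.linspace(-20 * sigma, 20 * sigma, n)
    ht = h(t, t_d)
    return ht[t < 0].sum() / ht.sum()

# t_d = 0.1*sigma: almost half the mass sits at t < 0 (non-causal regime)
# t_d = 6*sigma:   the mass at t < 0 is negligible (effectively causal)
print(noncausal_fraction(0.1 * sigma), noncausal_fraction(6 * sigma))
```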
This is in agreement with the error estimates (\ref{combined_bound_noise}), (\ref{lambda12}), since in this case the reconstruction error is dominated by causality violations. Results for $E_I(x)$ are similar. \begin{figure} \caption{Noncausal transfer function $H(w,t_d)$ in example \ref{delayed_Gaussian} and its Fourier continuations ${\mathcal C}^C(H)$ and ${\mathcal C}^R(H)$ with $M=250$, $N=500$, $b=2$, $t_d=0.1\sigma$ (only real parts are shown).} \label{F15} \end{figure} \begin{figure} \caption{Errors $E_R(x)$ between the noncausal transfer function $H(w,t_d)$ and its causal Fourier continuations ${\mathcal C}^C(H)$ and ${\mathcal C}^R(H)$ in example \ref{delayed_Gaussian} with $M=250$, $N=500$, $b=2$, $t_d=0.1\sigma$. Errors $E_I(x)$ have the same order.} \label{F16} \end{figure} In the second case, we set $t_d=6\sigma$, which should give a causal transfer function. $\mathop{\rm Re}\nolimits H(w,t_d)$ together with its Fourier continuations are shown in Fig. \ref{F17}. Both reconstruction errors $E_R(x)$ and $E_I(x)$ drop to the order of $2\times 10^{-15}$ (see Fig. \ref{F18}). In this case, the transfer function is causal and infinitely smooth, so the error in approximating such a function with a causal Fourier series decays quickly even for moderate values of $M$. We observe a gradual change of the non-causal Gaussian function into a causal one as $t_d$ increases. Writing $t_d=\gamma\sigma$ and varying $\gamma=1$, $2$, $4$ and $5$, we find that the $l_\infty$ norms of both errors $E_R$ and $E_I$ are $5\times 10^{-6}$, $10^{-7}$, $3\times 10^{-12}$ and $10^{-14}$, respectively, i.e. the error decays as $t_d$ increases, as expected.
\begin{figure} \caption{Causal transfer function $H(w,t_d)$ in example \ref{delayed_Gaussian} and its Fourier continuation with $M=250$, $N=500$, $b=2$, $t_d=6\sigma$ (real parts are shown).} \label{F17} \end{figure} \begin{figure} \caption{Errors $E_R(x)$ between the causal transfer function $H(w,t_d)$ and its causal Fourier continuation in example \ref{delayed_Gaussian} with $M=250$, $N=500$, $b=2$, $t_d=6\sigma$. Errors $E_I(x)$ have the same order.} \label{F18} \end{figure} \section{Conclusions} \label{conclusions} We present a numerical method for verification and, if necessary, enforcement of causality of bandlimited tabulated frequency responses, which can be employed before the data are used for macromodeling. The approach is based on the Kramers-Kr\"onig dispersion relations and a construction of SVD-based causal Fourier continuations. This is done by calculating accurate causal Fourier series approximations of transfer functions that are not periodic in general, by allowing the causal Fourier series to be periodic in an extended domain. Causality is imposed directly on the Fourier coefficients using the dispersion relations, which require the real and imaginary parts of a causal function to be a Hilbert transform pair. The approach eliminates the necessity of approximating the behavior of the transfer function at infinity, which is known to be a significant source of errors in computation of the Hilbert transform defined on an infinite domain (or a semi-infinite domain, due to the spectrum symmetry) with data available only on a finite bandwidth. In addition, this procedure does not require direct numerical evaluation of the Hilbert transform. The Fourier coefficients are computed by solving an oversampled regularized least squares problem via a truncated SVD method, to keep the ill-conditioning of the system under control.
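The truncated SVD minimum norm solve can be sketched as follows; the cut-off value and the small test system here are illustrative, not the actual system (\ref{E3}) or (\ref{E4}).

```python
import numpy as np

# Sketch: minimum-norm least squares solution via truncated SVD;
# singular values below a relative cut-off xi are discarded to keep
# the ill-conditioning of the system under control.
def minnorm_tsvd(A, b, xi=1e-13):
    U, s, Vh = np.linalg.svd(A, full_matrices=False)
    keep = s > xi * s[0]                  # relative truncation threshold
    return Vh[keep].conj().T @ ((U[:, keep].conj().T @ b) / s[keep])

# usage on a rank-deficient consistent system: the solution returned is
# the minimum-norm one, here x = [1, 1]
A = np.array([[1.0, 1.0], [1.0, 1.0], [2.0, 2.0]])
b = np.array([2.0, 2.0, 4.0])
x = minnorm_tsvd(A, b)
```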
Causal Fourier continuations, even with a moderate number of Fourier coefficients, are typically oscillatory in the extended domain, but this does not significantly affect the quality of the reconstruction of the transfer function on the original frequency domain. The length of the extended domain may be tuned to a more optimal value that decreases the overall reconstruction errors. The error analysis performed for the proposed method separates the error of approximation of a transfer function with a causal Fourier series and the error due to truncation of singular values from the causality violations, i.e. noise or approximation errors in data obtained from measurements or numerical simulations, respectively. The obtained upper bounds on these errors can be used to verify causality of the given frequency responses. The method is applicable to both baseband and bandpass regimes and does not require the data points to be equally spaced. It shows high accuracy and robustness and is capable of detecting very small localized causality violations with amplitude close to machine precision. The proposed technique is applied to several analytic and simulated examples with and without causality violations. The results demonstrate excellent performance of the method, in agreement with the obtained error estimates. \section{Acknowledgments} We thank the reviewers for their very detailed and constructive comments. This work was funded by the Micron Foundation. The author L.L.B. would also like to acknowledge the availability of computational resources made possible through the National Science Foundation Major Research Instrumentation Program: grant no. 1229766. \begin{IEEEbiography} [{\includegraphics[width=1in,height=1.25in,clip,keepaspectratio]{Lyudmyla_Barannyk.jpg}}]{Lyudmyla L. Barannyk} received the M.S. degree in Applied Mathematics and Ph.D. degree in Mathematical Sciences from New Jersey Institute of Technology in 2000 and 2003, respectively.
She has been an Assistant Professor in the Department of Mathematics, University of Idaho, since 2007. She was a Postdoctoral Assistant Professor in the Department of Mathematics, University of Michigan, from 2003 to 2007. Her research interests are electrical modeling and characterization of interconnect packages, scientific computing, mathematical modeling, dimension reduction of large ODE systems, numerical methods for ill-posed problems, fluid dynamics, interfacial instability, PDEs, pseudo-spectral methods, boundary integral methods, and grid-free numerical methods. \end{IEEEbiography} \begin{IEEEbiography} [{\includegraphics[width=1in,height=1.25in,clip,keepaspectratio]{Hazem_Photo.jpg}}]{Hazem A. Aboutaleb} is a Ph.D. student in the Department of Electrical and Computer Engineering, University of Idaho. He received his B.Sc. and M.S. degrees in electrical engineering from The Military Technical College, Cairo, Egypt, in 1994 and 2007, respectively. He was awarded a doctoral fellowship by the Egyptian Government in 2011 to join the University of Idaho as a doctoral student. His research interests are in the area of microelectronics fabrication, with emphasis on macromodeling of microelectronics packages, causality verification and enforcement of macromodels, and PI and SI co-simulation analysis. \end{IEEEbiography} \begin{IEEEbiography} [{\includegraphics[width=1in,height=1.25in,clip,keepaspectratio]{elshabini116x89.jpg}}] {Aicha Elshabini} is a Distinguished Professor in the Electrical and Computer Engineering department at the University of Idaho. She is a Professional Engineer, an IEEE Fellow (CPMT) and an IMAPS Fellow. She is the current advisor for the National Society of Black Engineers (NSBE), the Society of Women Engineers (SWE), and the International Society of Microelectronics and Electronic Packaging (IMAPS).
Prior to this role, she served as the Dean of Engineering at the University of Idaho, and as department head for the Electrical Engineering department and the Computer Science and Computer Engineering department at the University of Arkansas. Professor Elshabini holds a B.S.E.E. degree from Cairo University in Electronics \& Communications (a five-year academic program, British system), an M.S.E.E. from the University of Toledo, Ohio, in Microelectronics, and a Ph.D. in Electrical Engineering from the University of Colorado at Boulder in Solid State Devices and Optoelectronics (1978). Elshabini was awarded the 1996 John A. Wagnon Technical Achievements Award, the 2006 Daniel C. Hughes, Jr. Memorial Award, the 2007 Outstanding Educator Award, and the 2011 President Award. \end{IEEEbiography} \begin{IEEEbiography} [{\includegraphics[width=1in,height=1.25in,clip,keepaspectratio]{fred_Barlow196.jpg}}]{Fred Barlow} is a professor and the department chair in the Department of Electrical and Computer Engineering at the University of Idaho, with an emphasis on electronic packaging. In the past, Dr. Barlow worked for several universities, including Virginia Tech and the University of Arkansas, where he held the position of associate department head. Professor Barlow has served as the major professor for more than twenty graduate students and served as PI or Co-PI for over six million dollars of funded research engaging the microelectronics industry. Dr. Barlow has published over 100 papers and is coeditor of The Handbook of Thin Film Technology (McGraw Hill, 1998), as well as of the Handbook of Ceramic Interconnect Technology (CRC Press, 2007). He is also a fellow of the International Microelectronics and Packaging Society (IMAPS) and a senior member of the Institute of Electrical and Electronic Engineers (IEEE). \end{IEEEbiography} \end{document}
\begin{definition}[Definition:Local Ring/Noncommutative/Definition 2] Let $R$ be a ring with unity. $R$ is a '''local ring''' {{iff}} it has a unique maximal right ideal. \end{definition}
\begin{definition}[Definition:Bilinear Space] Let $\mathbb K$ be a field. A '''bilinear space''' over $\mathbb K$ is a pair $\left({V, f}\right)$ where: :$V$ is a vector space over $\mathbb K$ of finite dimension $n > 0$ :$f$ is a bilinear form on $V$. \end{definition}
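For illustration (an example supplied here, not part of the source entry): take $V = \mathbb K^n$ with the standard dot product
:$f \left({x, y}\right) = \sum_{i = 1}^n x_i y_i$
Then $\left({\mathbb K^n, f}\right)$ is a bilinear space, since this $f$ is linear in each argument separately.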
Measurement of forward Z → e⁺e⁻ production at √s = 8 TeV
R. Aaij, B. Adeva, M. Adinolfi, A. Affolder, Z. Ajaltouni, S. Akar, J. Albrecht, F. Alessio, M. Alexander, S. Ali, G. Alkhazov, P. Alvarez Cartelle, A. A. Alves, S. Amato, S. Amerio, Y. Amhis, L. An, L. Anderlini, J. Anderson, M. Andreotti, J. E. Andrews, R. B. Appleby, O. Aquines Gutierrez, F. Archilli, A. Artamonov, M. Artuso, E. Aslanides, G. Auriemma, M. Baalouch, S. Bachmann, J. J. Back, A. Badalov, C. Baesso, W. Baldini, R. J. Barlow, C. Barschel, S. Barsuk, W. Barter, V. Batozskaya, V. Battista, A. Bay, L. Beaucourt, J. Beddow, F. Bedeschi, I. Bediaga, I. Belyaev, E. Ben-Haim, G. Bencivenni, S. Benson, J. Benton, A. Berezhnoy, R. Bernet, A. Bertolin, M. O. Bettler, M. van Beuzekom, A. Bien, S. Bifani, T. Bird, A. Bizzeti, T. Blake, F. Blanc, J. Blouw, S. Blusk, V. Bocci, A. Bondar, N. Bondar, W. Bonivento, S. Borghi, A. Borgia, M. Borsato, T. J. V. Bowcock, E. Bowen, C. Bozzi, S. Braun, D. Brett, M. Britsch, T. Britton, J. Brodzicka, N. H. Brook, A. Bursche, J. Buytaert, S. Cadeddu, R. Calabrese, M. Calvi, M. Calvo Gomez, P. Campana, D. Campora Perez, L. Capriotti, A. Carbone, G. Carboni, and 70 further authors
Abstract: A measurement of the cross-section for Z-boson production in the forward region of pp collisions at 8 TeV centre-of-mass energy is presented. The measurement is based on a sample of Z → e⁺e⁻ decays reconstructed using the LHCb detector, corresponding to an integrated luminosity of 2.0 fb⁻¹. The acceptance is defined by the requirements 2.0 < η < 4.5 and p_T > 20 GeV for the pseudorapidities and transverse momenta of the leptons. Their invariant mass is required to lie in the range 60–120 GeV. The cross-section is determined to be
$$ \sigma \left(\mathrm{pp}\to \mathrm{Z}\to {\mathrm{e}}^{+}{\mathrm{e}}^{-}\right)=93.81\pm 0.41\,(\mathrm{stat})\pm 1.48\,(\mathrm{syst})\pm 1.14\,(\mathrm{lumi})\ \mathrm{pb}, $$
where the first uncertainty is statistical and the second reflects all systematic effects apart from that arising from the luminosity, which is given as the third uncertainty. Differential cross-sections are presented as functions of the Z-boson rapidity and of the angular variable ϕ*, which is related to the Z-boson transverse momentum.
Published: 1 May 2015 (published externally). Journal of High Energy Physics, vol. 2015, no. 5, article 109, pp. 1–21. doi:10.1007/JHEP05(2015)109
Abstract: Often a "classification problem" can be regarded as an equivalence relation on a standard Borel space (i.e., a Polish space equipped just with its σ-algebra of Borel sets). For instance, the classification problem for countable linear orders (on $\omega$) corresponds to the isomorphism equivalence relation on a suitable subspace of $\mathcal P(\omega\times\omega)$. This allows for an analysis of the complexity of the isomorphism problem for many classes of countable structures using techniques from an area of descriptive set theory called Borel equivalence relations. In this talk we shall describe some recent results in Borel equivalence relations, as well as a couple of interactions with model theory.
Shor's algorithm

Shor's algorithm is a quantum algorithm for finding the prime factors of an integer. It was developed in 1994 by the American mathematician Peter Shor.[1][2] It is one of the few known quantum algorithms with compelling potential applications and strong evidence of superpolynomial speedup compared to the best known classical (that is, non-quantum) algorithms.[3] On the other hand, factoring numbers of practical significance requires far more qubits than will be available in the near future.[4] Another concern is that noise in quantum circuits may undermine results,[5] requiring additional qubits for quantum error correction. Shor proposed multiple similar algorithms solving the factoring problem, the discrete logarithm problem, and the period finding problem. "Shor's algorithm" usually refers to his algorithm solving factoring, but may also refer to each of the three. The discrete logarithm algorithm and the factoring algorithm are instances of the period finding algorithm, and all three are instances of the hidden subgroup problem. Shor's algorithm makes it possible to factor an integer $N$ on a quantum computer in polylogarithmic time, meaning that the running time of the algorithm is polynomial in $\log N$.[6] Specifically, it takes quantum gates of order $O\!\left((\log N)^{2}(\log \log N)(\log \log \log N)\right)$ using fast multiplication,[7] or even $O\!\left((\log N)^{2}(\log \log N)\right)$ utilizing the asymptotically fastest multiplication algorithm currently known, due to Harvey and van der Hoeven,[8] thus demonstrating that the integer factorization problem can be efficiently solved on a quantum computer and is consequently in the complexity class BQP.
This is significantly faster than the most efficient known classical factoring algorithm, the general number field sieve, which works in sub-exponential time: $O\!\left(e^{1.9(\log N)^{1/3}(\log \log N)^{2/3}}\right)$.[9]

Feasibility and impact

If a quantum computer with a sufficient number of qubits could operate without succumbing to quantum noise and other quantum-decoherence phenomena, then Shor's algorithm could be used to break public-key cryptography schemes, such as:
• The RSA scheme
• The Finite Field Diffie-Hellman key exchange
• The Elliptic Curve Diffie-Hellman key exchange[10]
RSA is based on the assumption that factoring large integers is computationally intractable. As far as is known, this assumption is valid for classical (non-quantum) computers; no classical algorithm is known that can factor integers in polynomial time. However, Shor's algorithm shows that factoring integers is efficient on an ideal quantum computer, so it may be feasible to defeat RSA by constructing a large quantum computer. It was also a powerful motivator for the design and construction of quantum computers, and for the study of new quantum-computer algorithms. It has also facilitated research on new cryptosystems that are secure from quantum computers, collectively called post-quantum cryptography.

Physical implementation

Given the high error rates of contemporary quantum computers and too few qubits to use quantum error correction, laboratory demonstrations obtain correct results only in a fraction of attempts.
In 2001, Shor's algorithm was demonstrated by a group at IBM, who factored $15$ into $3\times 5$, using an NMR implementation of a quantum computer with $7$ qubits.[11] After IBM's implementation, two independent groups implemented Shor's algorithm using photonic qubits, emphasizing that multi-qubit entanglement was observed when running the Shor's algorithm circuits.[12][13] In 2012, the factorization of $15$ was performed with solid-state qubits.[14] Later, in 2012, the factorization of $21$ was achieved.[15] In 2019, an attempt was made to factor the number $35$ using Shor's algorithm on an IBM Q System One, but the algorithm failed because of accumulating errors.[16] Though larger numbers have been factored by quantum computers using other algorithms,[17] these algorithms are similar to classical brute-force checking of factors, so unlike Shor's algorithm, they are not expected to ever perform better than classical factoring algorithms.[18] Theoretical analyses of Shor's algorithm assume a quantum computer free of noise and errors. However, near-term practical implementations will have to deal with such undesired phenomena (when more qubits are available, quantum error correction can help). In 2023, Jin-Yi Cai studied the impact of noise and concluded that "Shor's Algorithm Does Not Factor Large Integers in the Presence of Noise."[5]

Algorithm

The problem that we are trying to solve is: given an odd composite number $N$, find its integer factors. To achieve this, Shor's algorithm consists of two parts:
1. A classical reduction of the factoring problem to the problem of order-finding. This reduction is similar to that used for other factoring algorithms, such as the quadratic sieve.
2. A quantum algorithm to solve the order-finding problem.

Classical reduction

A complete factoring algorithm is possible using extra classical methods if we are able to factor $N$ into just two integers $p$ and $q$; therefore the algorithm only needs to achieve that.
A basic observation is that, using Euclid's algorithm, we can always compute the GCD between two integers efficiently. In particular, this means we can check efficiently whether $N$ is even, in which case 2 is trivially a factor. Let us thus assume that $N$ is odd for the remainder of this discussion. Afterwards, we can use efficient classical algorithms to check if $N$ is a prime power;[19] again, the rest of the algorithm requires that $N$ is not a prime power, and if it is, $N$ has been completely factored. If those easy cases do not produce a nontrivial factor of $N$, the algorithm proceeds to handle the remaining case. We pick a random integer $2\leq a<N$. A possible nontrivial divisor of $N$ can be found by computing $\gcd(a,N)$, which can be done classically and efficiently using the Euclidean algorithm. If this produces a nontrivial factor (meaning $\gcd(a,N)\neq 1$), the algorithm is finished, and the other nontrivial factor is $ {\frac {N}{\gcd(a,N)}}$. If a nontrivial factor was not identified, then that means that $N$ and the choice of $a$ are coprime. Here, the algorithm runs the quantum subroutine, which will return the order $r$ of $a$, meaning $a^{r}\equiv 1{\bmod {N}}.$ The quantum subroutine requires that $a$ and $N$ are coprime,[2] which is true since at this point in the algorithm, $\gcd(a,N)$ did not produce a nontrivial factor of $N$. It can be seen from the equivalence that $N$ divides $a^{r}-1$, written $N\mid a^{r}-1$. This can be factored using the difference of squares: $N\mid (a^{r/2}-1)(a^{r/2}+1)$ Since we have factored the expression in this way, the algorithm doesn't work for odd $r$ (because $a^{r/2}$ must be an integer), meaning the algorithm would have to restart with a new $a$. Hereafter we can therefore assume $r$ is even. It cannot be the case that $N\mid a^{r/2}-1$, since this would imply $a^{r/2}\equiv 1{\bmod {N}}$, making $ {\frac {r}{2}}$ a smaller positive exponent at which $a$ is congruent to $1$ and contradicting the minimality of the order $r$.
At this point, it may or may not be the case that $N\mid a^{r/2}+1$. If it is not true that $N\mid a^{r/2}+1$, then that means we are able to find a nontrivial factor of $N$. We compute $d=\gcd(N,a^{r/2}-1)$. If $d=1$, then that means $N\mid a^{r/2}+1$ was true, and a nontrivial factor of $N$ cannot be obtained from this $a$, and the algorithm must restart with a new $a$. Otherwise, we have found a nontrivial factor of $N$, with the other being $ {\frac {N}{d}}$, and the algorithm is finished. For this step, it is also equivalent to compute $\gcd(N,a^{r/2}+1)$; it will produce a nontrivial factor if $\gcd(N,a^{r/2}-1)$ is nontrivial, and will not if it's trivial (where $N\mid a^{r/2}+1$). Restated concisely, the algorithm is as follows: let $N$ be odd, and not a prime power. We want to output two nontrivial factors of $N$.
1. Pick a random number $1<a<N$.
2. Compute $K=\gcd(a,N)$, the greatest common divisor of $a$ and $N$.
3. If $K\neq 1$, then $K$ is a nontrivial factor of $N$, with the other factor being $ {\frac {N}{K}}$, and we are done.
4. Otherwise, use the quantum subroutine to find the order $r$ of $a$.
5. If $r$ is odd, then go back to step 1.
6. Compute $g=\gcd(N,a^{r/2}+1)$. If $g$ is nontrivial, the other factor is $ {\frac {N}{g}}$, and we are done. Otherwise, go back to step 1.
It has been shown that this will be likely to succeed after a few runs.[2]

Quantum order-finding subroutine

The goal of the quantum subroutine of Shor's algorithm is, given coprime integers $N$ and $1<a<N$, to find the order $r$ of $a$ modulo $N$, which is the smallest positive integer such that $a^{r}\equiv 1{\pmod {N}}$. To achieve this, Shor's algorithm uses a quantum circuit involving two registers. The second register uses $n$ qubits, where $n$ is the smallest integer such that $N\leq 2^{n}$. The size of the first register determines how accurate of an approximation the circuit produces. It can be shown that using $2n+1$ qubits gives sufficient accuracy to find $r$.
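The classical reduction can be sketched in Python. This is an illustrative sketch of our own, not code from the sources cited: the brute-force `order` function stands in for the quantum order-finding subroutine just introduced, which is the only step a quantum computer actually accelerates.

```python
import math
import random

def order(a, N):
    # Brute-force order finding: the smallest r > 0 with a^r ≡ 1 (mod N).
    # This is the step Shor's quantum subroutine replaces; it assumes
    # gcd(a, N) = 1, so such an r exists.
    r, x = 1, a % N
    while x != 1:
        x = (x * a) % N
        r += 1
    return r

def shor_classical(N, rng=random.Random(0)):
    # Classical reduction: assumes N is odd, composite, and not a prime power.
    while True:
        a = rng.randrange(2, N)
        K = math.gcd(a, N)
        if K != 1:
            return K, N // K          # lucky guess: a shares a factor with N
        r = order(a, N)
        if r % 2 == 1:
            continue                  # odd order: restart with a new a
        g = math.gcd(N, pow(a, r // 2, N) + 1)
        if g != 1 and g != N:
            return g, N // g
        # otherwise N divides a^{r/2} + 1: restart with a new a

p, q = shor_classical(15)
print(sorted((p, q)))  # → [3, 5]
```

For $N = 15$ every successful choice of $a$ yields the factors 3 and 5; the random seed only affects how many restarts occur before a good $a$ is drawn.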
The exact quantum circuit depends on the parameters $a$ and $N$, which define the problem. The algorithm consists of two main steps:
1. Use quantum phase estimation with unitary $U$ representing the operation of multiplying by $a$ (modulo $N$), and input state $|0\rangle ^{\otimes 2n+1}\otimes |1\rangle $ (where the second register is $|1\rangle $ made from $n$ qubits). The eigenvalues of this $U$ encode information about the period, and $|1\rangle $ can be seen to be writable as a sum of its eigenvectors. Thanks to these properties, the quantum phase estimation stage gives as output a random integer of the form ${\frac {j}{r}}2^{2n+1}$ for random $j=0,1,...,r-1$.
2. Use the continued fractions algorithm to extract the period $r$ from the measurement outcomes obtained in the previous stage. This is a procedure to post-process (with a classical computer) the measurement data obtained from measuring the output quantum states, and retrieve the period.
The connection with quantum phase estimation was not discussed in the original formulation of Shor's algorithm,[2] but was later proposed by Kitaev.[20]

Quantum phase estimation

In general the quantum phase estimation algorithm, for any unitary $U$ and eigenstate $|\psi \rangle $ such that $U|\psi \rangle =e^{2\pi i\theta }|\psi \rangle $, sends input states $|0\rangle |\psi \rangle $ into output states close to $|\phi \rangle |\psi \rangle $, where $\phi $ is an integer close to $2^{2n+1}\theta $. In other words, it sends each eigenstate $|\psi _{j}\rangle $ of $U$ into a state encoding an approximation of the associated eigenvalue.
For the purposes of quantum order-finding, we employ this strategy using the unitary defined by the action $U|k\rangle ={\begin{cases}|ak{\pmod {N}}\rangle &0\leq k<N,\\|k\rangle &N\leq k<2^{n}.\end{cases}}$ The action of $U$ on states $|k\rangle $ with $N\leq k<2^{n}$ is not crucial to the functioning of the algorithm, but needs to be included to ensure the overall transformation is a well-defined quantum gate. Implementing the circuit for quantum phase estimation with $U$ requires being able to efficiently implement the gates $U^{2^{j}}$. This can be accomplished via modular exponentiation, which is the slowest part of the algorithm. The gate thus defined satisfies $U^{r}=I$, which immediately implies that its eigenvalues are the $r$-th roots of unity $\omega _{r}^{k}=e^{2\pi ik/r}$. Furthermore, each eigenvalue $\omega _{r}^{j}$ has an eigenvector of the form $ |\psi _{j}\rangle =r^{-1/2}\sum _{k=0}^{r-1}\omega _{r}^{-kj}|a^{k}\rangle $, and these eigenvectors are such that ${\begin{aligned}{\frac {1}{\sqrt {r}}}\sum _{j=0}^{r-1}|\psi _{j}\rangle &={\frac {1}{r}}\sum _{j=0}^{r-1}\sum _{k=0}^{r-1}\omega _{r}^{jk}|a^{k}\rangle \\&=|1\rangle +{\frac {1}{r}}\sum _{k=1}^{r-1}\left(\sum _{j=0}^{r-1}\omega _{r}^{jk}\right)|a^{k}\rangle =|1\rangle ,\end{aligned}}$ where the last identity follows from the geometric series formula, which implies $ \sum _{j=0}^{r-1}\omega _{r}^{jk}=0$ for $0<k<r$. Using quantum phase estimation on an input state $|0\rangle ^{\otimes 2n+1}|\psi _{j}\rangle $ would result in an output $|\phi _{j}\rangle |\psi _{j}\rangle $ with each $\phi _{j}$ representing a superposition of integers that approximate $2^{2n+1}j/r$, with the most accurate measurement having a probability of $ {\frac {4}{\pi ^{2}}}\approx 0.405$ of being measured (which can be made arbitrarily high using extra qubits). Thus using as input $|0\rangle ^{\otimes 2n+1}|1\rangle $ instead, the output is a superposition of such states with $j=0,...,r-1$.
In other words, using this input amounts to running quantum phase estimation on a superposition of eigenvectors of $U$. More explicitly, the quantum phase estimation circuit implements the transformation $|0\rangle ^{\otimes 2n+1}|1\rangle ={\frac {1}{\sqrt {r}}}\sum _{j=0}^{r-1}|0\rangle ^{\otimes 2n+1}|\psi _{j}\rangle \to {\frac {1}{\sqrt {r}}}\sum _{j=0}^{r-1}|\phi _{j}\rangle |\psi _{j}\rangle .$ Measuring the first register, we now have a balanced probability $1/r$ to find each $|\phi _{j}\rangle $, each one giving an integer approximation to $2^{2n+1}j/r$, which can be divided by $2^{2n+1}$ to get a decimal approximation for $j/r$.

Continued fraction algorithm to retrieve the period

Then, we apply the continued fractions algorithm to find integers $ b$ and $ c$, where $ {\frac {b}{c}}$ gives the best fraction approximation for the approximation measured from the circuit, for $ b,c<N$ and coprime $ b$ and $ c$. The number of qubits in the first register, $2n+1$, which determines the accuracy of the approximation, guarantees that ${\frac {b}{c}}={\frac {j}{r}}$ given that the best approximation from the superposition of $ |\phi _{j}\rangle $ was measured (which can be made arbitrarily likely by using extra bits and truncating the output). However, while $ b$ and $ c$ are coprime, it may be the case that $ j$ and $ r$ are not coprime. Because of that, $ b$ and $ c$ may have lost some factors that were in $ j$ and $ r$. This can be remedied by rerunning the quantum subroutine an arbitrary number of times, to produce a list of fraction approximations ${\frac {b_{1}}{c_{1}}},\;{\frac {b_{2}}{c_{2}}},\;\ldots ,\;{\frac {b_{s}}{c_{s}}}$ where $ s$ is the number of times the algorithm was run. Each $ c_{k}$ will have different factors taken out of it because the circuit will (likely) have measured multiple different possible values of $ j$.
To recover the actual $ r$ value, we can take the least common multiple of each $ c_{k}$: $\mathrm {lcm} (c_{1},c_{2},\ldots ,c_{s})$ The least common multiple will be the order $ r$ of the original integer $ a$ with high probability.

Choosing the size of the first register

Phase estimation requires choosing the size of the first register to determine the accuracy of the algorithm, and for the quantum subroutine of Shor's algorithm, $2n+1$ qubits is sufficient to guarantee that the optimal bitstring measured from phase estimation (meaning the $|k\rangle $ where $ k/2^{2n+1}$ is the most accurate approximation of the phase from phase estimation) will allow the actual value of $r$ to be recovered. Each $|\phi _{j}\rangle $ before measurement in Shor's algorithm represents a superposition of integers approximating $2^{2n+1}j/r$. Let $|k\rangle $ represent the most optimal integer in $|\phi _{j}\rangle $. The following theorem guarantees that the continued fractions algorithm will recover $j/r$ from $k/2^{2{n}+1}$:

Theorem — If $j$ and $r$ are $n$-bit integers, and $\left\vert {\frac {j}{r}}-\phi \right\vert \leq {\frac {1}{2r^{2}}}$ then the continued fractions algorithm run on $\phi $ will recover both $ {\frac {j}{\gcd(j,\;r)}}$ and $ {\frac {r}{\gcd(j,\;r)}}$.[3]

As $k$ is the optimal bitstring from phase estimation, $k/2^{2{n}+1}$ is accurate to $j/r$ by $2n+1$ bits. Thus, $\left\vert {\frac {j}{r}}-{\frac {k}{2^{2n+1}}}\right\vert \leq {\frac {1}{2^{2{n}+1}}}\leq {\frac {1}{2N^{2}}}\leq {\frac {1}{2r^{2}}}$ which implies that the continued fractions algorithm will recover $j$ and $r$ (or both with their greatest common divisor taken out).

The bottleneck

The runtime bottleneck of Shor's algorithm is quantum modular exponentiation, which is by far slower than the quantum Fourier transform and classical pre-/post-processing. There are several approaches to constructing and optimizing circuits for modular exponentiation.
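Returning to the classical post-processing described above: it can be sketched with Python's standard `fractions` module, whose `Fraction.limit_denominator` method performs exactly this continued-fraction search for the best approximation with bounded denominator. This is an illustrative sketch of our own; the measurement outcomes are simulated, since only the classical half of the subroutine is shown.

```python
from fractions import Fraction
from math import lcm  # Python 3.9+

def recover_order(measurements, first_register_qubits, N):
    # Each measured integer k approximates (j / r) * 2^q for some random j.
    # limit_denominator(N - 1) runs the continued-fraction algorithm to find
    # the best fraction b/c with c < N; its denominator is r / gcd(j, r).
    q = first_register_qubits
    denominators = []
    for k in measurements:
        frac = Fraction(k, 2 ** q).limit_denominator(N - 1)
        if frac.numerator != 0:  # the outcome j = 0 carries no information
            denominators.append(frac.denominator)
    # Factors of r lost to gcd(j, r) are restored by the least common multiple.
    return lcm(*denominators)

# Simulated run: a = 2 has order r = 6 modulo N = 21; n = 5, so 2n + 1 = 11.
r, N, q = 6, 21, 11
measurements = [round(j / r * 2 ** q) for j in (1, 2, 5)]  # three outcomes
print(recover_order(measurements, q, N))  # → 6
```

Here the single outcome for $j = 2$ would only recover the denominator 3; combining it with the $j = 1$ and $j = 5$ outcomes via the lcm restores $r = 6$.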
The simplest and (currently) most practical approach is to mimic conventional arithmetic circuits with reversible gates, starting with ripple-carry adders. Knowing the base and the modulus of exponentiation facilitates further optimizations.[21][22] Reversible circuits typically use on the order of $n^{3}$ gates for $n$ qubits. Alternative techniques asymptotically improve gate counts by using quantum Fourier transforms, but are not competitive with fewer than 600 qubits owing to high constants.

Period finding and discrete logarithms

Shor's algorithms for the discrete log and the order finding problems are instances of an algorithm solving the period finding problem. All three are instances of the hidden subgroup problem.

Shor's algorithm for discrete logarithms

Given a group $G$ with order $p$ and generator $g\in G$, suppose we know that $x=g^{r}\in G$, for some $r\in \mathbb {Z} _{p}$, and we wish to compute $r$, which is the discrete logarithm: $r={\log _{g}}(x)$. Consider the abelian group $\mathbb {Z} _{p}\times \mathbb {Z} _{p}$, where each factor corresponds to modular addition of values. Now, consider the function $f\colon \mathbb {Z} _{p}\times \mathbb {Z} _{p}\to G\;;\;f(a,b)=g^{a}x^{-b}.$ This gives us an abelian hidden subgroup problem, where $f$ corresponds to a group homomorphism. The kernel corresponds to the multiples of $(r,1)$. So, if we can find the kernel, we can find $r$. A quantum algorithm for solving this problem exists. This algorithm is, like the factor-finding algorithm, due to Peter Shor, and both are implemented by creating a superposition through using Hadamard gates, followed by implementing $f$ as a quantum transform, followed finally by a quantum Fourier transform.[3] Due to this, the quantum algorithm for computing the discrete logarithm is also occasionally referred to as "Shor's Algorithm."
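The kernel structure just described can be checked classically on a toy instance (the group and the numbers below are our own illustrative choices, not taken from the references): with $G$ the order-11 subgroup of $\mathbb {Z} _{23}^{*}$ generated by $g=2$, the function $f(a,b)=g^{a}x^{-b}$ is constant precisely on cosets of the subgroup generated by $(r,1)$, and the quantum algorithm's job is to find that kernel without the exhaustive enumeration used here.

```python
q, p, g, r = 23, 11, 2, 7      # G = <2> inside Z_23*, |G| = p = 11
x = pow(g, r, q)               # x = g^r; we pretend r is unknown
x_inv = pow(x, -1, q)          # modular inverse (Python 3.8+)

def f(a, b):
    # The hiding function f(a, b) = g^a * x^(-b), a homomorphism Z_p x Z_p -> G.
    return (pow(g, a, q) * pow(x_inv, b, q)) % q

# The kernel is the subgroup {k * (r, 1) : k in Z_p}.
kernel = {((r * k) % p, k % p) for k in range(p)}
for a in range(p):
    for b in range(p):
        level_set = {(aa, bb) for aa in range(p) for bb in range(p)
                     if f(aa, bb) == f(a, b)}
        coset = {((a + da) % p, (b + db) % p) for (da, db) in kernel}
        assert level_set == coset  # f is constant exactly on kernel cosets

# The kernel element with second coordinate 1 reveals the discrete log r.
print(next(da for (da, db) in kernel if db == 1))  # → 7
```

Since $f(a,b)=g^{a-rb}$, two inputs collide exactly when their difference is a multiple of $(r,1)$, which is what the exhaustive check above confirms.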
The order-finding problem can also be viewed as a hidden subgroup problem.[3] To see this, consider the group of integers under addition, and for a given $a\in \mathbb {Z} $ such that $a^{r}=1$, the function $f\colon \mathbb {Z} \to \mathbb {Z} \;;\;f(x)=a^{x},\;f(x+r)=f(x).$ For any finite abelian group $G$, a quantum algorithm exists for solving the hidden subgroup for $G$ in polynomial time.[3]

See also

• GEECM, a factorization algorithm said to be "often much faster than Shor's"[23]
• Grover's algorithm

References

1. Shor, P.W. (1994). "Algorithms for quantum computation: Discrete logarithms and factoring". Proceedings 35th Annual Symposium on Foundations of Computer Science. IEEE Comput. Soc. Press. pp. 124–134. doi:10.1109/sfcs.1994.365700. ISBN 0818665807. S2CID 15291489.
2. Shor, Peter W. (October 1997). "Polynomial-Time Algorithms for Prime Factorization and Discrete Logarithms on a Quantum Computer". SIAM Journal on Computing. 26 (5): 1484–1509. arXiv:quant-ph/9508027. doi:10.1137/S0097539795293172. ISSN 0097-5397. S2CID 2337707.
3. Nielsen, Michael A.; Chuang, Isaac L. (9 December 2010). Quantum Computation and Quantum Information (PDF) (7th ed.). Cambridge University Press. ISBN 978-1-107-00217-3. Archived (PDF) from the original on 2019-07-11. Retrieved 24 April 2022.
4. Gidney, Craig; Ekerå, Martin (2021). "How to factor 2048 bit RSA integers in 8 hours using 20 million noisy qubits". Quantum. 5: 433. arXiv:1905.09749. doi:10.22331/q-2021-04-15-433. S2CID 162183806.
5. Cai, Jin-Yi (June 15, 2023). "Shor's Algorithm Does Not Factor Large Integers in the Presence of Noise". arXiv:2306.10072 [quant-ph].
6. See also pseudo-polynomial time.
7. Beckman, David; Chari, Amalavoyal N.; Devabhaktuni, Srikrishna; Preskill, John (1996). "Efficient Networks for Quantum Factoring" (PDF). Physical Review A. 54 (2): 1034–1063. arXiv:quant-ph/9602016. Bibcode:1996PhRvA..54.1034B. doi:10.1103/PhysRevA.54.1034. PMID 9913575. S2CID 2231795.
8.
Harvey, David; van der Hoeven, Joris (2020). "Integer multiplication in time O(n log n)". Annals of Mathematics. doi:10.4007/annals.2021.193.2.4. S2CID 109934776.
9. "Number Field Sieve". wolfram.com. Retrieved 23 October 2015.
10. Roetteler, Martin; Naehrig, Michael; Svore, Krysta M.; Lauter, Kristin E. (2017). "Quantum resource estimates for computing elliptic curve discrete logarithms". In Takagi, Tsuyoshi; Peyrin, Thomas (eds.). Advances in Cryptology – ASIACRYPT 2017 – 23rd International Conference on the Theory and Applications of Cryptology and Information Security, Hong Kong, China, December 3–7, 2017, Proceedings, Part II. Lecture Notes in Computer Science. Vol. 10625. Springer. pp. 241–270. arXiv:1706.06752. doi:10.1007/978-3-319-70697-9_9.
11. Vandersypen, Lieven M. K.; Steffen, Matthias; Breyta, Gregory; Yannoni, Costantino S.; Sherwood, Mark H.; Chuang, Isaac L. (2001). "Experimental realization of Shor's quantum factoring algorithm using nuclear magnetic resonance" (PDF). Nature. 414 (6866): 883–887. arXiv:quant-ph/0112176. Bibcode:2001Natur.414..883V. CiteSeerX 10.1.1.251.8799. doi:10.1038/414883a. PMID 11780055. S2CID 4400832.
12. Lu, Chao-Yang; Browne, Daniel E.; Yang, Tao; Pan, Jian-Wei (2007). "Demonstration of a Compiled Version of Shor's Quantum Factoring Algorithm Using Photonic Qubits" (PDF). Physical Review Letters. 99 (25): 250504. arXiv:0705.1684. Bibcode:2007PhRvL..99y0504L. doi:10.1103/PhysRevLett.99.250504. PMID 18233508. S2CID 5158195.
13. Lanyon, B. P.; Weinhold, T. J.; Langford, N. K.; Barbieri, M.; James, D. F. V.; Gilchrist, A.; White, A. G. (2007). "Experimental Demonstration of a Compiled Version of Shor's Algorithm with Quantum Entanglement" (PDF). Physical Review Letters. 99 (25): 250505. arXiv:0705.1398. Bibcode:2007PhRvL..99y0505L. doi:10.1103/PhysRevLett.99.250505. hdl:10072/21608. PMID 18233509. S2CID 10010619.
14.
Lucero, Erik; Barends, Rami; Chen, Yu; Kelly, Julian; Mariantoni, Matteo; Megrant, Anthony; O'Malley, Peter; Sank, Daniel; Vainsencher, Amit; Wenner, James; White, Ted; Yin, Yi; Cleland, Andrew N.; Martinis, John M. (2012). "Computing prime factors with a Josephson phase qubit quantum processor". Nature Physics. 8 (10): 719. arXiv:1202.5707. Bibcode:2012NatPh...8..719L. doi:10.1038/nphys2385. S2CID 44055700.
15. Martín-López, Enrique; Laing, Anthony; Lawson, Thomas; Alvarez, Roberto; Zhou, Xiao-Qi; O'Brien, Jeremy L. (12 October 2012). "Experimental realization of Shor's quantum factoring algorithm using qubit recycling". Nature Photonics. 6 (11): 773–776. arXiv:1111.4147. Bibcode:2012NaPho...6..773M. doi:10.1038/nphoton.2012.259. S2CID 46546101.
16. Amico, Mirko; Saleem, Zain H.; Kumph, Muir (2019-07-08). "An Experimental Study of Shor's Factoring Algorithm on IBM Q". Physical Review A. 100 (1): 012305. arXiv:1903.00768. doi:10.1103/PhysRevA.100.012305. ISSN 2469-9926. S2CID 92987546.
17. Karamlou, Amir H.; Simon, William A.; Katabarwa, Amara; Scholten, Travis L.; Peropadre, Borja; Cao, Yudong (2021-10-28). "Analyzing the performance of variational quantum factoring on a superconducting quantum processor". npj Quantum Information. 7 (1): 156. arXiv:2012.07825. Bibcode:2021npjQI...7..156K. doi:10.1038/s41534-021-00478-z. ISSN 2056-6387. S2CID 229156747.
18. "Quantum computing motte-and-baileys". Shtetl-Optimized. 2019-12-28. Retrieved 2021-11-15.
19. Bernstein, Daniel (1998). "Detecting perfect powers in essentially linear time". Mathematics of Computation. 67 (223): 1253–1283. doi:10.1090/S0025-5718-98-00952-1. ISSN 0025-5718.
20. Kitaev, A. Yu (1995-11-20). "Quantum measurements and the Abelian Stabilizer Problem". arXiv:quant-ph/9511026.
21. Markov, Igor L.; Saeedi, Mehdi (2012). "Constant-Optimized Quantum Circuits for Modular Multiplication and Exponentiation". Quantum Information and Computation. 12 (5–6): 361–394. arXiv:1202.6614.
Bibcode:2012arXiv1202.6614M. doi:10.26421/QIC12.5-6-1. S2CID 16595181.
22. Markov, Igor L.; Saeedi, Mehdi (2013). "Faster Quantum Number Factoring via Circuit Synthesis". Phys. Rev. A. 87 (1): 012310. arXiv:1301.3210. Bibcode:2013PhRvA..87a2310M. doi:10.1103/PhysRevA.87.012310. S2CID 2246117.
23. Bernstein, Daniel J.; Heninger, Nadia; Lou, Paul; Valenta, Luke (2017). "Post-quantum RSA" (PDF). International Workshop on Post-Quantum Cryptography. Lecture Notes in Computer Science. 10346: 311–329. doi:10.1007/978-3-319-59879-6_18. ISBN 978-3-319-59878-9. Archived (PDF) from the original on 2017-04-20.

Further reading

• Nielsen, Michael A.; Chuang, Isaac L. (2010). Quantum Computation and Quantum Information, 10th Anniversary Edition. Cambridge University Press. ISBN 9781107002173.
• Phillip Kaye, Raymond Laflamme, Michele Mosca, An Introduction to Quantum Computing, Oxford University Press, 2007, ISBN 0-19-857049-X.
• "Explanation for the man in the street" by Scott Aaronson, "approved" by Peter Shor. (Shor wrote "Great article, Scott! That's the best job of explaining quantum computing to the man on the street that I've seen.") An alternate metaphor for the QFT was presented in one of the comments.

Scott Aaronson suggests the following 12 references as further reading (out of "the 10^105000 quantum algorithm tutorials that are already on the web"):

• Shor, Peter W. (1997). "Polynomial-Time Algorithms for Prime Factorization and Discrete Logarithms on a Quantum Computer". SIAM J. Comput. 26 (5): 1484–1509. arXiv:quant-ph/9508027v2. Bibcode:1999SIAMR..41..303S. doi:10.1137/S0036144598347011. Revised version of the original paper by Peter Shor ("28 pages, LaTeX. This is an expanded version of a paper that appeared in the Proceedings of the 35th Annual Symposium on Foundations of Computer Science, Santa Fe, NM, Nov. 20–22, 1994. Minor revisions made January, 1996").
• Quantum Computing and Shor's Algorithm, Matthew Hayward's Quantum Algorithms Page, 2005-02-17, imsa.edu. LaTeX2HTML version of the original LaTeX document, also available as PDF or postscript document.
• Quantum Computation and Shor's Factoring Algorithm, Ronald de Wolf, CWI and University of Amsterdam, January 12, 1999, 9 page postscript document.
• Shor's Factoring Algorithm, Notes from Lecture 9 of Berkeley CS 294–2, dated 4 Oct 2004, 7 page postscript document.
• Chapter 6 Quantum Computation, 91 page postscript document, Caltech, Preskill, PH229.
• Quantum computation: a tutorial by Samuel L. Braunstein.
• The Quantum States of Shor's Algorithm, by Neal Young, last modified Tue May 21 11:47:38 1996.
• III. Breaking RSA Encryption with a Quantum Computer: Shor's Factoring Algorithm, lecture notes on quantum computation, Cornell University, Physics 481–681, CS 483, Spring 2006, by N. David Mermin. Last revised 2006-03-28, 30 page PDF document.
• Lavor, C.; Manssur, L. R. U.; Portugal, R. (2003). "Shor's Algorithm for Factoring Large Integers". arXiv:quant-ph/0303175.
• Lomonaco, Jr (2000). "Shor's Quantum Factoring Algorithm". arXiv:quant-ph/0010034. This paper is a written version of a one-hour lecture given on Peter Shor's quantum factoring algorithm. 22 pages.
• Chapter 20 Quantum Computation, from Computational Complexity: A Modern Approach, draft of a book dated January 2007, Sanjeev Arora and Boaz Barak, Princeton University. Published as Chapter 10 Quantum Computation of Sanjeev Arora, Boaz Barak, Computational Complexity: A Modern Approach, Cambridge University Press, 2009, ISBN 978-0-521-42426-4.
• A Step Toward Quantum Computing: Entangling 10 Billion Particles, from Discover Magazine, dated January 19, 2011.
• Josef Gruska, Quantum Computing Challenges, also in Mathematics Unlimited: 2001 and Beyond, editors Björn Engquist, Wilfried Schmid, Springer, 2001, ISBN 978-3-540-66913-5.

External links

• Version 1.0.0 of libquantum: contains a C language implementation of Shor's algorithm with their simulated quantum computer library, but the width variable in shor.c should be set to 1 to improve the runtime complexity.
• PBS Infinite Series created two videos explaining the math behind Shor's algorithm, "How to Break Cryptography" and "Hacking at Quantum Speed with Shor's Algorithm".
German Gothic Script

German-speaking countries had a unique script style that evolved alongside the better-known "Fraktur" or "blackletter" Gothic typefaces. Schwabacher typefaces dominated in Germany from about 1480 to 1530, and the style continued in use occasionally until the 20th century. English forms of blackletter have been studied extensively and may be divided into many categories. Here is the entire alphabet in Fraktur (minus the long s and the sharp s ⟨ß⟩), using the AMS Euler Fraktur typeface. Blackletter is sometimes referred to as Old English, but it is not to be confused with the Old English language (or Anglo-Saxon), which predates blackletter by many centuries and was written in the insular script or in Futhorc. For examples of old German Gothic handwriting see the PDF file Handwriting Guide: German Gothic. Blackletter developed from Carolingian as an increasingly literate 12th-century Europe required new books in many different subjects; these books needed to be produced quickly to keep up with demand. In cursiva, descenders are more frequent, especially in the letters ⟨f⟩ and ⟨s⟩, and ascenders are curved and looped rather than vertical (seen especially in the letter ⟨d⟩). Blackletter script should not be confused with either the ancient alphabet of the Gothic language or the sans-serif typefaces that are also sometimes called Gothic.
Textualis, also known as textura or Gothic bookhand, was the most calligraphic form of blackletter, and today is the form most associated with "Gothic". Italian blackletter also is known as rotunda, as it was less angular than those produced by northern printing centers. French textualis was tall and narrow compared to other national forms, and was most fully developed in the late 13th century in Paris. Cancelleresca influenced the development of bastarda in France and secretary hand in England. It continued to be commonly used for the Danish language until 1875,[2] and for German, Estonian and Latvian until the 1940s. In the 13th century there also was an extremely small version of textualis used to write miniature Bibles, known as "pearl script". This German script originated in Germany in the sixteenth century. The University of Oxford borrowed the littera parisiensis in the 13th century and early 14th century, and the littera oxoniensis form is almost indistinguishable from its Parisian counterpart; however, there are a few differences, such as the round final ⟨s⟩ forms, resembling the number ⟨8⟩, rather than the long ⟨s⟩ used in the final position in the Paris script. The German-speaking areas are, however, where blackletter remained in use the longest. It developed in the 14th century as a simplified form of textualis, with influence from the form of textualis as used for writing charters. The left side of the small letter ⟨o⟩ is formed by an angular stroke, the right side by a rounded stroke. Later, Fraktur referred to a printing typeface, normally used in Germany prior to World War II.
Fraktur is a gothic typeface created at the end of the 15th century at the direction of Emperor Maximilian and used well into the 20th century. Textualis was most widely used in France, the Low Countries, England, and Germany. It was originally written with quill pens made from bird feathers; these generally had a broad nib. Schwabacher, a blackletter with more rounded letters, soon became the usual printed typeface, but it was replaced by Fraktur in the early 17th century. Printers of the late 15th and early 16th centuries commonly used blackletter typefaces, but under the influence of Renaissance tastes, Roman typefaces grew in popularity, until by about 1590 most presses had converted to them. English blackletter developed from the form of Caroline minuscule used there after the Norman Conquest, sometimes called "Romanesque minuscule". The letter ⟨a⟩ has a straight back stroke, and the top loop eventually became closed, somewhat resembling the number ⟨8⟩.
Chaucer's works had been printed in blackletter in the late 15th century, but were subsequently more usually printed in Roman type.[8][9] Textualis was first used in the 13th and 14th centuries, and subsequently became more elaborate and decorated, eventually being reserved for liturgical works only. The letters ⟨g⟩, ⟨j⟩, ⟨p⟩, ⟨q⟩, ⟨y⟩, and the hook of ⟨h⟩ have descenders, but no other letters are written below the line. Biting is a common feature in rotunda, but breaking is not. Johannes Gutenberg carved a textualis typeface – including a large number of ligatures and common abbreviations – when he printed his 42-line Bible. This block of characters should be used only for setting mathematical text, as mathematical texts use blackletter symbols contrastively to other letter styles. Its use declined after World War I, but was revived briefly during the Third Reich. The first examples of Gothic script occurred in Italy in the tenth century and in other countries of Western and Central Europe at the end of the 11th century.
Fraktur came into use when Emperor Maximilian I (1493–1519) established a series of books and had a new typeface created specifically for this purpose. This particular entry is the record of the wedding between Franz Benesch of Vechnov and Theresia Haschek of Zlatkov in 1831. The small letter ⟨o⟩ is rounded on both sides, though at the top and at the bottom, the two strokes join in an angle. An updated version of Kurrent called Sütterlin was developed in the early 20th century, and was used and taught in German schools until the government changed it to deutsche Normalschrift. Its use persisted into the nineteenth century for editions of the State Translation of the Bible, but had otherwise become obsolete. Another form of French textualis in this century was the script developed at the University of Paris, littera parisiensis, which also is small in size and designed to be written quickly, not calligraphically. Similarly, the term "Fraktur" or "Gothic" is sometimes applied to all of the blackletter typefaces (known in German as Gebrochene Schrift, "Broken Script"). Textualis formata ("Old English" or "blackletter"), textualis prescissa (or textualis sine pedibus, as it generally lacks feet on its minims), textualis quadrata (or psalterialis) and semi-quadrata, and textualis rotunda are various forms of high-grade formata styles of blackletter. The letters ⟨a⟩, ⟨g⟩ and ⟨s⟩ (at the end of a word) are very similar to their Carolingian forms. Carolingian minuscule was the direct ancestor of blackletter. It was not until 1941 that the government officially began discouraging the use of the old-style Gothic handwriting.
The gothic cursive had been in use throughout much of the medieval ages and had developed into a staggering number of different writing styles. The glyphs in the SMP should only be used for mathematical typesetting, not for ordinary text. A more angular form of bastarda was used in Burgundy, the lettre de forme or lettre bourgouignonne, for books of hours such as the Très Riches Heures of John, Duke of Berry. Words from other languages, especially from Romance languages including Latin, are usually typeset in antiqua instead of blackletter. By the 1300s, it had become the so-called Gothic script, sometimes called Fraktur. Handwritten documents were composed in cursive using a type of script known as blackletter. Old German gothic handwriting and print are very different from the Roman script most English-speaking genealogists use. Letters are tall and narrow, as compared to their Carolingian counterparts. Similarly related is the form of the letter ⟨d⟩ when followed by a letter with a bow; its ascender is then curved to the left. However, not all of these features are found in every example of cursiva, which makes it difficult to determine whether or not a script may be called cursiva at all. Blackletter fonts have letters that are very bold and ornate. Not only were blackletter forms called Gothic script, but any other seemingly barbarian script, such as Visigothic, Beneventan, and Merovingian, were also labeled Gothic.
Sütterlin script: a script, created by the Berlin graphic artist Ludwig Sütterlin (1865–1917), which was taught from 1915 to 1941 in German schools. The small letter ⟨g⟩ has a horizontal stroke at its top that forms crosses with the two downward strokes. Johann Gutenberg used a textualis typeface for his famous Gutenberg Bible in 1455. Instead, they use letterspacing (German Sperrung) for emphasis. Its use was so common that often any blackletter form is called Fraktur in Germany. Littera cursiva currens was used for textbooks and other unimportant books and it had very little standardization in forms. Notoriously difficult to read, the Fraktur form of blackletter has been giving German genealogy researchers fits for centuries. It continued to be used occasionally until the 20th century. The earliest cursive blackletter form is Anglicana, a very round and looped script, which also had a squarer and angular counterpart, Anglicana formata. For Lieftinck, the highest form of textualis was littera textualis formata, used for de luxe manuscripts. In the 19th century, the use of antiqua alongside Fraktur increased, leading to the Antiqua-Fraktur dispute, which lasted until the Nazis abandoned Fraktur in 1941. Horace Walpole wrote in 1781 that "I am too, though a Goth, so modern a Goth that I hate the black letter, and I love Chaucer better in Dryden and Baskerville than in his own language and dress."[10]
A Guide to Writing the Old German "Kurrent" Script, by Margarete Mücke, translated by www.hoonsh.de: you should take into consideration the rightward angled strokes of the letters, which are at an angle of 60 to 70 degrees. German cursiva is similar to the cursive scripts in other areas, but forms of ⟨a⟩, ⟨s⟩ and other letters are more varied; here too, the letter ⟨w⟩ is often used. While an antiqua typeface is usually a compound of roman types and italic types since the 16th-century French typographers, the blackletter typefaces never developed a similar distinction. Schwabacher was a blackletter form that was much used in early German print typefaces. The most common form of Italian rotunda was littera bononiensis, used at the University of Bologna in the 13th century. The Kurrent script, which is commonly known as "the Old German Script", evolved from the gothic cursive handwriting at the beginning of the 16th century. The letter ⟨s⟩ often has a diagonal line connecting its two bows, also somewhat resembling an ⟨8⟩. The origins of the name remain unclear; some assume that a typeface-carver from the village of Schwabach—one who worked externally and who thus became known as the Schwabacher—designed the typeface. Cursiva refers to a very large variety of forms of blackletter; as with modern cursive writing, there is no real standard form. Italian cursive developed in the 13th century from scripts used by notaries. The predominance of Gothic script in books began to be noted in the 12th century in Germany, France, and other countries using Latin. Lines do not necessarily connect with each other, especially in curved letters.
This is a notice from the Nazi times that clearly points out that the Gothic, Black-Face lettering which was used all over Germany, even in school text books, was seen as a real sign of "German-ness". English cursiva began to be used in the 13th century, and soon replaced littera oxoniensis as the standard university script. It developed first in those areas closest to France and then spread to the east and south in the 13th century. Bastarda, the "hybrid" mixture of cursiva and textualis, developed in the 15th century and was used for vernacular texts as well as Latin. Blackletter is also known as Old English or Gothic script. In the 18th century, the pointed quill was adopted for blackletter handwriting. This is in contrast to Carolingian minuscule, a highly legible script which the humanists called littera antiqua ("the ancient letter"), wrongly believing that it was the script used by the ancient Romans.
https://en.wikipedia.org/w/index.php?title=Blackletter&oldid=991052523
Old German Handschrift (handwriting), known as die Kurrentschrift or Kurrent for short in German, but also known simply as die alte deutsche Schrift ("Old German script"), was closely modelled on the handwriting used in das Mittelalter (medieval times). The use of bold text for emphasis is also alien to blackletter typefaces. A short handy reference guide with an alphabet, reading tips, and record samples is found here. Fraktur is a notable script of this type, and sometimes the entire group of blackletter faces is incorrectly referred to as Fraktur. This page was last edited on 28 November 2020, at 00:40. Gothic fonts were used in printed books in Germany right up to the twentieth century! The word derives from Latin fractūra ("a break"), built from fractus, passive participle of frangere ("to break"), the same root as the English word "fracture". Textualis forms developed after 1190 and were used most often until approximately 1300, after which it became used mainly for de luxe manuscripts. It is also called "the German handwriting". However, textualis was rarely used for typefaces after this. The character names use "Fraktur" for the mathematical alphanumeric symbols, while "blackletter" is used for those symbol characters in the letterlike symbols range. Blackletter (sometimes black letter), also known as Gothic script, Gothic minuscule, or Textura, was a script used throughout Western Europe from approximately 1150 until the 17th century. German Textualis is usually very heavy and angular, and there are few characteristic features that are common to all occurrences of the script.
The term Gothic was first used to describe this script in 15th-century Italy, in the midst of the Renaissance, because Renaissance humanists believed this style was barbaric, and Gothic was a synonym for barbaric. The blackletter style is then determined by a font with blackletter glyphs.[6] Like that, single antiqua words or phrases may occur within a blackletter text. When a letter with a bow (in ⟨b⟩, ⟨d⟩, ⟨p⟩, ⟨q⟩) is followed by another letter with a bow (such as ⟨be⟩ or ⟨po⟩), the bows overlap and the letters are joined by a straight line (this is known as "biting"). The writing is a standard form of the earlier and very different chancery writing which was … Below is a full chart of the Gothic handwriting alphabet in Kurrent style. The Donatus-Kalender (also known as Donatus-und-Kalender or D-K) is the name for the metal type design that Gutenberg used in his earliest surviving printed works, dating from the early 1450s. Gothic handwriting was used by clerks and scribes as early as the fifteenth century and predominated in documents produced in Germany, Switzerland, Austria, and the countries of Scandinavia, well into the twentieth century. A list of given names with handwritten examples from records is found here: German given names handwriting. Despite the frequent association of blackletter with German, the script was actually very slow to develop in German-speaking areas.
The more calligraphic form is known as minuscola cancelleresca italiana (or simply cancelleresca, chancery hand), which developed into a book hand, a script used for writing books rather than charters, in the 14th century. It is a Western calligraphy style that was used in Europe from the 1100s to the 1600s. Deciphering early German handwriting can be a challenge. For examples of old German Gothic handwriting see the PDF file Handwriting Guide: German Gothic. It was therefore easier to write quickly on paper in a cursive script. Its large size consumed a lot of manuscript space in a time when writing materials were very costly. Most importantly, all of the works of Martin Luther, leading to the Protestant Reformation, as well as the Apocalypse of Albrecht Dürer (1498), used this typeface. Italian Rotunda also is characterized by unique abbreviations, such as ⟨q⟩ with a line beneath the bow signifying qui, and unusual spellings, such as ⟨x⟩ for ⟨s⟩ (milex rather than miles). The usual form, simply littera textualis, was used for literary works and university texts. An Anglicana bastarda form developed from a mixture of Anglicana and textualis, but by the 16th century, the principal cursive blackletter used in England was the Secretary script, which originated in Italy and came to England by way of France. Flavio Biondo, in Italia Illustrata (1474), wrote that the Germanic Lombards invented this script after they invaded Italy in the 6th century. German has all 26 letters used in the English alphabet, plus a few additional … Secretary script has a somewhat haphazard appearance, and its forms of the letters ⟨a⟩, ⟨g⟩, ⟨r⟩ and ⟨s⟩ are unique, unlike any forms in any other English script.
It is a mixture of textualis and cursiva, developed in the early 15th century. The letters and numbers on this chart are the ones that you will be learning how to write momentarily. MS Gothic is a Japanese font featuring plain strokes similar to sans serif designs, … In the early 20th century, the Sütterlin script was introduced in the schools. A textualis form, commonly known as Gotisch or "Gothic script", was used for general publications from the fifteenth century on, but became restricted to official documents and religious publications during the seventeenth century. Textualis letters are formed by sharp, straight, angular lines, unlike the typically round Carolingian; as a result, there is a high degree of "breaking". Since it was so common, all kinds of blackletter tend to be called Fraktur in German. As early as the 11th century, different forms of Carolingian were already being used, and by the mid-12th century, a clearly distinguishable form, able to be written more quickly to meet the demand for new books, was being used in northeastern France and the Low Countries. The word 'Gothic' derives from the name of the historical Gothic period when such alphabets were most used – basically, the Middle Ages from around 1200-1500. 'Gothic' also suggests Germanic origins, and it is indeed a very Germanic script. Labor-intensive Carolingian, though legible, was unable to effectively keep up.
The lower-case Fraktur alphabet: 𝔞 𝔟 𝔠 𝔡 𝔢 𝔣 𝔤 𝔥 𝔦 𝔧 𝔨 𝔩 𝔪 𝔫 𝔬 𝔭 𝔮 𝔯 𝔰 𝔱 𝔲 𝔳 𝔴 𝔵 𝔶 𝔷. From textualis, it borrowed vertical ascenders, while from cursiva, it borrowed long ⟨f⟩ and ⟨ſ⟩, single-looped ⟨a⟩, and ⟨g⟩ with an open descender (similar to Carolingian forms). It was originally written with quill pens made from bird feathers; these generally had a broad nib. Fonts supporting the range include Code2001, Cambria Math, and Quivira (textura style). For stylized blackletter prose, the normal Latin letters should be used, with font choice or other markup used to indicate blackletter styling. Fraktur is a form of blackletter that became the most common German blackletter typeface by the mid-16th century. Lieftinck's third form, littera textualis currens, was the cursive form of blackletter, extremely difficult to read and used for textual glosses and less important books. Lieftinck also divided cursiva into three styles: littera cursiva formata was the most legible and calligraphic style. French cursiva was used from the 13th to the 16th century, when it became highly looped, messy, and slanted.
When using that method, blackletter ligatures like ⟨ch⟩, ⟨ck⟩, ⟨tz⟩ or ⟨ſt⟩ remain together without additional letterspacing (⟨ſt⟩ is dissolved, though). The name is taken from two works: the Ars grammatica of Aelius Donatus, a Latin grammar, and the Kalender (calendar).[5] It is a form of textura. "German script is for foreign-based Germans an indispensable defence against the threat of becoming less German". It was in fact invented in the reign of Charlemagne, although only used significantly after that era, and actually formed the basis for the later development of blackletter.[3] Hybrida is also called bastarda (especially in France), and as its name suggests, is a hybrid form of the script. A hybrida form, which was basically cursiva with fewer looped letters and with similar square proportions as textualis, was used in the 15th and 16th centuries. The upper-case Fraktur alphabet: 𝔄 𝔅 ℭ 𝔇 𝔈 𝔉 𝔊 ℌ ℑ 𝔍 𝔎 𝔏 𝔐 𝔑 𝔒 𝔓 𝔔 ℜ 𝔖 𝔗 𝔘 𝔙 𝔚 𝔛 𝔜 ℨ. This does not apply, however, to loanwords that have been incorporated into the language. The capital letters are composed of rounded ⟨c⟩-shaped or ⟨s⟩-shaped strokes.
At the top and at the bottom, both strokes join in an angle. One common feature is the use of the letter ⟨w⟩ for Latin ⟨vu⟩ or ⟨uu⟩. Littera cursiva textualis (or libraria) was the usual form, used for writing standard books, and it generally was written with a larger pen, leading to larger letters.[7] However, blackletter was considered to be more readily legible (especially by the less literate classes of society), and it therefore remained in use throughout the 17th century and into the 18th for documents requiring widespread dissemination, such as proclamations and Acts of Parliament, and for literature aimed at the common people, such as ballads, chivalric romances, and jokebooks. According to Dutch scholar Gerard Lieftinck, the pinnacle of blackletter use was reached in the 14th and 15th centuries. Black letter, also called Gothic script or Old English script, is, in calligraphy, a style of alphabet that was used for manuscript books and documents throughout Europe, especially in German-speaking countries, from the end of the 12th century to the 20th century. Two major styles emerged corresponding to the two handwriting styles: Gothic, with pointed, heavy-bodied letters, and Roman, with lighter, more simple letters. The letter "s" can be quite tricky, as there are at least three different ways to … Other small letters have analogous forms.
New universities were founded, each producing books for business, law, grammar, history and other pursuits, not solely religious works, for which earlier scripts typically had been used. The capital letter ⟨H⟩ has a peculiar form somewhat reminiscent of the small letter ⟨h⟩. Handwriting evolved over time in German-speaking countries. The formata form was used until the 15th century and also was used to write vernacular texts. The gothic cursive had been in use throughout much of the Middle Ages and had developed into a staggering number of different writing styles. For normal text writing, the ordinary Latin code points are used. Before the 1940s, most records in German-speaking areas (as well as surname books, newspapers, journals and gazetteers) used a Gothic font called Fraktur. Ironically, Hitler terminated its use in 1941 because he thought it looked "un-German" and was "of Jewish origin". A Guide to Writing the old German "Kurrent" Script, by Margarete Mücke (translated by www.hoonsh.de), advises taking into consideration the rightward-angled strokes of the letters, at an angle of 60 to 70 degrees. Historical research, including genealogical research, in original documents is impossible without the ability to interpret Gothic handwriting.
This particular entry is the record of the wedding between Franz Benesch of Vechnov & Theresia Haschek of Zlatkov in 1831. Mathematical blackletter characters are separately encoded in Unicode in the Mathematical Alphanumeric Symbols range at U+1D504–1D537 and U+1D56C–1D59F (bold), except for individual letters already encoded in the Letterlike Symbols range (ℭ, ℌ, ℑ, ℜ, ℨ, plus long s at U+017F). Johann Bämler, a printer from Augsburg, probably first used it as early as 1472. Cursiva developed partly because of the introduction of paper, which was smoother than parchment.
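The Unicode encoding just described can be illustrated with a small converter: the mathematical Fraktur ranges are contiguous except for the five capitals that were encoded earlier in the Letterlike Symbols block and are therefore gaps in the mathematical range. This is an illustrative sketch, not part of any standard library:

```python
# Capitals already encoded in the Letterlike Symbols block; the
# corresponding slots in the Mathematical Fraktur range are reserved.
LETTERLIKE = {'C': '\u212D', 'H': '\u210C', 'I': '\u2111',
              'R': '\u211C', 'Z': '\u2128'}

def to_fraktur(text: str) -> str:
    """Map ASCII letters to Mathematical Fraktur code points
    (capitals from U+1D504, lowercase from U+1D51E)."""
    out = []
    for ch in text:
        if ch in LETTERLIKE:
            out.append(LETTERLIKE[ch])
        elif 'A' <= ch <= 'Z':
            out.append(chr(0x1D504 + ord(ch) - ord('A')))
        elif 'a' <= ch <= 'z':
            out.append(chr(0x1D51E + ord(ch) - ord('a')))
        else:
            out.append(ch)  # digits, spaces, punctuation pass through
    return ''.join(out)

print(to_fraktur('Gothic'))  # prints the word in Fraktur letterforms
```

Note that, as the text says, this mapping is intended for mathematical notation; for stylized blackletter prose the ordinary Latin code points with a blackletter font should be used instead.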
The Egyptian Heart Journal

The optimal diagnostic strategies for patient with coronary artery diseases and stable chest pain syndrome: a cost-effectiveness analysis

Parvin Jafari, Reza Goudarzi (ORCID: orcid.org/0000-0003-4399-3498), Mohammadreza Amiresmaeili & Hamidreza Rashidinejad

The Egyptian Heart Journal volume 72, Article number: 82 (2020)

Abstract

Background: Numerous invasive and noninvasive diagnostic tests with different cost and effectiveness exist for the detection of coronary artery disease. This diversity leads to unnecessary utilization of health services. For this reason, this study focused on the cost-effectiveness analysis of diagnostic strategies for coronary artery disease from the perspective of the health care system with a 1-year time horizon.

Results: Incremental cost-effectiveness ratios of all strategies were less than the threshold except for the electrocardiography-computed tomography angiography-coronary angiography strategy, and the cost of the cardiac magnetic resonance imaging-based strategy was higher than the cost of the other strategies. Also, the number of correct diagnoses in the electrocardiography-coronary angiography strategy was higher than in the other strategies, and its ICER was 15.197 dollars per additional correct diagnosis. Moreover, the sensitivity analysis found that the probability of doing MRI and the sensitivity of exercise electrocardiography had an impact on the results.

Conclusions: The most cost-effective strategy for acute patients is the ECG-CA strategy, and for chronic patients, the most cost-effective strategies are electrocardiography-single photon emission computed tomography-coronary angiography and electrocardiography-exercise electrocardiography-coronary angiography. Applying these strategies in the same clinical settings may lead to a better utilization of resources.
Background

In the last decade, cardiovascular diseases have become one of the factors which threaten human health [1]. One of the most common cardiovascular diseases is coronary artery disease, which occurs due to the accumulation of masses of lipids such as cholesterol and fibrous tissue in the form of plaque on the artery walls, and which impedes blood flow in the vessels [2]. The economic burden of coronary artery disease on the health care system is significant. For example, in Australia in 2014, coronary artery disease was responsible for 27% of health care spending [3]. Also, in 2016, a UK study reported £62,210 and £35,549 of cost attributable to coronary heart disease for low- and high-risk patients respectively [4]. An Iranian study in 2017 indicated that the cost of this disease was approximately between 4715 and 4908 billion dollars [5]. Every day, a large number of people with chest pain refer to heart centers, with nearly half of them having no real cardiac problem. Hence, correct diagnosis and appropriate treatment of these patients pose challenges not only for physicians and hospitals but also for governments, health-insurance companies, and health maintenance organizations [6]. According to available guidelines, diagnostic tests for coronary artery disease include the following: electrocardiography (ECG), echocardiography (ECHO), exercise electrocardiography (Ex-ECG), computed tomography coronary angiography (CTA), coronary angiography (CA), cardiac single-photon emission computed tomography (SPECT), stress cardiac magnetic resonance imaging (C-MRI), exercise echocardiography (EX-ECHO), and stress echocardiography (stress ECHO) [7,8,9]. These tests, having different cost and effectiveness, might lead to unnecessary utilization of health services and impose an enormous economic burden on families, health care systems, and government. For this reason, optimal allocation of health resources has become an important issue for the health care system [10].
Economic evaluation is one of the explicit methods for resource allocation. Economic evaluations are widely employed in health policies, including the evaluation of preventive and diagnostic programs, interventions, treatment, and decision-making. The most commonly used form of economic evaluation is the cost-effectiveness analysis [11]. Because there is no information about the cost-effectiveness of these diagnostic tests in Iran, the aim of this study was to evaluate the cost-effectiveness of seven diagnostic tests, including ECG, ECHO, Ex-ECG, CTA, CA, SPECT, and C-MRI, that are the most common in Iran.

Methods

This study is a cost-effectiveness analysis from the viewpoint of the health care system over a 1-year time horizon. For the purpose of the present study, nine diagnostic strategies were selected. Relevant data were derived from the medical records (2017–2018) of two Iranian hospitals in 2019. Each of the strategies comprised two to four diagnostic tests out of the seven available tests (ECG, ECHO, EX-ECG, CTA, C-MRI, CA, and SPECT) (see Table 1). All of the strategies started with electrocardiography; however, each next step of a strategy depended on the result of its predecessor, i.e., if a positive or uncertain result is achieved, the strategy continues. For example, for a patient with chest pain, the ECG test is done. If the initial test is positive or uncertain, then ECHO is performed. If the ECHO test is also positive, the patient would be subjected to CA.

Table 1 Diagnostic test in each strategy

A decision tree was used for modeling, which consisted of nine branches, each one representing a unique strategy. All strategies consisted of several sub-branches, and for each of them, costs, effectiveness, and probabilities were entered into the model. Since all patients underwent the electrocardiographic test on arrival at the hospital, it was not included in the modeling, but its cost was calculated for all of the strategies (see Fig. 1).
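The branching rule just described (continue down a strategy only while results are positive or uncertain, stop on a negative result) can be sketched as follows. Test names follow Table 1; the result lookup is purely hypothetical:

```python
def run_strategy(tests, result_of):
    """Apply the tests of one strategy in order.  The workup continues
    to the next test on a 'positive' or 'uncertain' result and stops on
    a 'negative' one, mirroring the branching of the decision tree."""
    performed = []
    for test in tests:
        performed.append(test)
        if result_of(test) == 'negative':
            break  # the remaining tests of the strategy are not done
    return performed

# Strategy 5 of Table 1; ECG is performed first in every strategy.
strategy_5 = ['ECG', 'ECHO', 'Ex-ECG', 'CA']

# Hypothetical results: ECG is uncertain, so ECHO is done; ECHO is
# negative, so the workup stops before Ex-ECG and CA.
results = {'ECG': 'uncertain', 'ECHO': 'negative'}
print(run_strategy(strategy_5, results.get))  # → ['ECG', 'ECHO']
```

If every result came back positive or uncertain, all tests of the strategy would be performed, ending with CA.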
Fig. 1 Decision analytic tree for diagnostic strategies: CA (invasive coronary angiography), C-MRI (cardiac magnetic resonance imaging), CTA (computed tomography coronary angiography), ECHO (echocardiography), Ex-ECG (exercise electrocardiography), SPECT (single-photon emission computed tomography), ECG (electrocardiography)

Model parameters included test sensitivity, real or false positive probability, cost, and effectiveness. Values of probabilities, costs, and effectiveness were calculated based on available data, whereas sensitivities were extracted from previous studies (see Table 2).

Table 2 Input data for decision tree model

All costs associated with inpatient and outpatient services were considered from the perspective of the health care system. These included direct medical and non-medical costs. Direct medical costs encompassed the cost of labor, laboratory, pathology, pharmaceuticals, medical goods and equipment, hospitalization, and diagnostic imaging. Direct non-medical costs included the cost of capital depreciation, energy consumption, and administrative affairs. Direct medical costs were collected by referring to the medical records department using the patient records, and direct non-medical costs were collected from the hospital accounting department. Finally, these costs were calculated for each method and strategy separately. Effectiveness was measured by the number of cases who were correctly diagnosed, because angiography is considered a gold standard with 100% sensitivity [14]. Given that all strategies ultimately end in angiography, a positive angiographic result showed that the person had been correctly diagnosed, and a negative one indicated that the patient had not been correctly diagnosed.
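With CA treated as a perfect gold standard and effectiveness defined as the probability of an angiography-confirmed diagnosis, the expected cost and effectiveness of a single test-then-CA sub-branch can be rolled back as in the sketch below. All numbers are illustrative assumptions, not the study's Table 2 inputs:

```python
def expected_outcomes(prevalence, sens, spec, cost_test, cost_ca):
    """Roll back a two-step branch (screening test -> CA) of the tree:
    expected cost per patient and the probability of a confirmed
    (angiography-positive) diagnosis, with CA as a perfect gold standard."""
    p_true_pos = prevalence * sens                # diseased, test positive -> CA confirms
    p_false_pos = (1 - prevalence) * (1 - spec)   # healthy, test positive -> CA negative
    p_to_ca = p_true_pos + p_false_pos            # only test-positives proceed to CA
    expected_cost = cost_test + p_to_ca * cost_ca
    return expected_cost, p_true_pos

# Hypothetical inputs: 50% disease prevalence, a screening test with
# 90% sensitivity and 80% specificity, $30 test cost, $200 CA cost.
cost, eff = expected_outcomes(prevalence=0.5, sens=0.9, spec=0.8,
                              cost_test=30.0, cost_ca=200.0)
# cost ≈ 140.0 (55% of patients go on to CA); eff = 0.45
```

In the full model each strategy chains several such test nodes, but the rollback principle (probability-weighted costs, effectiveness counted only on the confirmed-positive path) is the same.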
Cost-effectiveness analysis

The incremental cost-effectiveness ratio (ICER) was calculated with the following formula [15]:

\( \mathrm{ICER} = \frac{C_n - C_c}{E_n - E_c} \)

in which \(C_n\) = cost of the new intervention, \(E_n\) = effect of the new intervention, \(C_c\) = cost of the current intervention, and \(E_c\) = effect of the current intervention.

According to previous studies [12, 14, 16], clinical guidelines, and expert opinions, we found that strategies 1 and 2 can be used to diagnose cases with acute coronary syndrome, while strategies 3–9 are helpful for diagnosing cases with chronic coronary syndrome; hence, the diagnostic strategies were analyzed in three categories: total, acute, and chronic. An expert panel made up of cardiologists, economists, and policymakers suggested the cost of the sixth strategy as the baseline, with its maximum cost to be regarded as the threshold. Therefore, the threshold of the study was set at $2600 per correct diagnosis. All data analysis was performed using a decision tree model in TreeAge Pro 2011. To test the robustness of the model, the impact of uncertain parameters such as costs, effectiveness, sensitivities, and probabilities on the results was assessed. The parameters were analyzed by a tornado diagram (Fig. 2); finally, considering the output of the tornado diagram, the parameters that had the most effect on the model were analyzed by one-way and two-way sensitivity analysis. Also, probabilistic sensitivity analysis with Monte Carlo simulation was performed, modeling cost parameters with a gamma distribution and effectiveness parameters with a beta distribution.
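The ICER formula and the threshold rule translate directly into code; the comparator costs and effects below are hypothetical, for illustration only:

```python
THRESHOLD = 2600.0  # dollars per additional correct diagnosis (study threshold)

def icer(cost_new, eff_new, cost_cur, eff_cur):
    """Incremental cost-effectiveness ratio (Cn - Cc) / (En - Ec):
    extra dollars spent per additional correct diagnosis."""
    return (cost_new - cost_cur) / (eff_new - eff_cur)

# Hypothetical comparison of a new strategy against the current one:
r = icer(cost_new=220.0, eff_new=0.90, cost_cur=48.0, eff_cur=0.60)
print(round(r, 2), r <= THRESHOLD)  # → 573.33 True
```

A strategy whose ICER against the current comparator exceeds the threshold is judged not acceptable, which is how the CTA-based strategy is ruled out in the results.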
Fig. 2 Tornado diagram: P (probability), CA (invasive coronary angiography), C-MRI (cardiac magnetic resonance imaging), CTA (computed tomography coronary angiography), ECHO (echocardiography), Ex-ECG (exercise electrocardiography), SPECT (single-photon emission computed tomography), ECG (electrocardiography)

Base case results

Analysis indicated that the ECG-ECHO-EXECG-CA strategy, with a cost of $48.183 and 0.003% correct diagnosis, had the minimum cost and effectiveness, so it was chosen as the current strategy. The ECG-CA strategy, with 93.899% correct diagnoses, is the most effective strategy. The ECG-CMRI-CA strategy had the highest cost. The ICER of the ECG-CTA-CA strategy vs strategy 5 is 94.450 dollars per additional case, which is above the threshold and is not acceptable. The ICER of the ECG-CA strategy is 15.197 dollars per case, and it is the most cost-effective strategy. The other information is available in Table 3.

Table 3 Base case cost-effectiveness analysis

All strategies were located in the northeast quadrant of the cost-effectiveness plane. The eighth (ECG-CTA-CA) strategy has a higher cost, and it is placed above the threshold line and is therefore dominated.

Sensitivity results

All model parameters were considered in the tornado analysis, but the diagram covered only the variables which affected the results. According to the diagram, the probabilities of C-MRI, CA, and EX-ECG, and the sensitivities of C-MRI and EX-ECG, have the most impact on the results of the model. One-way sensitivity analysis indicated that the probability of MRI and the sensitivity of EXECG influenced the ICERs; two-way sensitivity analysis showed that by increasing the probability of MRI, the ninth strategy would become more cost-effective, and by increasing the sensitivity of EXECG, the sixth strategy would become more cost-effective (Fig. 3).

Fig. 3 Two-way sensitivity analysis between the sensitivity of EX-ECG and the probability of C-MRI

The cost-effectiveness acceptability curve (Fig.
4) with 1000 iterations showed that the cost-effectiveness probability of strategy 5 (ECG-ECHO-EXECG-CA) under a $500 threshold is 100%, whereas for willingness-to-pay values higher than the threshold, the cost-effectiveness probabilities of strategy 2 (ECG-CA), strategy 3 (ECG-SPECT-CA), and strategy 5 are the highest.

Fig. 4 Cost-effectiveness acceptability curve

Discussion

This study was done in two Iranian hospitals during 2017–2018, to evaluate and compare the cost and effectiveness of nine diagnostic strategies for coronary artery disease with chest pain. Based on the results, the SPECT test had the highest cost per case, followed by CTA, MRI, ECHO, EX-ECG, and ECG, respectively. Similarly, Boldt et al. [17] and Min et al. [16] showed that the highest cost belongs to SPECT. Zacharias et al. (2015) and Zacharias et al. (2016) indicated that the cost of ECHO was less than the cost of EXECG, which contradicts the present study [18, 19]. The MRI-based strategy, with a cost of $3450.017, was more expensive than the others, followed by the SPECT-based strategy and the CTA-based strategy respectively. Likewise, Min et al. showed that a CTA-based strategy was costlier than an EXECG-based strategy. Bertoldi et al.'s study indicated that cardiac MRI-based strategies and SPECT-based strategies had the highest costs and can be used depending on the threshold; the study by Walker et al. showed that the cost of the ninth strategy (MRI), which was $18,284, is greater than the cost of the SPECT-based strategy [12, 14, 20]. Moschetti et al., Thom et al., and Boldt et al. also showed that C-MRI is costly but can be used as a good option to diagnose people with a high probability of coronary artery disease and to reduce angiography [17, 21, 22]. In contrast, Min et al. [20] showed that the ECG-CA strategy, at $14,003, is more expensive than other strategies, and the study by Genders et al. [23] indicated the ECG-CTA-CA strategy was less expensive.
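An acceptability curve of the kind shown in Fig. 4 can be generated in outline from the probabilistic draws: for each willingness-to-pay value, count how often each strategy has the highest net monetary benefit across the Monte Carlo iterations. The strategies' distribution parameters below are hypothetical stand-ins, not fitted to the study's data:

```python
import random

def ceac(draws, wtp_grid):
    """Cost-effectiveness acceptability curve: for each willingness-to-pay
    value, the share of Monte Carlo iterations in which each strategy has
    the highest net monetary benefit (wtp * effect - cost).
    `draws[name]` is a list of simulated (cost, effect) pairs."""
    n = len(next(iter(draws.values())))
    curve = {}
    for wtp in wtp_grid:
        wins = {name: 0 for name in draws}
        for i in range(n):
            best = max(draws, key=lambda s: wtp * draws[s][i][1] - draws[s][i][0])
            wins[best] += 1
        curve[wtp] = {name: wins[name] / n for name in draws}
    return curve

random.seed(0)
N = 1000  # iterations, as in the study
# Hypothetical parameter draws: gamma-distributed costs and
# beta-distributed probabilities of a correct diagnosis.
draws = {
    'ECG-CA': [(random.gammavariate(9.0, 25.0), random.betavariate(94, 6))
               for _ in range(N)],
    'ECG-ECHO-EXECG-CA': [(random.gammavariate(4.0, 12.0), random.betavariate(3, 97))
                          for _ in range(N)],
}
curve = ceac(draws, wtp_grid=[100, 500, 2600])
```

At low willingness-to-pay the cheap, low-effectiveness strategy tends to win; as the threshold rises, the more effective strategy takes over — the same crossover behaviour the study reports between strategy 5 and strategies 2 and 3.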
According to the results, it is clear that the costs of tests and diagnostic strategies in Iran are lower than the costs in other countries, because of the different types of medical insurance that the government provides for people, and because the majority of hospital staff have low salaries. In this study, the effectiveness of the second strategy (ECG-CA) was 93.899%, which is higher than the other strategies, followed by the MRI-based strategy and the SPECT-based strategy. Similarly, Thom et al. showed that the rates of correct diagnosis by MRI and ECHO were 80% and 75% respectively [22]. Hamilton et al. showed that the rates of correct diagnosis by CTA and EXECG were 26% and 51% respectively [24]. Similar to the findings of the current study, Sharples et al. also indicated that the SPECT-based strategy had 83% correct diagnosis [25]. To sum up, for acute patients' strategies, the cost and effectiveness of the second strategy (ECG-CA) were significantly higher than those of the ECG-ECHO-CA strategy. The ICERs indicate that the second strategy generates $12.071 more cost per correct diagnosis. Therefore, the second strategy is more cost-effective for patients with acute coronary artery disease. However, the findings of the cost-effectiveness analysis of chronic patients' strategies suggested that the ECG-ECHO-SPECT-CA and ECG-EX ECG-SPECT-CA strategies were not cost-effective and therefore not acceptable because of their higher ICER and ACER. Strategies 3 (ECG-SPECT-CA) and 6 (ECG-EXECG-CA) had an appropriate ICER. Based on the results, it can be concluded that strategies 3 and 6 are more appropriate for people with lower risk, and they emerged as the most effective strategies. The American Heart Association's latest guidelines for the diagnosis and management of angina made more traditional recommendations for EXECG as the best first line compared to ECHO and SPECT or CTA [12]. In this study, the ECG-CTA-CA strategy had a high cost.
Because in Iran the threshold for a correct diagnosis is low, this strategy is not cost-effective, although Min et al.'s study indicated that the eighth strategy, even with an ICER of $17,516 per patient, is the most cost-effective, and the studies of Priest et al., Joseph et al., Hamilton et al., and Min et al. also showed that the ECG-CTA-CA strategy is the most cost-effective strategy [16, 20, 24, 26, 27]. A possible limitation of this study was that we did not include all diagnostic strategies and only analyzed strategies which are more common in Iran. Another limitation of the present study is the fact that the test sensitivities were extracted from other studies, which may differ from the real ones. This study was carried out from an economic aspect, so it might have different results according to the patient's situation. Given that the study did not assess clinical outcomes, three important concepts are lacking: the quality-adjusted life year (QALY) assessment, the standard gamble (SG), and the time-trade-off (TTO) concept. Despite these limitations, to our knowledge, this is the first study to evaluate the cost-effectiveness of coronary artery disease diagnostic tests in Iran, and this research has also analyzed the diagnostic strategies for coronary artery disease in general as well as in the two groups of acute and chronic patient strategies.

Conclusions

This study indicated that all strategies except the CTA-based strategy are cost-effective, but the ECG-CA strategy is the most cost-effective strategy for acute patients. For chronic patients, the ECG-SPECT-CA and ECG-EX ECG-CA strategies are the best choices. Due to the limited resources in the health care system, applying these strategies to patients in the same clinical setting may lead to a better utilization of resources. Strategy 9 (ECG-CMRI-CA) at a high threshold may lead to early diagnosis of the disease and thereby save resources.
To summarize, it is recommended to consider economic issues as well as clinical issues when choosing diagnostic strategies, and under the same conditions, the cost-effectiveness of the strategies should be the basis of choice. Due to the difference in cost-effectiveness of diagnostic strategies in Iran compared with other countries, these results should be included in developing local clinical guidelines.

Availability of data and materials

The datasets used and/or analyzed during the current study are available from the corresponding author on reasonable request.

Abbreviations

ICER: Incremental cost-effectiveness ratio
ACER: Average cost-effectiveness ratio
ECG: Electrocardiography
EX-ECG: Exercise electrocardiography
SPECT: Single-photon emission computed tomography
ECHO: Echocardiography
CTA: Computed tomography coronary angiography
CMRI: Cardiac magnetic resonance imaging
CA: Invasive coronary angiography
QALY: Quality-adjusted life years
SG: Standard gamble
TTO: Time-trade off

References

Ramezani Y, Mobasheri M, Moosavi SG et al (2011) Exposure rate of cardiovascular risk factors among clients of health-care clinics in Kashan, Autumn 2010. J Shahrekord Univ Med Sci 13(2):76–82
Salehi N, Nilchi AR (2015) Automatic diagnosis of coronary arteries in heart angiogram images using tree tracking of three-dimensional structures. Mashhad Ferdowsi University, Mashhad, pp 139–144
McCreanor V, Graves N, Barnett A et al (2018) A systematic review and critical analysis of cost-effectiveness studies for coronary artery disease treatment. Research 7:77
Asaria M, Walker S, Palmer S et al (2016) Using electronic health records to predict costs and outcomes in stable coronary artery disease. Heart. https://doi.org/10.1136/heartjnl-2015-308850
Raghfar H, Sargazi N, Mehraban S et al (2018) The economic burden of coronary heart disease in Iran: a bottom-up approach in 2014. J Ardabil Univ Med Sci 18(3):341–356
Bassan R (2002) Chest pain units: a modern way of managing patients with chest pain in the emergency department.
Arq Bras Cardiol 79:203–209
Joseph J, Velasco A, Hage FG (2018) Guidelines in review: comparison of ESC and ACC/AHA guidelines for the diagnosis and management of patients with stable coronary artery disease. J Nucl Cardiol 25(2):509–515
Goodacre S, Thokala P, Carroll C et al (2013) Systematic review, meta-analysis and economic modelling of diagnostic strategies for suspected acute coronary syndrome. Health Technol Assess 17(1):v–vi, 1–188
Montalescot G, Sechtem U, Achenbach S et al (2013) 2013 ESC guidelines on the management of stable coronary artery disease: the Task Force on the management of stable coronary artery disease of the European Society of Cardiology. Eur Heart J 34(38):2949–3003
Akbari Sari A, Gholami M (2018) Study of the relationship between health transformation plan and amount of Cathlab room procedures in three hospitals, Tehran, Iran. Res Med 42(3):137–143
Husereau D, Drummond M, Petrou S et al (2013) Consolidated health economic evaluation reporting standards (CHEERS) statement. Cost Effect Resour Allocation 11(1):6
Bertoldi EG, Stella SF, Rohde LE et al (2016) Long-term cost-effectiveness of diagnostic tests for assessing stable chest pain: modeled analysis of anatomical and functional strategies. Clin Cardiol 39(5):249–256
Dresden S, Mitchell P, Rahimi L et al (2014) Right ventricular dilatation on bedside echocardiography performed by emergency physicians aids in the diagnosis of pulmonary embolism. Ann Emerg Med 63(1):16–24
Walker S, Girardin F, McKenna C et al (2013) Cost-effectiveness of cardiovascular magnetic resonance in the diagnosis of coronary heart disease: an economic evaluation using data from the CE-MARC study. Heart. https://doi.org/10.1136/heartjnl-2013-303624
Fox Rushby J (2005) Economic evaluation.
McGraw-Hill Education (UK), England Min JK, Gilmore A, Budoff MJ et al (2010) Cost-effectiveness of coronary CT angiography versus myocardial perfusion SPECT for evaluation of patients with chest pain and no known coronary artery disease. Radiology. 254(3):801–808 Boldt J, Leber AW, Bonaventura K et al (2013) Cost-effectiveness of cardiovascular magnetic resonance and single-photon emission computed tomography for diagnosis of coronary artery disease in Germany. J Cardiovasc Magn Reson 15(1):30 Zacharias K, Ahmadvazir S, Ahmed A et al (2015) Relative diagnostic, prognostic and economic value of stress echocardiography versus exercise electrocardiography as initial investigation for the detection of coronary artery disease in patients with new onset suspected angina. IJC Heart Vasc 7:124–130 Zacharias K, Ahmed A, Shah BN et al (2016) Relative clinical and economic impact of exercise echocardiography vs. exercise electrocardiography, as first line investigation in patients without known coronary artery disease and new stable angina: a randomized prospective study. Eur Heart J Cardiovasc Imaging 18(2):195–202 Min JK, Gilmore A, Jones EC et al (2017) Cost-effectiveness of diagnostic evaluation strategies for individuals with stable chest pain syndrome and suspected coronary artery disease. Clin Imaging 43:97–105 Moschetti K, Muzzarelli S, Pinget C et al (2012) Cost evaluation of cardiovascular magnetic resonance versus coronary angiography for the diagnostic work-up of coronary artery disease: application of the European Cardiovascular Magnetic Resonance registry data to the German, United Kingdom, Swiss, and United States health care systems. 
J Cardiovasc Magn Reson 14(1):35 Thom H, West NE, Hughes V et al (2014) Cost-effectiveness of initial stress cardiovascular MR, stress SPECT or stress echocardiography as a gate-keeper test, compared with upfront invasive coronary angiography in the investigation and management of patients with stable chest pain: mid-term outcomes from the CECaT randomised controlled trial. BMJ Open 4(2):e003419 Genders TS, Petersen SE, Pugliese F et al (2015) The optimal imaging strategy for patients with stable chest pain a cost-effectiveness analysis. Ann Intern Med 162(7):474–484 Hamilton-Craig C, Fifoot A, Hansen M et al (2014) Diagnostic performance and cost of CT angiography versus stress ECG—a randomized prospective study of suspected acute coronary syndrome chest pain in the emergency department (CT-COMPARE). Int J Cardiol 177(3):867–873 Sharples L, Hughes V, Crean A et al (2007) Cost-effectiveness of functional cardiac testing in the diagnosis and management of coronary artery disease: a randomised controlled trial. The CECaT trial. Health Technol Assess 11(49):1-115. Ladapo JA, Jaffer FA, Hoffmann U et al (2009) Clinical outcomes and cost-effectiveness of coronary computed tomography angiography in the evaluation of patients with chest pain. J Am Coll Cardiol 54(25):2409–2422 Priest VL, Scuffham PA, Hachamovitch R et al (2011) Cost-effectiveness of coronary computed tomography and cardiac stress imaging in the emergency department: a decision analytic model comparing diagnostic strategies for chest pain in patients at low risk of acute coronary syndromes. JACC Cardiovasc Imaging 4(5):549–556 Special thanks to all hospitals staffs that providing these data for this study. 
Social Determinants of Health Research Center, Institute for Futures Studies in Health, Kerman University of Medical Sciences, Kerman, Iran
Parvin Jafari
Health Services Management Research Center, Institute for Futures Studies in Health, Kerman University of Medical Sciences, Kerman, Iran
Reza Goudarzi
Department of Health Management and Economics, Faculty of Management and Medical Informatics, Kerman University of Medical Sciences, Kerman, Iran
Mohammadreza Amiresmaeili
Cardiovascular Research Center, Institute of Basic and Clinical Physiology Sciences, Kerman University of Medical Sciences, Kerman, Iran
Hamidreza Rashidinejad

PJ: data collection, data analysis, and providing the draft version of the paper. RG: idea conception, data analysis, and editing the draft version of the paper. MA: data analysis and editing the draft version of the paper. HR: data analysis and editing the draft version of the paper. All authors read and approved the final manuscript.

Correspondence to Reza Goudarzi.

Evaluated by: Kerman University of Medical Sciences; approval date: May 25, 2019; reference number: IR.KMU.REC.1398.117. Consent to participate is not applicable. We have no conflict of interest.

Jafari, P., Goudarzi, R., Amiresmaeili, M. et al. The optimal diagnostic strategies for patient with coronary artery diseases and stable chest pain syndrome: a cost-effectiveness analysis. Egypt Heart J 72, 82 (2020). https://doi.org/10.1186/s43044-020-00111-y

Accepted: 20 October 2020
A normal divided by the $\sqrt{\chi^2(s)/s}$ gives you a t-distribution — proof

Let $Z \sim N(0,1)$ and $W \sim \chi^2(s)$. If $Z$ and $W$ are independently distributed, then the variable $Y = \frac{Z}{\sqrt{W/s}}$ follows a $t$ distribution with $s$ degrees of freedom. I am looking for a proof of this fact; a reference is good enough if you do not want to write down the complete argument.

probability distributions references

Monolite

$\begingroup$ This is demonstrated formally at stats.stackexchange.com/questions/52906: the ratio, when written as an integral, is seen to be a mixture of Gaussians, and that demonstration shows that the mixture is a t distribution. $\endgroup$ – whuber♦ May 13 '15 at 13:22

$\begingroup$ In some textbooks this is the definition of a t-distribution, so you do not need to prove it. How to derive the pdf given such a definition is, however, a valid question. $\endgroup$ – mpiktas May 15 '15 at 8:04

Let $Y$ be a chi-square random variable with $n$ degrees of freedom. Then the square root of $Y$, $\sqrt Y\equiv \hat Y$, is distributed as a chi distribution with $n$ degrees of freedom, which has density $$ f_{\hat Y}(\hat y) = \frac {2^{1-\frac n2}}{\Gamma\left(\frac {n}{2}\right)} \hat y^{n-1} \exp\Big \{{-\frac {\hat y^2}{2}} \Big\} \tag{1}$$ Define $X \equiv \frac {1}{\sqrt n}\hat Y$. Then $ \frac {\partial \hat Y}{\partial X} = \sqrt n$, and by the change-of-variable formula we have that $$ f_{X}(x) = f_{\hat Y}(\sqrt nx)\Big |\frac {\partial \hat Y}{\partial X} \Big| = \frac {2^{1-\frac n2}}{\Gamma\left(\frac {n}{2}\right)} (\sqrt nx)^{n-1} \exp\Big \{{-\frac {(\sqrt nx)^2}{2}} \Big\}\sqrt n $$ $$=\frac {2^{1-\frac n2}}{\Gamma\left(\frac {n}{2}\right)} n^{\frac n2}x^{n-1} \exp\Big \{{-\frac {n}{2}x^2} \Big\} \tag{2}$$ Let $Z$ be a standard normal random variable, independent of the previous ones, and define the random variable $$T = \frac{Z}{\sqrt{Y/n}} = \frac ZX.$$
By the standard formula for the density function of the ratio of two independent random variables, $$f_T(t) = \int_{-\infty}^{\infty} |x|f_Z(xt)f_X(x)dx $$ But $f_X(x) = 0$ on $(-\infty, 0)$ because $X$ is a non-negative random variable, so we can drop the absolute value and reduce the integral to $$f_T(t) = \int_{0}^{\infty} xf_Z(xt)f_X(x)dx $$ $$ = \int_{0}^{\infty} x \frac{1}{\sqrt{2\pi}}\exp \Big \{{-\frac{(xt)^2}{2}}\Big\}\frac {2^{1-\frac n2}}{\Gamma\left(\frac {n}{2}\right)} n^{\frac n2}x^{n-1} \exp\Big \{{-\frac {n}{2}x^2} \Big\}dx $$ $$ = \frac{1}{\sqrt{2\pi}}\frac {2^{1-\frac n2}}{\Gamma\left(\frac {n}{2}\right)} n^{\frac n2}\int_{0}^{\infty} x^n \exp \Big \{-\frac 12 (n+t^2) x^2\Big\} dx \tag{3}$$

The integrand in $(3)$ looks promising to be transformed into a Gamma density function. The limits of integration are correct, so we need to manipulate the integrand into a Gamma density without changing them. Define the variable $$m \equiv x^2 \Rightarrow dm = 2xdx \Rightarrow dx = \frac {dm}{2x}, \; x = m^{\frac 12}$$ Making the substitution in the integrand we have $$I_3=\int_{0}^{\infty} x^n \exp \Big \{-\frac 12 (n+t^2) m\Big\} \frac {dm}{2x} \\ = \frac 12\int_{0}^{\infty} m^{\frac {n-1}{2}} \exp \Big \{-\frac 12 (n+t^2) m\Big \} dm \tag{4}$$ The Gamma density can be written $$ \operatorname{Gamma}(m;k,\theta) = \frac {m^{k-1} \exp\Big\{-\frac{m}{\theta}\Big \}}{\theta^k\Gamma(k)}$$ Matching coefficients, we must have $$k-1 = \frac {n-1}{2} \Rightarrow k^* = \frac {n+1}{2}, \qquad \frac 1\theta =\frac 12 (n+t^2) \Rightarrow \theta^* = \frac 2 {(n+t^2)} $$ For these values of $k^*$ and $\theta^*$ the terms in the integrand involving the variable are the kernel of a Gamma density. So if we divide the integrand by $(\theta^*)^{k^*}\Gamma(k^*)$ and multiply outside the integral by the same magnitude, the integral becomes that of a Gamma density over its full support and therefore equals unity.
Therefore we have arrived at $$I_3 = \frac12(\theta^*)^{k^*}\Gamma(k^*) = \frac12 \Big (\frac 2 {n+t^2}\Big ) ^{\frac {n+1}{2}}\Gamma\left(\frac {n+1}{2}\right) = 2^ {\frac {n-1}{2}}n^{-\frac {n+1}{2}}\Gamma\left(\frac {n+1}{2}\right)\left(1+\frac {t^2}{n}\right)^{-\frac 12 (n+1)} $$ Inserting the above into eq. $(3)$ we get $$f_T(t) = \frac{1}{\sqrt{2\pi}}\frac {2^{1-\frac n2}}{\Gamma\left(\frac {n}{2}\right)} n^{\frac n2}2^ {\frac {n-1}{2}}n^{-\frac {n+1}{2}}\Gamma\left(\frac {n+1}{2}\right)\left(1+\frac {t^2}{n}\right)^{-\frac 12 (n+1)}$$ $$=\frac{\Gamma[(n+1)/2]}{\sqrt{n\pi}\,\Gamma(n/2)}\left(1+\frac {t^2}{n}\right)^{-\frac 12 (n+1)}$$ ...which is the density function of the Student's t-distribution with $n$ degrees of freedom.

Alecos Papadopoulos

Although E. S. Pearson didn't like it, Fisher's original argument was geometric, simple, convincing, and rigorous. It relies on a small number of intuitive and easily established facts. They are easily visualized when $s=1$ or $s=2$, where the geometry can be visualized in two or three dimensions. In effect, it amounts to using cylindrical coordinates in $\mathbb{R}^s\times\mathbb{R}$ to analyze $s+1$ iid Normal variables.

1. $s+1$ independent and identically distributed Normal variates $X_1, \ldots, X_{s+1}$ are spherically symmetrical. This means that the radial projection of the point $(X_1, \ldots, X_{s+1})$ onto the unit sphere $S^s \subset \mathbb{R}^{s+1}$ has a uniform distribution on $S^s$.

2. A $\chi^2(s)$ distribution is that of the sum of squares of $s$ independent standard Normal variates.

3. Thus, setting $Z=X_{s+1}$ and $W = X_1^2 + \cdots + X_s^2$, the ratio $Z/\sqrt{W}$ is the tangent of the latitude $\theta$ of the point $(X_1, \ldots, X_s, X_{s+1})$ in $\mathbb{R}^{s+1}$.

4. $\tan\theta$ is unchanged by radial projection onto $S^s$.

5. The set determined by all points of latitude $\theta$ on $S^s$ is an $s-1$ dimensional sphere of radius $\cos \theta$.
Its $s-1$ dimensional measure therefore is proportional to $$\cos^{s-1}\theta = (1 + \tan^2\theta)^{-(s-1)/2}.$$

6. The differential element is $\mathrm{d}(\tan\theta) = \cos^{-2}\theta \,\mathrm{d}\theta = (1 + \tan^2\theta) \,\mathrm{d}\theta$.

7. Writing $t = Z/\sqrt{W/s} = \sqrt{s}\tan\theta$ gives $\tan\theta = t/\sqrt{s}$, whence $$1+t^2/s = 1+\tan^2\theta$$ and $$\mathrm{d}t = \sqrt{s}\,\mathrm{d}\tan\theta = \sqrt{s}(1+\tan^2\theta)\,\mathrm{d}\theta.$$ Together these equations imply $$\mathrm{d}\theta = \frac{1}{\sqrt{s}} \left(1+t^2/s\right)^{-1}\mathrm{d}t.$$ Incorporating the factor of $1/\sqrt{s}$ into a normalizing constant $C(s)$ shows the density of $t$ is proportional to $$(1 + \tan^2\theta)^{-(s-1)/2}\,\mathrm{d}\theta = (1 + t^2/s)^{-(s-1)/2}\ (1 + t^2/s)^{-1}\,\mathrm{d}t = (1 + t^2/s)^{-(s+1)/2}\,\mathrm{d}t.$$ That is the Student t density.

The figure depicts the upper hemisphere (with $Z \ge 0$) of $S^s$ in $\mathbb{R}^{s+1}$. The crossed axes span the $W$-hyperplane. The black dots are part of a random sample of an $s+1$-variate standard Normal distribution: they are the values projecting to a given constant latitude $\theta$, shown as the yellow band. The density of these dots is proportional to the $s-1$-dimensional volume of that band, which itself is an $S^{s-1}$ of radius $\cos\theta$. The cone over that band is drawn to terminate at a height of $\tan \theta$. Up to a factor of $\sqrt{s}$, the Student t distribution with $s$ degrees of freedom is the distribution of this height, as weighted by the measure of the yellow band, upon normalizing the area of the unit sphere $S^s$ to unity.
Incidentally, the normalizing constant must be $1/\sqrt{s}$ (as previously mentioned) times the relative volumes of the spheres, $$\eqalign{ C(s) &= \frac{1}{\sqrt{s}} \frac{|S^{s-1}|}{|S^s|} = \frac{1}{\sqrt{s}} \frac{s \pi^{s/2} \Gamma(\frac{s+1}{2} + 1)}{(s+1)\pi^{(s+1)/2} \Gamma(\frac{s}{2}+1)} \\ &=\frac{1}{\sqrt{s}} \frac{s \pi^{s/2}\, \frac{s+1}{2}\,\Gamma(\frac{s+1}{2})}{(s+1)\pi^{(s+1)/2}\, \frac{s}{2}\,\Gamma(\frac{s}{2})} \\ &= \frac{\Gamma(\frac{s+1}{2})}{\sqrt{s\pi}\Gamma(\frac{s}{2})}. }$$ The final expression, although conventional, slightly disguises the beautifully simple initial expression, which clearly reveals the meaning of $C(s)$.

Fisher explained this derivation to W. S. Gosset (the original "Student") in a letter. Gosset attempted to publish it, giving Fisher full credit, but Pearson rejected the paper. Fisher's method, as applied to the substantially similar but more difficult problem of finding the distribution of a sample correlation coefficient, was eventually published.

R. A. Fisher, Frequency Distribution of the Values of the Correlation Coefficient in Samples from an Indefinitely Large Population. Biometrika Vol. 10, No. 4 (May, 1915), pp. 507–521. Available on the Web at https://stat.duke.edu/courses/Spring05/sta215/lec/Fish1915.pdf (and at many other places via searching, once this link disappears).

Joan Fisher Box, Gosset, Fisher, and the t Distribution. The American Statistician, Vol. 35, No. 2 (May, 1981), pp. 61–66. Available on the Web at http://social.rollins.edu/wpsites/bio342spr13/files/2015/03/Studentttest.pdf.

E. L. Lehmann, Fisher, Neyman, and the Creation of Classical Statistics. Springer (2011), Chapter 2.

whuber♦

$\begingroup$ This is a fantastic proof! I sincerely hope you find this message, although it has been several years now. In the sixth step of this proof, I believe there is an error: $\cos^{-2}(\theta) = 1+\tan^2(\theta)$, not its inverse. Praying there is an easy fix?
$\endgroup$ – Math Enthusiast Dec 23 '19 at 5:27

$\begingroup$ @Math Thank you for your remarks. I don't find any error at step 6. Perhaps you are trying to read "$\cos^{-2}(\theta)$" (which means the $-2$ power of $\cos(\theta)$) as if it meant "$\left(\operatorname{ArcCos}(\theta)\right)^{2}$"? $\endgroup$ – whuber♦ Dec 23 '19 at 13:42

$\begingroup$ I used the simple identity $\sec^{2}\theta = \tan^{2}\theta + 1$ to deduce that $\cos\theta=(\tan^{2}\theta+1)^{-1/2}$ in Line 5. But by this same reasoning in Line 6, $\cos^{-2}\theta = \sec^{2}\theta = \tan^{2}\theta + 1$. This conflicts with the claim that the differential element is equal to $(\tan^{2}\theta + 1)^{-1}$. $\endgroup$ – Math Enthusiast Dec 23 '19 at 15:55

$\begingroup$ @Math Thank you--you're right, of course. I have edited points (6) and (7) to correct the algebra. $\endgroup$ – whuber♦ Dec 23 '19 at 17:24

$\begingroup$ Whew, what a relief! Happy holidays to you $\endgroup$ – Math Enthusiast Dec 23 '19 at 17:43

I would try a change of variables. Set $Y=\frac{Z}{\sqrt{\frac{W}{s}}}$ and $X=Z$, for example, so that $Z=X$ and $W=\frac{sX^2}{Y^2}$. Then $f_{X,Y}(x,y)=f_{Z,W}(x,\frac{sx^2}{y^2})|\det(J)|$, where $J$ is the Jacobian matrix of the map from $(X,Y)$ to $(Z,W)$. Then you can integrate $x$ out from the joint density. $\frac{\partial Z}{\partial X}=1$, $\frac{\partial Z}{\partial Y}=0$, $\frac{\partial W}{\partial X}=\frac{2sX}{Y^2}$, and $\frac{\partial W}{\partial Y}=\frac{-2sX^2}{Y^3}$. $$ J= \begin{pmatrix} 1&0\\ *&\frac{-2sX^2}{Y^3} \end{pmatrix} $$ So $|\det(J)|=\frac{2sx^2}{|y|^3}$.

I just took a look at Elements of Distribution Theory by Thomas A. Severini; there, they take $X=W$, and integrating things out becomes easier using properties of a Gamma distribution. If I use $X=Z$, I would probably need to complete squares. But I don't want to do the calculation.

ztyh

$\begingroup$ I did not downvote you, in fact I just upvoted you.
But I think maybe the downvote arrived before your edit. $\endgroup$ – Monolite May 12 '15 at 1:58

$\begingroup$ Sorry about that, I will be careful from now on. $\endgroup$ – ztyh May 12 '15 at 12:46
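As a complement to the analytic derivations above, the claim is easy to check by simulation. The following is a standard-library-only sketch (the sample size, seed, and tolerances are arbitrary choices, not from the thread): it simulates $T = Z/\sqrt{W/s}$ and compares its moments with those of a $t_s$ distribution (mean $0$, variance $s/(s-2)$), and it also verifies numerically that the derived density with normalizing constant $C(s) = \Gamma(\frac{s+1}{2})/(\sqrt{s\pi}\,\Gamma(\frac s2))$ integrates to one.

```python
# Monte Carlo sanity check: if Z ~ N(0,1) and W ~ chi^2(s) are independent,
# then T = Z / sqrt(W/s) is t-distributed with s degrees of freedom, so it
# has mean 0 and variance s/(s-2) for s > 2.
import math
import random

random.seed(12345)

s = 8          # degrees of freedom (s > 4, so the variance estimate is stable)
n = 100_000    # number of simulated draws

samples = []
for _ in range(n):
    z = random.gauss(0.0, 1.0)
    # A chi^2(s) variate is a sum of s squared independent standard normals.
    w = sum(random.gauss(0.0, 1.0) ** 2 for _ in range(s))
    samples.append(z / math.sqrt(w / s))

mean = sum(samples) / n
var = sum((x - mean) ** 2 for x in samples) / n

# The derived density integrates to 1: check the normalizing constant
# C(s) = Gamma((s+1)/2) / (sqrt(s*pi) * Gamma(s/2)) with a trapezoidal rule
# (the tails beyond |t| = 100 contribute a negligible amount).
C = math.gamma((s + 1) / 2) / (math.sqrt(s * math.pi) * math.gamma(s / 2))

def density(t):
    return C * (1.0 + t * t / s) ** (-(s + 1) / 2)

h = 0.01
grid = [-100.0 + i * h for i in range(20001)]
integral = h * (sum(density(t) for t in grid)
                - 0.5 * (density(grid[0]) + density(grid[-1])))

print(round(mean, 3), round(var, 3), round(integral, 5))
```

With $s = 8$ the simulated variance should land near $8/6 \approx 1.333$, and the quadrature of the density should be $1$ to several decimal places.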
Second-order cone programming

A second-order cone program (SOCP) is a convex optimization problem of the form

minimize $\ f^{T}x\ $
subject to $\lVert A_{i}x+b_{i}\rVert _{2}\leq c_{i}^{T}x+d_{i},\quad i=1,\dots ,m$
$Fx=g\ $

where the problem parameters are $f\in \mathbb {R} ^{n},\ A_{i}\in \mathbb {R} ^{{n_{i}}\times n},\ b_{i}\in \mathbb {R} ^{n_{i}},\ c_{i}\in \mathbb {R} ^{n},\ d_{i}\in \mathbb {R} ,\ F\in \mathbb {R} ^{p\times n}$, and $g\in \mathbb {R} ^{p}$. Here $x\in \mathbb {R} ^{n}$ is the optimization variable, $\lVert x\rVert _{2}$ is the Euclidean norm, and $^{T}$ indicates transpose.[1] The "second-order cone" in SOCP arises from the constraints, which are equivalent to requiring the affine function $(A_{i}x+b_{i},c_{i}^{T}x+d_{i})$ to lie in the second-order cone in $\mathbb {R} ^{n_{i}+1}$.[1]

SOCPs can be solved by interior-point methods[2] and, in general, can be solved more efficiently than semidefinite programming (SDP) problems.[3] Some engineering applications of SOCP include filter design, antenna array weight design, truss design, and grasping force optimization in robotics.[4] Applications in quantitative finance include portfolio optimization; some market impact constraints, because they are not linear, cannot be solved by quadratic programming but can be formulated as SOCP problems.[5][6][7]

Second-order cone

The standard or unit second-order cone of dimension $n+1$ is defined as ${\mathcal {C}}_{n+1}=\left\{{\begin{bmatrix}x\\t\end{bmatrix}}{\Bigg |}x\in \mathbb {R} ^{n},t\in \mathbb {R} ,||x||_{2}\leq t\right\}$.

The second-order cone is also known as the quadratic cone, ice-cream cone, or Lorentz cone. The second-order cone in $\mathbb {R} ^{3}$ is $\left\{(x,y,z){\Big |}{\sqrt {x^{2}+y^{2}}}\leq z\right\}$.
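The constraint form $\lVert A_{i}x+b_{i}\rVert _{2}\leq c_{i}^{T}x+d_{i}$ can be made concrete with a small feasibility checker. This is a plain-Python sketch (not from the article; the tiny problem data are invented for illustration):

```python
# Feasibility check for SOCP constraints ||A_i x + b_i||_2 <= c_i^T x + d_i
# (plus optional equalities F x = g), written with plain Python lists.
import math

def norm2(v):
    return math.sqrt(sum(vi * vi for vi in v))

def matvec(A, x):
    return [sum(a * xi for a, xi in zip(row, x)) for row in A]

def soc_feasible(x, cones, F=None, g=None, tol=1e-9):
    """cones: list of tuples (A, b, c, d), one per second-order cone
    constraint ||A x + b||_2 <= c^T x + d."""
    for A, b, c, d in cones:
        lhs = norm2([u + v for u, v in zip(matvec(A, x), b)])
        rhs = sum(ci * xi for ci, xi in zip(c, x)) + d
        if lhs > rhs + tol:
            return False
    if F is not None:
        if any(abs(r - gi) > tol for r, gi in zip(matvec(F, x), g)):
            return False
    return True

# A single cone constraint in R^2 encoding ||x||_2 <= 2:
# A = identity, b = 0, c = 0, d = 2.
cone = ([[1.0, 0.0], [0.0, 1.0]], [0.0, 0.0], [0.0, 0.0], 2.0)
inside = soc_feasible([1.0, 1.0], [cone])   # ||(1,1)|| ~ 1.414 <= 2
outside = soc_feasible([2.0, 2.0], [cone])  # ||(2,2)|| ~ 2.828 >  2
```

An actual SOCP solver would optimize $f^{T}x$ over this feasible set; the checker only illustrates how one data tuple $(A_i, b_i, c_i, d_i)$ encodes one cone constraint.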
The set of points satisfying a second-order cone constraint is the inverse image of the unit second-order cone under an affine mapping: $\lVert A_{i}x+b_{i}\rVert _{2}\leq c_{i}^{T}x+d_{i}\Leftrightarrow {\begin{bmatrix}A_{i}\\c_{i}^{T}\end{bmatrix}}x+{\begin{bmatrix}b_{i}\\d_{i}\end{bmatrix}}\in {\mathcal {C}}_{n_{i}+1}$ and hence is convex.

The second-order cone can be embedded in the cone of positive semidefinite matrices since $||x||\leq t\Leftrightarrow {\begin{bmatrix}tI&x\\x^{T}&t\end{bmatrix}}\succcurlyeq 0,$ i.e., a second-order cone constraint is equivalent to a linear matrix inequality (here $M\succcurlyeq 0$ means $M$ is a positive semidefinite matrix). Similarly, we also have $\lVert A_{i}x+b_{i}\rVert _{2}\leq c_{i}^{T}x+d_{i}\Leftrightarrow {\begin{bmatrix}(c_{i}^{T}x+d_{i})I&A_{i}x+b_{i}\\(A_{i}x+b_{i})^{T}&c_{i}^{T}x+d_{i}\end{bmatrix}}\succcurlyeq 0$.

Relation with other optimization problems

When $A_{i}=0$ for $i=1,\dots ,m$, the SOCP reduces to a linear program. When $c_{i}=0$ for $i=1,\dots ,m$, the SOCP is equivalent to a convex quadratically constrained linear program.
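The LMI embedding above can be probed numerically. The sketch below (standard library only; the particular $x$, $t$, and witness vector are choices made here, not from the article) evaluates the quadratic form $v^{T}Mv$ of $M={\begin{bmatrix}tI&x\\x^{T}&t\end{bmatrix}}$: when $\lVert x\rVert _{2}\leq t$ the form is nonnegative everywhere, and when $t<\lVert x\rVert _{2}$ the vector $v=(x/\lVert x\rVert ,-1)$ gives $v^{T}Mv=2(t-\lVert x\rVert )<0$, certifying that $M$ is not positive semidefinite.

```python
# The "arrow" matrix M = [[t*I, x], [x^T, t]] from the LMI above is positive
# semidefinite exactly when ||x||_2 <= t; probe the quadratic form v^T M v.
import math
import random

def arrow_form(x, t, v):
    """v = (u, a) with len(u) == len(x); returns v^T M v for
    M = [[t*I, x], [x^T, t]]."""
    u, a = v[:-1], v[-1]
    return (t * sum(ui * ui for ui in u)
            + 2.0 * a * sum(xi * ui for xi, ui in zip(x, u))
            + t * a * a)

random.seed(0)
x = [3.0, 4.0]          # ||x||_2 = 5

# Case t = 6 >= ||x||: every sampled quadratic-form value is nonnegative.
nonneg = all(
    arrow_form(x, 6.0, [random.uniform(-1, 1) for _ in range(3)]) >= -1e-12
    for _ in range(1000))

# Case t = 4 < ||x||: the witness v = (x/||x||, -1) yields
# v^T M v = 2*(t - ||x||) = -2, so M is not PSD.
nx = math.sqrt(sum(xi * xi for xi in x))
witness = [xi / nx for xi in x] + [-1.0]
neg_value = arrow_form(x, 4.0, witness)
```

Random sampling of course cannot prove positive semidefiniteness; it only illustrates the equivalence, while the explicit witness gives an exact certificate in the infeasible case.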
Convex quadratically constrained quadratic programs can also be formulated as SOCPs by reformulating the objective function as a constraint.[4] Semidefinite programming subsumes SOCPs, as the SOCP constraints can be written as linear matrix inequalities (LMI) and can be reformulated as an instance of a semidefinite program.[4] The converse, however, is not valid: there are positive semidefinite cones that do not admit any second-order cone representation.[3] In fact, while any closed convex semialgebraic set in the plane can be written as a feasible region of a SOCP,[8] it is known that there exist convex semialgebraic sets that are not representable by SDPs, that is, there exist convex semialgebraic sets that cannot be written as a feasible region of a SDP.[9]

Examples

Quadratic constraint

Consider a convex quadratic constraint of the form $x^{T}Ax+b^{T}x+c\leq 0.$ This is equivalent to the SOCP constraint $\lVert A^{1/2}x+{\frac {1}{2}}A^{-1/2}b\rVert \leq \left({\frac {1}{4}}b^{T}A^{-1}b-c\right)^{\frac {1}{2}}$

Stochastic linear programming

Consider a stochastic linear program in inequality form

minimize $\ c^{T}x\ $
subject to $\mathbb {P} (a_{i}^{T}x\leq b_{i})\geq p,\quad i=1,\dots ,m$

where the parameters $a_{i}\ $ are independent Gaussian random vectors with mean ${\bar {a}}_{i}$ and covariance $\Sigma _{i}\ $ and $p\geq 0.5$. This problem can be expressed as the SOCP

minimize $\ c^{T}x\ $
subject to ${\bar {a}}_{i}^{T}x+\Phi ^{-1}(p)\lVert \Sigma _{i}^{1/2}x\rVert _{2}\leq b_{i},\quad i=1,\dots ,m$

where $\Phi ^{-1}(\cdot )\ $ is the inverse normal cumulative distribution function.[1]

Stochastic second-order cone programming

We refer to second-order cone programs as deterministic second-order cone programs since the data defining them are deterministic.
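The chance-constraint reformulation in the stochastic linear programming example above admits a quick Monte Carlo check in one dimension, where for scalar $a\sim N({\bar a},\sigma ^{2})$ and $x\geq 0$ the SOCP constraint reduces to ${\bar a}x+\Phi ^{-1}(p)\,\sigma x\leq b$. This sketch uses only the standard library ($\Phi ^{-1}$ via `statistics.NormalDist`); all numbers are invented for illustration:

```python
# One-dimensional illustration of the chance-constraint reformulation:
# for scalar a ~ N(abar, sigma^2) and x >= 0,
#   P(a*x <= b) >= p    iff    abar*x + Phi^{-1}(p)*sigma*x <= b.
import random
from statistics import NormalDist

random.seed(1)
abar, sigma, x, b, p = 1.0, 0.5, 2.0, 3.5, 0.9

# Deterministic SOCP-style check of the reformulated constraint.
reformulated_ok = abar * x + NormalDist().inv_cdf(p) * sigma * x <= b

# Direct Monte Carlo estimate of P(a*x <= b).
n = 100_000
hits = sum(random.gauss(abar, sigma) * x <= b for _ in range(n))
prob = hits / n
print(reformulated_ok, round(prob, 3))
```

Here the reformulated constraint holds ($2+\Phi ^{-1}(0.9)\cdot 0.5\cdot 2\approx 3.28\leq 3.5$), and the simulated probability agrees, coming out near $\Phi (1.5)\approx 0.933\geq 0.9$.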
Stochastic second-order cone programs are a class of optimization problems that are defined to handle uncertainty in the data defining deterministic second-order cone programs.[10]

Solvers and scripting (programming) languages

Name | License | Brief info
AMPL | commercial | An algebraic modeling language with SOCP support
Artelys Knitro | commercial |
CPLEX | commercial |
FICO Xpress | commercial |
Gurobi Optimizer | commercial |
MATLAB | commercial | The coneprog function solves SOCP problems[11] using an interior-point algorithm[12]
MOSEK | commercial | Parallel interior-point algorithm
NAG Numerical Library | commercial | General-purpose numerical library with SOCP solver

References

1. Boyd, Stephen; Vandenberghe, Lieven (2004). Convex Optimization (PDF). Cambridge University Press. ISBN 978-0-521-83378-3. Retrieved July 15, 2019.
2. Potra, Florian A.; Wright, Stephen J. (1 December 2000). "Interior-point methods". Journal of Computational and Applied Mathematics. 124 (1–2): 281–302. Bibcode:2000JCoAM.124..281P. doi:10.1016/S0377-0427(00)00433-7.
3. Fawzi, Hamza (2019). "On representing the positive semidefinite cone using the second-order cone". Mathematical Programming. 175 (1–2): 109–118. arXiv:1610.04901. doi:10.1007/s10107-018-1233-0. ISSN 0025-5610. S2CID 119324071.
4. Lobo, Miguel Sousa; Vandenberghe, Lieven; Boyd, Stephen; Lebret, Hervé (1998). "Applications of second-order cone programming". Linear Algebra and Its Applications. 284 (1–3): 193–228. doi:10.1016/S0024-3795(98)10032-0.
5. "Solving SOCP" (PDF).
6. "portfolio optimization" (PDF).
7. Li, Haksun (16 January 2022). Numerical Methods Using Java: For Data Science, Analysis, and Engineering. APress. pp. Chapter 10. ISBN 978-1484267967.
8. Scheiderer, Claus (2020-04-08). "Second-order cone representation for convex subsets of the plane". arXiv:2004.04196 [math.OC].
9. Scheiderer, Claus (2018). "Spectrahedral Shadows". SIAM Journal on Applied Algebra and Geometry. 2 (1): 26–44. doi:10.1137/17M1118981. ISSN 2470-6566.
10. Alzalg, Baha M. (2012-10-01).
"Stochastic second-order cone programming: Applications models". Applied Mathematical Modelling. 36 (10): 5122–5134. doi:10.1016/j.apm.2011.12.053. ISSN 0307-904X.
11. "Second-order cone programming solver - MATLAB coneprog". MathWorks. 2021-03-01. Retrieved 2021-07-15.
12. "Second-Order Cone Programming Algorithm - MATLAB & Simulink". MathWorks. 2021-03-01. Retrieved 2021-07-15.